Ubiquiti Unifi Networking Home Implementation and Test-drive


My previous network setup had served its purpose and it was time for a change. I had previously installed a Palo Alto Networks PA-220, with standard L2 switches hanging off of it to support my connectivity requirements. Since the licensing on my PA-220 had expired and I couldn't get it renewed for a decent price, and I was also having wireless issues with the old Linksys router I had turned into a bridged access point, I decided it was time to cut my teeth on Ubiquiti's Unifi networking systems.

I had seen Unifi at my work, where it was used for some small remote offices. It worked really well and seemed like great equipment. I picked up two Unifi nanoHD WAPs as well as the Unifi Dream Machine Pro. Everything installed quickly and wasn't too terrible to get done.

The UDM nestled above my Netgear 24-port switch

The UDM was the most troublesome to install and actually a little underwhelming. Come to find out, the ports basically accept all traffic with whatever 802.1Q tag is sent to them, so getting the trunk ports set up was easy. The part that let me down was the firewall and routing portion. If you have a static IP allocation from your ISP for internet-exposed services, you will not be able to keep using it in the current revision. The UDM has port forwarding, which allowed me to keep running some of my services, but there is no NAT table for you to modify, which is a big letdown. They really need to add the ability to create 1-to-1 NATs so that, if your ISP gives you a block of static IPs, you can map them to the specific services you host in your network.

The firewall configuration feels more like editing iptables than building a true firewall with next-gen capabilities, which I'm surprised they don't tie in. The UDM can identify traffic based on application and purpose, so why not make that part of the firewall configuration too? It relies on port-based configuration, which is 90s and early 2000s technology. It's 2021 now; this should be up to snuff by this point.

The deep packet inspection and threat management is pretty lame as well. The threat management is basically a geolocation blocking mechanism. It does have some IPS/IDS capabilities, but I don't see where you can review what signatures they have, modify those signatures as needed, or add your own. Where are they getting their signatures? What are they able to alert on? Really, I guess I can't complain too much for the price without any yearly renewal to keep support; I have to keep that in perspective. If you were looking for proxy capability to block certain categories of websites, you'll also be underwhelmed. It does block explicit, pornographic, and malicious domains, and it can force YouTube and search engines into Safe Mode, but it doesn't let you block other categories if you need to. Again, perspective: this only costs $379, so you probably aren't going to get huge gains in security, especially with this thing capable of high throughput even with all of the bells and whistles turned on.

I will say, a couple of things that surprised me and are genuinely helpful are the vulnerability scanner and the honeypot. It's nice to get some very basic scans of my hosts to know what they are and to have an inventory. Granted, it does not really provide a "vulnerability" list as I'm used to; it seems to be more of an enumeration scan than a vulnerability scan. The honeypot is interesting for seeing what is scanning your network. I haven't played with it enough yet to provide any useful feedback, but it is a great idea and I hope it turns out well. I don't know if the honeypot tries to mimic a specific OS or if it just listens for any and all network connections trying to hit it. We'll see...

All in all, I'm not entirely dissatisfied with the products. They provide the connectivity a remote office would need and give you a very basic level of security. For the price point, it's about what I expected. I did expect NAT'ing, and that was a huge letdown, but hopefully they will add the ability to modify the NAT tables in the future.

The WAPs, though, were very nice. They reminded me very much of the Cisco APs we installed at work back in 2012, but they are extremely well priced for what you get. Placement was easy, each came with a PoE injector, and they can be controlled by the UDM very easily.

Our bedroom AP installed and running!

The functionality I really loved from the UDM/Controller (really the controller; you don't need a UDM to get the controller software, which is free and can be installed on any Windows device) is that you can upload a drawing of your floorplan, add walls of differing types (brick, drywall, glass, etc.), and then place your APs on the map to get a rudimentary site-survey-style heatmap.

Heatmap of our house with the APs installed

The UDM does come with "AI" to control the access points so that you get optimal coverage. I haven't really experimented with that yet, so we'll see what they truly mean by the word "AI." A lot of vendors are using that term now just because it is the latest buzzword.

I will hopefully update you down the road on how well the Unifi network equipment works in a home environment.

IT Automation – The Good, the Bad, and the Awesome?


How many times have you been faced with the same boring process of onboarding a new user, or building a server just like the last hundred you built? And how did you feel the last time you wrote a script that did a repetitive task for you and saved you time? You might think these are a dichotomy: the drudgery of long, boring, repetitive tasks versus the satisfying knowledge that you now have more time on your hands for other things. I suggest they are just two points on the same path: one is recognizing a problem, and the other is the resolution. Automation, when used properly, is an efficient and cost-effective solution to tasks that are repetitive and ongoing.

Automation has gotten a bad rap in recent years. Some people see it as a tool to start the outsourcing process. Others think it will erode personnel's knowledge of how to perform these tasks themselves. These can be valid points at times, but I would bet that almost all of us, at some point in our careers, have found a process that was painful, slow, error prone, and repetitive and turned it into a scripted process that saved time, money, and especially our own sanity.

Our time is precious in our chosen careers, and we are constantly being asked to do more with less. When I first heard that idea I thought, "How could you possibly do more with less?" So, being the young, naive person that I was at the time, I just worked longer hours. A few years after I started down that path, I hit a wall: complete dissatisfaction with my job and where I felt I was going. That's when I started working on automation. They were just simple scripts at first, but those scripts let me cut my time to deploy infrastructure in half and made my work more consistent.

One of these scripts was a simple PHP script that took the variables I gave it and returned a drop-in configuration that followed the standards for each of our devices and regions. I would input things such as the device name, the region in which the device was to be deployed, the interfaces and their IP configuration, and more. From that it would give me a text file with our standards applied to each interface and to the global configuration options.
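
That PHP script is long gone, but the pattern is easy to sketch. Below is a minimal, hypothetical Python equivalent; the template text, region standards, and field names are illustrative placeholders rather than our actual standards.

# Hypothetical sketch of a drop-in configuration generator.
# The templates, regions, and values below are placeholders for illustration.
from string import Template

GLOBAL_TEMPLATE = Template(
    "hostname $hostname\n"
    "ntp server $ntp_server\n"
    "logging host $syslog_host\n"
)

INTERFACE_TEMPLATE = Template(
    "interface $name\n"
    " description $description\n"
    " ip address $ip $mask\n"
    " no shutdown\n"
)

# Per-region standards (placeholder values).
REGION_STANDARDS = {
    "us-east": {"ntp_server": "10.0.0.10", "syslog_host": "10.0.0.20"},
    "eu-west": {"ntp_server": "10.1.0.10", "syslog_host": "10.1.0.20"},
}

def build_config(hostname, region, interfaces):
    """Return a drop-in configuration with the region standards applied."""
    standards = REGION_STANDARDS[region]
    parts = [GLOBAL_TEMPLATE.substitute(hostname=hostname, **standards)]
    for intf in interfaces:
        parts.append(INTERFACE_TEMPLATE.substitute(**intf))
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_config(
        hostname="edge-router-01",
        region="us-east",
        interfaces=[{"name": "GigabitEthernet0/0", "description": "Uplink",
                     "ip": "192.0.2.1", "mask": "255.255.255.252"}],
    ))  # In practice the output would be written to a text file.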

Another script that simplified my life was our firewall rule automation script. This two-part script would first be run during the day against all of the firewalls that had changes. It would verify that the rule base compiled without any errors, and it would then schedule the second part of the script to automatically push the rules at a specified time that night. It would email the results of the policy push to our network operations center, so that if there were any errors they could begin troubleshooting and make any necessary call-outs. This saved our engineers an hour each night on policy installations.
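
The management API we drove was vendor specific, so the sketch below only shows the two-stage pattern in Python. compile_policy, push_policy, and the mail relay address are placeholders for whatever firewall management and SMTP systems you actually run.

# Hypothetical two-stage push: verify compiles during the day, push overnight, email the NOC.
# compile_policy(), push_policy(), and the addresses below are placeholders.
import sched
import smtplib
import time
from email.message import EmailMessage

def compile_policy(firewall):
    """Placeholder: ask the management server to verify the rule base compiles cleanly."""
    print(f"Compiling policy for {firewall}...")
    return True  # pretend the compile succeeded

def push_policy(firewall):
    """Placeholder: install the compiled policy and return a result summary."""
    return f"policy installed on {firewall} without errors"

def email_noc(subject, body):
    msg = EmailMessage()
    msg["From"] = "automation@example.com"
    msg["To"] = "noc@example.com"
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:  # placeholder mail relay
        smtp.send_message(msg)

def stage_two(firewalls):
    """Overnight run: push each policy and report the results to the NOC."""
    results = [push_policy(fw) for fw in firewalls]
    email_noc("Nightly policy push results", "\n".join(results))

def stage_one(firewalls, push_at):
    """Daytime run: verify the compiles, then schedule the overnight push."""
    ready = [fw for fw in firewalls if compile_policy(fw)]
    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enterabs(push_at, 1, stage_two, argument=(ready,))
    scheduler.run()  # blocks until the scheduled push time

if __name__ == "__main__":
    stage_one(["fw-branch-01", "fw-datacenter-02"], push_at=time.time() + 8 * 3600)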

Each business and professional has different needs for automation. Some of the benefits include, but are not limited to, the following:

  • Repeatable Outcomes
  • Process Standardization
  • Faster Delivery
  • Increased Customer Satisfaction
  • Fewer Errors
  • Reduced Costs
  • Increased Employee Availability

Now, not everything can be automated. Automation takes time for process discovery, knowledge of how to best perform the task, and the skills to program it. Before you can automate a task, the process needs to be clearly defined, and I don't mean at a high level: every step needs a definition of what is expected to be done and what the possible outcomes are. For example, let's go back to my new user example. Every new hire needs certain things: a laptop/desktop, a network logon, access to the appropriate systems, an email account, remote access VPN, etc. This is a clearly defined process that can be automated. Automating it allows a user to be functional within a day, possibly within a few hours, with most of the remaining work being the paper trail for auditing purposes.
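
To illustrate what "defined at each and every step" looks like in practice, here is a hypothetical Python sketch of an onboarding pipeline. The step functions stand in for your directory, mail, and VPN back ends and simply record an expected outcome.

# Hypothetical onboarding pipeline: every step is explicit, ordered, and reports an outcome.
# The step functions are placeholders for real directory, mail, and VPN back ends.
from dataclasses import dataclass, field

@dataclass
class NewHire:
    name: str
    department: str
    completed_steps: list = field(default_factory=list)

def create_network_logon(hire):
    return f"created network logon for {hire.name}"

def create_mailbox(hire):
    return f"created mailbox for {hire.name}"

def grant_system_access(hire):
    return f"granted {hire.department} system access to {hire.name}"

def enable_vpn(hire):
    return f"enabled remote access VPN for {hire.name}"

# The clearly defined process: each step and its expected outcome is spelled out up front.
ONBOARDING_STEPS = [create_network_logon, create_mailbox, grant_system_access, enable_vpn]

def onboard(hire):
    for step in ONBOARDING_STEPS:
        outcome = step(hire)
        hire.completed_steps.append(outcome)  # the audit/paper trail
        print(outcome)

if __name__ == "__main__":
    onboard(NewHire(name="Jane Doe", department="Engineering"))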

Cost is the biggest reason for automating and should be the first consideration when you are looking into it. The benefits need to outweigh the costs of implementing and maintaining these processes. Maintenance can be costly if you are constantly changing your processes, because it requires keeping people who are skilled enough to understand the process and program it so that it functions properly. On top of that, automation requires a higher level of quality assurance testing to ensure that the outcomes are what you're expecting. Remember that when you automate a task, most people expect results as good as or better than those of their human counterparts.

Do I think automation is awesome? Do I think automation is the miracle cure for IT woes? Yes, no, and not really. It depends on the situation, as automation can just add complexity if not done properly. But with a proper understanding and investigation of what you can automate in your infrastructure, you can help your company save money, respond with agility to changing markets, and improve your own work experience. Doing so will let you expand your horizons to the new and exciting things coming out in IT every day.

Home Network Expansion


As part of my role teaching at BYU-I, I teach an Introduction to Networking class. The first few weeks cover physical network architecture, design, and documentation. Long story short, this part of IT isn't talked about much and it's not an easy concept to grasp. Since I had just recently moved and was not satisfied with the current infrastructure in the house, I decided to document the process for the benefit of my students. Below are pictures of the process. Enjoy!

The current status of things… Not pretty at all…
I had 8 existing cables and I needed to add another 8 to cover the areas I needed network connectivity. I decided to use a 24-port switch in case I expand in the future
The 3U wall-mounted bracket and the 24 port patch panel are in!
The equipment is rearranged and installed into the bracket. Had to keep the network up while I worked or I would have had an angry family 🙂
Closer picture of the actual bracket/rack.
The low voltage old-work single gang boxes I installed in my marked locations
Cat5e cable in bulk. Only needed 1000ft to get the extra runs done.
These were the previous cable hangers they used. Never seen these before but they worked pretty well!
These were some really cool zip ties with holes in them that you could nail up and loosely zip-tie wires together to suspend them. Pretty cool!
These were the “easy” keystone jacks. They were ok, but I wouldn’t say they were “easy”
Backside of the wall plate with the keystone snapped into place
One wall plate finished. Only two more to go.
All of the runs fit through the original installation hole luckily. 8 new runs done with 2 2-port plates and 1 4-port plate.
Punchdowns underway on the patch panel!
Final product!

I have three more runs that I need to finish, but those would go through a firewall next to the fireplace and I didn't feel like messing with that just yet. That's another 4-port plate I want to get done. The spot already has a single drop, but with a TV, Apple TV, Blu-ray player, and an Alexa all needing network connectivity there, I thought it would be best to give them individual runs. Right now I have a switch hanging off that drop to give them the connectivity they need, but I'd like to get rid of that switch and have just one switch for the entire house.

Cisco Express Forwarding Concepts


Cisco Express Forwarding (CEF) is a Cisco-proprietary technology that moves the forwarding decision from the CPU down to hardware Application-Specific Integrated Circuits (ASICs).  Programming these ASICs requires two tables of information: the Forwarding Information Base (FIB) and the adjacency table.

Construction starts in the control plane, where the IOS software collects all of the routes from locally configured static routes, connected routes, and dynamic routing protocols (EIGRP, OSPF, BGP, IS-IS, RIP).  This information is stored in a Routing Information Base (RIB) for each routing process on the device; an example of a RIB would be the OSPF database.  The control plane does not eliminate routes at this point.  The RIB contains all of the routes learned, regardless of whether or not they are the best route to the destination.

The Label Forwarding Information Base (LFIB) is the equivalent forwarding table for MPLS labels.  It is not used in every environment, but it is still an information source CEF uses to create the final table it makes its decisions from.

The router then goes through the selection process, comparing the associated route types and their administrative distances to decide which routes go into the Forwarding Information Base (FIB).  Each time a RIB is updated and a change needs to be reflected in the FIB, this update occurs.

At the same time that the routing processes are compiling their information and calculating which routes out of their RIBs are best, the device is also building an adjacency table of layer 2 address to interface mappings.  ARP is the most common example of this, with a MAC address being correlated to a specific physical interface.

With the two tables populated, CEF goes through each forwarding entry and ties it to an adjacency entry.  The result is installed in the actual CEF table, which is either kept in lower-level code on virtualized routers or programmed into actual ASICs on certain hardware platforms.
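
To make the relationship between the two tables concrete, here is a toy Python illustration of a FIB lookup resolving to an adjacency entry. It is purely illustrative; the real structures are programmed into ASIC memory and are far more involved.

# Toy illustration of CEF's two tables: a FIB keyed by prefix and an adjacency
# table keyed by next hop. Real CEF entries live in ASIC memory, not Python dicts.
import ipaddress

# Adjacency table: next-hop IP -> (egress interface, rewrite MAC), e.g. learned via ARP.
ADJACENCY = {
    "192.0.2.1": ("GigabitEthernet0/0", "00:11:22:33:44:55"),
    "192.0.2.5": ("GigabitEthernet0/1", "66:77:88:99:aa:bb"),
}

# FIB: prefix -> next hop, built from the best routes selected out of the RIBs.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "192.0.2.1",
    ipaddress.ip_network("172.16.0.0/12"): "192.0.2.5",
    ipaddress.ip_network("0.0.0.0/0"): "192.0.2.1",
}

def forward(destination):
    """Longest-prefix match in the FIB, then resolve the adjacency for the L2 rewrite."""
    addr = ipaddress.ip_address(destination)
    matches = [prefix for prefix in FIB if addr in prefix]
    best = max(matches, key=lambda prefix: prefix.prefixlen)  # longest prefix wins
    next_hop = FIB[best]
    interface, mac = ADJACENCY[next_hop]
    return best, next_hop, interface, mac

print(forward("10.1.2.3"))  # -> 10.0.0.0/8 via 192.0.2.1 out GigabitEthernet0/0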

Some platforms are capable of performing distributed CEF.  This is useful in a multi-card chassis design like the Catalyst 6500 series: instead of sending packets to centralized CEF ASICs on the supervisor cards, each line card can make the forwarding decision locally, providing linear scaling with each line card installed in the chassis.

CEF supports more than just Ethernet.  It also supports High-Level Data Link Control (HDLC), tunnels, and PPP, to name a few of the more common encapsulations.

When Equal Cost Multi-Path (ECMP) routing is in effect and multiple routes to the same prefix are available, CEF provides two options for load balancing: per-destination and per-packet.  Each has its own strengths and weaknesses, which I'll discuss here.

Per-destination load balancing is the default behavior for CEF.  I know it is stated as "per-destination," but what it really looks at is the source and destination pairing.  This is the default so that all of the packets in a flow arrive in sequence, as some applications have a hard time recovering from out-of-order packets, and it is the most common configuration.  There are a few reasons you might want to change it, though.  One scenario is a high-bandwidth application between two hosts; this tends to cause a situation we call CEF polarization.  The symptom is that when you look at the route options, one path is far more heavily utilized than the others.  This happens because the CEF load-balancing algorithm only looks at the source and destination IPs, so if those do not change much for your traffic, you end up with unequal utilization of the paths.

A way to correct this is to include layer 4 information in the hashing decision.  This can make the polarization less noticeable if the traffic consists of multiple sessions between the same two hosts, since those sessions will at least use different source ports, and sometimes different destination ports depending on the application.

If that does not alleviate the issue, the other option is per-packet load balancing, which is just what it sounds like: CEF selects a different route for each packet using round-robin.  This gives a more even utilization of the paths, though it still will not be perfectly equal due to the variability in packet sizes.  But as I stated earlier, it creates a scenario where packets arrive at the destination out of order, and if the application does not tolerate that, it can cause you problems.
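
Here is a small Python illustration of the idea (not Cisco's actual hashing algorithm): hashing only the source/destination pair pins a host pair to one path, adding layer 4 ports lets separate sessions between the same hosts diverge, and per-packet round-robin spreads everything at the cost of packet ordering.

# Illustrative only: not the real CEF hash, just a demonstration of why hashing
# on the source/destination pair polarizes a single host pair onto one path.
from itertools import cycle

PATHS = ["path-A", "path-B"]

def per_destination(src, dst):
    """Hash on the source/destination pair: every packet of that pair takes one path."""
    return PATHS[hash((src, dst)) % len(PATHS)]

def with_layer4(src, dst, sport, dport):
    """Include ports in the hash: separate sessions between the same hosts can diverge."""
    return PATHS[hash((src, dst, sport, dport)) % len(PATHS)]

per_packet = cycle(PATHS)  # per-packet: alternate paths regardless of the flow

src, dst = "10.0.0.1", "10.0.0.2"
print(per_destination(src, dst))              # the same path for every packet of this pair
for sport in (49152, 49153, 49154):
    print(with_layer4(src, dst, sport, 443))  # sessions may land on different paths
print(next(per_packet), next(per_packet))     # packets alternate: path-A path-B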


Decryption with PFS – Palo Alto Firewall


I host a few TLS-encrypted websites at home, and as part of my recent lab testing I noticed that Palo Alto supports decryption of PFS-protected sessions. Open source tools I've run previously showed that I was constantly under a barrage of attacks on my web server. Decrypting this traffic and seeing what is happening under the cover of encryption would go a long way toward deterring and even preventing intrusions into my web server.

As I implemented the policy, I saw handshake failures during negotiation with the errors "decrypt-error" and "decrypt-unsupport-param," which weren't very helpful. The client browser (Chrome, in this case) gave "ERR_SSL_PROTOCOL_ERROR." I captured a tcpdump on the firewall and examined the handshake, which gave me little evidence at first as to what was going on. The only pieces of evidence were the cipher the server selected and the alert "Alert (Level: Fatal, Description: Handshake Failure)." I had a hunch something was wrong with the PFS decryption, so I modified my web server configuration to remove the PFS options. Sure enough, that allowed the firewall to decrypt the traffic, but I was still having sporadic issues accessing the sites.

Just to make sure I was sane, I double-checked the Palo Alto Perfect Forward Secrecy (PFS) for Inbound SSL Sessions documentation to confirm I had everything set properly. I was running 9.0.3 code on the firewall, and my decryption profile had the DHE and ECDHE key exchange algorithms selected.

I then ran through the basic Configure SSL Inbound Inspection documentation. I was running a layer 3 firewall, and the certificate was imported properly with the intermediate CA attached to the certificate chain. When I checked my decryption rules, though, I found my misconfiguration. My traffic is hidden behind a single public IP address (my internet service provider only gives me the one), so I perform port translation inbound. I found the following article, which states that I should create a separate custom URL category for each website I host on this server and use that in my decryption rule. I will say, it was a little ambiguous as to whether I should use the IP address in the destination or not, as they have a note stating:

“if you are hosting multiple servers on the same machine 1.2.3.4 (same IP), then make sure that the SSL decryption policies are not configured with IP address as match condition.”

But the examples below that note still list the destination IP address. I tried it both ways just to test, with no difference either way. I will say, I had the same error they listed in the article, and it went away after putting the custom URL category for each website in the decryption rule, which resolved my issue of the sites being sporadically unavailable.

Now I had just one more topic to handle: re-enabling PFS for my websites. I decided to validate that the cipher the server selected was supported by the firewall. TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 was the suite selected, and according to the PAN-OS 8.1 Decryption Cipher Suites documentation, it was supported. I double-checked my imported certificate against the certificate and CA bundle I was given by GoDaddy; the CA bundle was different from what I had originally attached to the certificate, so I replaced the certificate and committed the change. This time I got a different error from the client browser, ERR_CONNECTION_CLOSED, with the same error messages in the traffic log. I tried removing the decryption profile from the rule with the same effect.
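
If you want to confirm what a server actually negotiates without wading through a packet capture, a few lines of Python will print the protocol version and cipher suite. This is a generic client-side check, not a Palo Alto tool; swap in your own hostname.

# Quick check of the TLS version and cipher suite a server negotiates.
# Replace the hostname with your own site.
import socket
import ssl

HOST, PORT = "www.example.com", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        name, version, bits = tls.cipher()
        print("Protocol:", tls.version())           # e.g. TLSv1.2
        print("Cipher:  ", name, f"({bits} bits)")  # e.g. ECDHE-RSA-AES256-GCM-SHA384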

After reviewing the output of “show counter global filter delta yes” I found the following entries which did not look promising:

name                                         value     rate severity  category  aspect    description
 proxy_reverse_unsupported_protocol           38        4    warn      proxy     pktproc   The number of sessions failed for reverse proxy because of ssl protocol
 proxy_decrypt_unsupport_param_overall        38        4    info      proxy     pktproc   Overall number of decrypted packet unsupport param failure

Searching for those specific errors, I came across this KB article from Palo Alto Networks entitled How to Identify Root Cause for SSL Decryption Failure Issues. It appears that even though the Palo Alto supported cipher list for 9.0 includes the specific cipher I was using, the firewall apparently does not fully support it. In this packet capture, you can see where the firewall sends a TLS fatal alert to both the client and the server to terminate the session. So if you are looking to use only modern cryptographic algorithms, it appears the Palo doesn't quite support that yet.

Firewall resetting the session due to unsupported encryption ciphers
Support list from Palo, and what was negotiated during the TLS session
Final log result showing the decrypt-unsupport-param message

Palo Alto – Security Profiles


While standing up my lab environment and getting the basics configured, I was perplexed by the "Profile" feature that can be attached to a policy rule. You can either select a security profile group, configure one on the spot (which doesn't scale well if you plan on applying the same protections to multiple rules across multiple policies), or leave it as none. Of course, since I was just trying to get the lab up, I left it as none and decided I would come back to it later.

What I discovered is that these security profiles give us a great tool for creating custom protection policies per scenario. Iron Skillet covers three basic scenarios to start with: an outbound, an inbound, and an internal protection profile. This lets us take a risk-based approach to the protections we apply based on the purpose of the rule. I quickly enabled an inbound rule covering my web server with DDoS and IPS protections alongside SSL Inbound Inspection, which lets me virtually patch the server at the firewall when there isn't yet a desirable patch for it. I also wrote specific protections for outbound traffic, with antivirus scanning, URL filtering, and anti-spyware scanning so that my lab doesn't become infected, and a rule covering internal access that performs basic antivirus and anti-spyware checks but allows everything else.

You only want to apply these protection profiles to permitted traffic. Writing them for traffic you plan to drop won't do you any good, since that traffic bypasses the inspection engines and goes straight to the drop action within the firewall. As I stated earlier, you also want to create these as reusable profile groups with a defined purpose in the title. That gives you a scalable solution: modify the security profiles attached to the group, and the change automatically applies to every rule that uses it.

Each profile group can contain the following security profile types:

  • Antivirus
  • Anti-spyware
  • Vulnerability Protection (IPS)
  • URL Filtering
  • File Blocking
  • Data Filtering
  • WildFire Analysis

If you haven't already, check out Iron Skillet on GitHub to see what a starting configuration should look like, and sit down with your teams to discuss the scenarios you need to cover. My scenario was fine with the basic Iron Skillet configuration, but I could see third-party access, customer access, restricted data classification access, and more being possibilities depending on the network architecture.

Securing the Internet of Things – Why It's Important and How to Implement


Olan F. Hodges

Brigham Young University – Idaho

Author Note

Olan F. Hodges, Department of Computer Information Technology, Brigham Young University – Idaho


Abstract

With the reduction in price and increase in processing power of microcontrollers and silicon-based products, industries across the globe are beginning to innovate around these devices to improve their services and products.  With this rapid expansion of portable, cost-effective computing has come a myriad of problems regarding security and privacy.  The pressure to keep costs down and development time short, along with the lack of any best practices around development and lifecycle management, has led to the point where these devices are being manipulated, ransomed, and exploited for other malicious purposes.  These purposes range from ransoming the affected device, to acting as a proxy for bad actors' malicious payloads, to amassing these devices into a botnet army that can attack any internet-connected computer system in the world.  With proper research, accreditation, training, funding, and legislation, this problem can be solved.  This paper explores the benefits and risks of the Internet of Things (IoT) and how organizations, IT professionals, and governments alike can implement its use in a safe and successful manner.

Keywords:  Internet of Things (IoT), best practice, dark net, cloud computing, botnet, malware, ransomware, Trojan, constant glucose monitor (CGM), distributed denial of service (DDoS), domain name system (DNS), Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry (PCI), Command & Control (C2), Enterprise Resource Planning (ERP)

Securing the Internet of Things – Why It’s Important and How to Implement

Computer infrastructure has given mankind the capability to achieve goals beyond our wildest imaginations.  Computing has brought us the capability to successfully navigate space, manufacture items with tolerances down to nanometers, and bring information and education within reach of billions.  Computers work better in a group, otherwise known as a network.  Within this network, computers can share data and resources amongst themselves to provide better capabilities for us to harness.  The Internet of Things (IoT) is the next evolution of this giant network we call the Internet.  Maciej Kranz stated in a recent blog post on Cisco.com that “the energy and momentum that are building today around IoT are reminiscent of the early days of the Internet, when we were just beginning to realize its potential impact on business and society. We felt like we were changing the world.” (Kranz, 2016)

IoT Benefits

What is the big deal about having all of these devices connected to each other?  What benefits can we achieve by spending billions of dollars on development and infrastructure just so my toothbrush can talk to the internet?  We are just now beginning to learn the possibilities of this connected ecosystem of electronics and what it can do for us.  The cost and size of components have decreased over time to the point where even home hobbyists are buying IoT components and developing new IoT innovations in their own garages.  This opens up a new frontier of electronic devices and the benefits they can bring to our lives.  NCTA – The Internet & Television Association has put together an infographic showing the past and projected growth of the internet, which gives a small insight into how much innovation we should expect from this industry. [Figure 2]

Data Acquisition

Data acquisition was a main driver at the beginning of the IoT evolution.  Medical systems are being developed and tested that allow users to monitor their conditions and receive specialized alerts based on the status of their vital signs.  A specific instance of this is diabetes monitoring.  The company Insulin Angel has developed a simple sensor that can be attached to insulin medication to track its environmental conditions.  Insulin must be stored at a specific temperature to preserve the shelf life of the dosage.  The sensor tracks the ambient temperature of the environment in which the insulin is stored, allowing alerts to be sent to the patient, caregiver, parent, or medication provider and saving millions of dollars of insulin from being thrown away due to poor storage conditions. (Scorxton, 2015)

Along the same lines, the company Dexcom has created a constant glucose monitor (CGM) that can be implanted under the patient's skin, allowing it to constantly check the patient's glucose levels.  It reports back to a mobile phone or device the user carries, and from there to a cloud provider, so that doctors, caregivers, patients, and parents alike can view the data, receive alerts when glucose is too high or low, and react accordingly.  This is an immense step forward from the normal method of pricking the finger for blood on a scheduled basis, where the patient's glucose levels could swing wildly between measurements and leave them struggling to manage their disease.  With constant monitoring, the patient can keep their glucose level in check on a consistent basis, thus extending and improving their quality of life.

Process Improvement

The ability to improve the manufacturing process is another advantage of the IoT that corporations are trying to capitalize on.  Manufacturing automation has increased over the past thirty years. With the additional insight and tracking provided by these microcontrollers, corporations can now track each unit of production at every step of the process.  They can even track their supply chain from initiation to sale, how long it was at each step in the manufacturing process, and control the inventory of these products to reduce waste of manufacturing time and materials.  “Smart manufacturing is about creating an environment where all available information—from within the plant floor and from along the supply chain—is captured in real-time, made visible and turned into actionable insights. Smart manufacturing comprises all aspects of business, blurring the boundaries among plant operations, supply chain, product design and demand management. Enabling virtual tracking of capital assets, processes, resources and products, smart manufacturing gives enterprises full visibility which in turn supports streamlining business processes and optimizing supply and demand.” (O’Marah, 2015)

An example of manufacturing taking the benefits of IoT to their full potential was brought forward by Maciej Kranz in his blog post mentioned earlier.  He states, “Harley-Davidson connected its operations and reduced its build-to-order cycle from 18 months to two weeks, accelerated decision-making by 80 percent, and increased profitability by three to four percent.” (Kranz, 2016)  Not only did this improve cost efficiencies on the production line but this also improves Harley-Davidson’s brand with their community.

Control Network

Transportation has also benefited from the IoT.  Driverless cars are becoming reality with a plethora of sensors that can be tied into microcontrollers built into cars.  They then communicate across the cellular network and localized methods to other cars in their vicinity to direct the flow of traffic with no intervention from the driver.  When you get home your house is also becoming more intelligent each day with the use of smart thermostats, smart security systems, smart assistants (such as Amazon Echo), and smart TVs that can now access internet based content such as entertainment providers like Netflix, Hulu, and others.  Even smoke and carbon monoxide sensors are becoming intelligent and can contact the local fire department through the internet while at the same time sending you an alert through your phone or by other means.

A real-life example of the benefits of the IoT comes from PepsiCo.  PepsiCo is beginning to utilize an army of IoT devices to track crops, shipments, and more in what it calls "The Digital Value Chain."  Drones carrying sensors that measure moisture and fertilizer saturation are flown in an automated fashion to track the state of crops, allowing PepsiCo to efficiently utilize its resources in growing the most effective crop of materials possible for its products.  Mobile sensors are utilized in shipments to track each pallet of product and enhance efficiency in supply chain control.  This cuts down on wasted product, provides a fresher product, and delivers it as close to the moment of purchase as possible. (Banker, 2015)

IoT Issues

The power of the IoT is tremendous, but so is its potential for malicious use.  Over recent years these devices have demonstrated the amount of damage they can do to the internet, let alone to the companies and individuals who are the victims of these attacks.

Botnets & Attacks

Many IoT devices are vulnerable to basic attacks, allowing "Distributed Denial of Service (DDoS) botnets to amass up to a million devices." (Arbor Networks, 2016)  There are many botnets on the internet that are used for whatever purpose the highest bidder wants to execute.  These botnets go by names such as Mirai, Lizkebab, Bashlite, Torlus, and Gafgyt.  Once these devices are recruited into botnets, they are sold on the dark net to do the buyer's bidding for as little as $0.06 per bot for a time limit of two weeks.  They can be used for both volumetric and application-specific DDoS attacks and any other devious activities the buyer has in mind.

One example of the devastation these botnets can cause was the October 2016 attack on Dyn, a national DNS service provider for companies such as Twitter, Netflix, Walgreens, GitHub, DirecTV, Ancestry.com, Zillow, and many others.  Each infected device was directed to send DNS queries against Dyn's Managed DNS solution globally.  These attacks came in waves and degraded or denied access to the service globally.  Due to the massive quantity of devices utilized, the attack was unprecedented and unlike anything Dyn had survived previously, which made finding the culprit devices difficult.  Level 3 provided live maps of the outage which put the effects of the attack into perspective. [Figure 1] (York, 2016)

New devices are manufactured daily that can be recruited into these botnets as well.  Due to the lack of basic security standards and protections that have been taken for granted by the majority of industries globally, many of these devices are being implemented with default credentials, clear-text management communication, and software vulnerabilities that can be exploited to gain access into these devices and drop malicious payloads to control them.

IoT devices can be utilized for more than just botnets; they are also proxies for hiding a bad actor’s true identity, allowing them to perpetrate tax refund and credit card fraud as well as other cybercriminal activities with limited or no trace of their true identity.

Privacy Concerns

With IoT devices collecting, monitoring, storing, and transmitting large amounts of data regarding our personal lives, our own privacy is at risk as well.  As Dennis Fisher stated, "IoT devices are capable of collecting, transmitting, and sharing highly sensitive information about consumers' bodies and habits." (Fisher, 2016)  In the earlier example of CGM devices for diabetics, patients' personal and protected health information is subject to disclosure if the vendors and manufacturers of CGM devices do not exercise due diligence in their architecture, design, and lifecycle management.  Health Insurance Portability and Accountability Act (HIPAA) regulation does not yet cover these specific IoT devices and the security controls that should be implemented around them.  The Payment Card Industry (PCI) is a step ahead, having utilized mobile card readers for over a decade, and the IoT industry could learn from it, but that industry has also had its issues with information disclosure.  Breaches like those at Target and Home Depot have shown more and more cases where malware has invaded these environments to collect data.

Home monitoring, security systems, manufacturing sensors, and global tracking devices are all collecting sensitive data that when put into the wrong hands can provide a bad actor with information that could put both families and companies at risk.

Financial Loss

With IoT devices being integrated into assembly lines, manufacturing control, and even public works control systems, these devices can be hijacked and held hostage to extort money from companies so that their facilities can keep manufacturing their product.  Payment systems are also vulnerable, as we have seen over the past decade with the previously mentioned breaches at Target and Home Depot, which exposed payment card data to bad actors who sold the information to the highest bidder.  The stolen payment and PCI-related data can then be used to commit fraudulent purchases, open credit cards or loans, and even commit tax refund fraud, costing not only the affected individuals time and money to clear their records, but also costing banks and companies the money lost to these fraudulent transactions.

Personal Risk

On top of the financial losses possible in public works systems, these devices could even be utilized in hacktivism and terrorism plots to take down power, disturb or halt water treatment facilities, and (depending on how far these go into the medical field) end someone's life.  The impact on personal lives depends on how far we take the IoT into our critical infrastructure and healthcare systems and whether it is exposed to the internet.  In a whitepaper from ThingWorx, Rob Black calls out examples of IoT security failures where the lives of individuals could be at risk.  "Security researchers recently demonstrated that they could remotely disable the wheels and brakes of a popular sports utility vehicle.  Students remotely took control of the pacemaker implanted in a robotic dummy patient used to train medical students and showed they could cause life-threatening injuries to or even kill a real patient if it had actually been implanted in one.  Hackers demonstrated the ability to take control of a Wi-Fi connected rifle to aim it at a different target or prevent it from firing." (Black, 2016)

In each of these scenarios, the malicious entity could take advantage of these devices to hijack, ransom, or even kill those that they wish to target.  These are grave security concerns which the industry needs to take into consideration when creating their IoT devices.

Security Exploits

With an understanding of the benefits of the IoT and the security risks it generally exposes, we can look at a few vulnerabilities that have been exploited, each pointing to specific security practices that should be formalized across the IoT industry to further improve our security posture.

Mirai Botnet

Mirai is open source malware that preys upon IoT devices with lax security controls.  Rather than relying on an exploit, the botnet works from a list of known devices with usernames and passwords that are either the defaults or hardcoded into the system software.  Credentials are sometimes even hardcoded into the firmware of the device and cannot be changed without a full device replacement.  Mirai scans the internet from each infected device to spread itself even further, in the same fashion that worms used to spread across the internet. (Krebs, Source Code for IoT Botnet 'Mirai' Released, 2016)

Once these devices have been enrolled into the massive botnet, they report back to a central Command & Control (C2) server, where they receive their orders and act in an orchestrated fashion with the rest of the botnet according to the desires of the central authority.

These devices are easily preyed upon not only because of the lack of security controls and auditing on the vendor's side, but also because end users see no impact on the exploited device.  There is no visibility into the end host that would give them signs the device is compromised, unlike the malware of old, which could cause system lag and even abnormal behavior.  This lack of visibility, combined with the lack of security awareness on the vendor's side, creates a large, highly vulnerable mass of assets that are ripe for the taking.

Those responsible for creating Mirai have now open sourced their software, giving anyone the power to build a botnet from the plethora of available, uninfected devices to carry out their own cybercriminal acts.

SSHowDowN Proxy Attack

Akamai reported on the SSHowDowN proxy attack in mid-October of 2016, having found multiple IoT devices performing a credential stuffing campaign against internet services.  Credential stuffing is a more sophisticated version of brute forcing in which a list of accounts compromised in previous attacks is used to check whether those same users have an account on the targeted application.

SSHowDowN is based on a vulnerability in OpenSSH that was reported back in 2004 under CVE-2004-1653.  Ezra Caltum and Ory Segal of Akamai state, "We would like to emphasize that this is not a new type of vulnerability or attack technique, but rather a weakness in many default configurations of IoT devices." (Segal, 2016)  The issue was that TCP forwarding was enabled by default in OpenSSH and thus was enabled by default on many of the IoT devices that utilized this software.  With TCP forwarding, the malicious party could send their traffic encrypted from their source to the compromised IoT device, which would then forward the inner packet on to the destination unchanged, hiding the original source.

IoT manufacturers are failing to provide a simple, automated, and non-disruptive way to upgrade their devices.  They are also failing to perform basic security patching during the production lifecycle of their products, and they rarely provide patches to end users. (Krebs, IoT Reality: Smart Devices, Dumb Defaults, 2016)  With a simplified upgrade process and basic security assessments of the software these vendors utilize, this issue could have been avoided entirely, as they would have turned off this option in newer versions of OpenSSH by default.

Zombie Zero

In 2014, TrapX released a report regarding a suspected nation-state sponsored targeted attack against multiple logistics and shipping companies.  The malware was preloaded inside handheld scanners used in the industry for tracking inventory and shipping packages.  Once these devices were connected to the corporate network, they began a set of automated, polymorphic attacks to breach security at the company, looking specifically for servers holding any kind of financial information that could be captured and exfiltrated to the C2 servers abroad.

TrapX reported “Weaponized malware was delivered into customer environments from the Chinese factory responsible for selling a proprietary hardware/software scanner application used in many shipping and logistic companies around the world.

“The customer installed security certificates on the scanner devices for network authentication, but because APT malware from the manufacturer was already installed in the devices, the certificates were completely compromised.” (TrapX Security)

It continued to morph to bypass security controls until it achieved its goal of finding the financial data it was looking for, which was then exfiltrated and used for purposes that remain unknown.

Security can be placed around these devices, but if they arrive with malware already implanted, it provides no real protection.  In the scenario from TrapX's report, the devices were installed with authentication certificates to validate their authenticity on the network and were then placed within a trusted environment to report back to a financial Enterprise Resource Planning (ERP) system.

The software and hardware utilized within our IoT devices need to be audited and scrutinized on a consistent basis.  An initial deployment might not be compromised, but subsequent patches or enhancements might be.

HummingBad/HummingWhale

HummingBad and its subsequent variant HummingWhale are Android malware families that infect a device and start displaying fraudulent ads that generate revenue for the perpetrator.  They do this without needing to gain elevated privileges and also spread by downloading additional software without the user's awareness.

Check Point reported on this in their Threat Research column on January 23rd, 2017.  In that report they state, “Check Point researchers have found a new variant of the HummingBad malware hidden in more than 20 apps on Google Play. The infected apps in this campaign were downloaded several million times by unsuspecting users.” (Koriat, 2017)

Malware variants can come through any software we install, not just the OS and the default software deployed as part of the device's original purpose.  Consistent security scrutiny and application control need to be exercised by the companies distributing this software.  In this case, Google's Play Store validation failed its community by performing poor software security validation and distributing this software to its customers.  Consistent awareness, scrutiny, and software validation must be applied to all software utilized in your IoT environment, not just the software you purchase as part of your original deployment.

Security Improvements

Each of the previous vulnerabilities calls out a subset of the factors that should be taken into consideration to improve the state of security in the IoT industry.  We will now go in depth as to why each of the changes suggested below is necessary and how it can make a positive impact on our security in the future.

Standards and Guidelines

Many security experts agree that a security guideline and set of best practices specific to IoT is required.  Europe is even working on a security standard that manufacturers will be required to meet. (Krebs, Europe to Push New Security Rules Amid IoT Mess, 2016)

Even though such a standard is not yet available, normal security best practices are still valid in these situations.  The Open Web Application Security Project (OWASP) maintains a detailed and extensive list of web application vulnerabilities commonly found across the globe, and it has proactively created draft guidance for manufacturers to use toward an IoT security standard.  The recommendations range from basic encryption in transit to logging and auditing features for these devices.  By utilizing even this basic list of security guidelines as a standard to build upon, we can remove a large majority of the basic attacks we have seen over the past decade as the IoT ecosystem has grown. (OWASP, 2017)

Manufacturers are beginning to see the financial impacts of their mistakes in recent months.  Specifically, the manufacturer Dahua has been at the forefront of these security weaknesses, which are driving its customers to scrutinize Dahua's products further and even sue for damages.  On top of these legal actions, Dahua is also obligated to replace the affected devices, as some of the vulnerabilities are hardcoded into the devices' firmware and cannot be remediated with a simple software upgrade. (Krebs, Europe to Push New Security Rules Amid IoT Mess, 2016)

Patching

Canonical, the developer of the widely known Linux operating system Ubuntu, recently performed a survey of customers regarding patching.  The results showed that nearly two thirds of customers believe that it is not their responsibility to patch their software, and rarely check for patches. (Rouffineau, 2016)

IoT devices need to update automatically and without service interruption.  This allows the end user to have a secure environment without having to manage it.

“One of the key security problems that researchers have cited with IoT devices is the impracticality of updating them when vulnerabilities are discovered. Installing new firmware on light bulbs or refrigerators is not something most consumers are used to, and many manufacturers haven’t contemplated those processes either.” (Fisher, 2016)

If you take into account the amount of time and resources it takes to update a single device, and multiply that by the roughly 3.4 devices each person is expected to have by the year 2020, every person will have several devices' worth of upgrades, software management, and general auditing to keep their devices secure. (Prieto, 2016)  For the general user this is too much to ask, as very few people understand the risks associated with unpatched devices, nor do they feel they have the time to manage all of these processes.

Auditing

Software development will often include software libraries and add-ons that are developed and maintained by other organizations.  This not only reduces the overall development lifecycle for a product, but it also brings standardized protocols and functionality.  With these shared libraries also comes a shared risk for all of those involved with using them.

Continued review and understanding of which libraries and add-ons are integrated into the software needs to be exercised by both the vendor and the customer.  When OpenSSL released its vulnerability known as Heartbleed, many organizations were aware of the issue and updated appropriately.  But in the case of the poor default configuration in OpenSSH, the issue was not widely known, and vendors continued to utilize the vulnerable library in their applications, creating an inherent vulnerability in their own products.

Even if legislators do not come to an agreement on what type of regulation to apply to these devices, an accreditation should be created within the IT security community (SANS, ISC2, etc.) that gives legitimacy to software and IoT auditing.  This could be the equivalent of the certifications from ISO, PCI, and others, where an external auditing organization validates that manufacturers are following specific criteria.

Training

Training and practice will bring any organization up to a higher security standard.  Breaches and phishing are most often due to a lack of understanding by end users, and IoT security falls along those same lines.  Many organizations include cyber security training in their security strategy so that their user base can provide the necessary protection for their assets and information.

This same type of training should be widely available to people worldwide, so that they are better prepared to respond to security issues that may arise and are more aware of the purpose of and necessity for scrutiny, patching, and continued education.  Security awareness training could be built into basic computer courses in K-12 classes and college so that security best practices are taught from a young age and reinforced up through the college years.  There could even be government-funded public service announcements to help educate the broader community about the far-reaching effects of bad security practices.

Research has shown that, in the current state of things, users are poorly trained to understand these risks to their privacy and security.  Infosec Cloud states that 97% of people around the globe cannot identify a phishing email, and 74% of those same users would download malicious files due to their lack of training.  These figures don't relate directly to IoT security, but they do illustrate the lack of basic security training among users. (Infosec Cloud, 2016)

Legislation

At times, legislation is the only way to create a better environment for the common good.  Without regulations and the stipulations behind them, some companies are unable to justify the costs of securing their implementations and products.  Legislation gives these companies the justification to fund basic security and provides guidelines to meet so that there is a standard level of security.  European legislators are pushing to regulate this new breed of devices so that manufacturers are required to pass a security assessment and achieve a security accreditation that users can recognize, like the UL or FCC markings in the electronics industry.  This would provide consumers with a level of trust that the products they buy use secure software and provide secure communication and updates throughout the lifetime of the product. (Krebs, Europe to Push New Security Rules Amid IoT Mess, 2016)

In his last year in office, President Barack Obama commissioned a report on what President Donald Trump should tackle as part of his cybersecurity strategy, and IoT is one of the top issues in that report. (Krebs, DDoS, IoT Top Cybersecurity Priorities for 45th President, 2016)  Aside from these few legislative activities, there has been little traction from governments worldwide to require manufacturers to meet a specific security standard and to continuously patch their products throughout their life cycle.

The FTC is driving this effort by offering the public a $25,000 reward for solutions to automatic IoT patching. (Thibodeau, 2017)  The open source crowd has proven multiple times that it can come up with inventive and influential ways of solving problems such as IoT security.  With this incentive from a government agency (or even a private business) to provide automated patching and other standards, individuals should have the monetary backing to create new solutions to the problem.

Conclusion

The future of the IoT industry is bright and full of wondrous opportunities.  Healthcare monitoring that occurs constantly and minimally intrusively while updating caregivers and doctors alike is a phenomenal improvement to quality of life.  Farming communities can improve their resource utilization, reduce pesticides to only what is necessary, increase water efficiency, and produce a more efficient, cost-effective, and higher-yielding crop, which is a bright outlook for a world in which many countries suffer from a lack of food.  Assembly lines and manufacturing supply chains can be monitored to provide efficiencies, cost reduction, and increased production uptime, reducing waste in the manufacturing process and providing just enough product for the market's needs.  With all of these possibilities, and with the guidance of knowledgeable, security-minded individuals leading this innovation, we will be able to achieve the greatest level of innovation since the creation of the Internet itself.

References

Arbor Networks. (2016, October 10). ComputerWeekly.com. Retrieved from The Connection Between IoT and DDoS Attacks: http://docs.media.bitpipe.com/io_13x/io_132434/item_1434568/ArborNetworks_CW_IO%23132434_Eguide_101016_LI%231434568.pdf

Banker, S. (2015, May 25). Using IT as a Competitive Weapon: Dow Chemical, PepsiCo, and the Internet of Things. Retrieved from Logistics Viewpoints: https://logisticsviewpoints.com/2015/05/25/using-it-as-a-competitive-weapon-dow-chemical-pepsico-and-the-internet-of-things-2/

Black, R. (2016). Protecting smart devices and applications throughout the IoT ecosystem.

Fisher, D. (2016, June 3). FTC Warns of Security and Privacy Risks in IoT Devices. Retrieved from Onthewire.io: https://www.onthewire.io/ftc-warns-of-security-and-privacy-risks-in-iot-devices/

Infosec Cloud. (2016, January 4). Security Awareness Training – The Numbers. Retrieved from Infosec-cloud.com: http://www.infosec-cloud.com/security-awareness-training-the-numbers/

Koriat, O. (2017, January 23). A Whale of a Tale: HummingBad Returns. Retrieved from Checkpoint.com: http://blog.checkpoint.com/2017/01/23/hummingbad-returns/

Kranz, M. (2016, November 21). Building the Internet of Things: A How-To Book on IoT. Retrieved from blogs.cisco.com: http://blogs.cisco.com/digital/building-the-iot

Krebs, B. (2016, December 16). DDoS, IoT Top Cybersecurity Priorities for 45th President. Retrieved from KrebsonSecurity.com: https://krebsonsecurity.com/2016/12/ddos-iot-top-cybersecurity-priorities-for-45th-president/

Krebs, B. (2016, October 16). Europe to Push New Security Rules Amid IoT Mess. Retrieved from KrebsonSecurity.com: http://krebsonsecurity.com/2016/10/europe-to-push-new-security-rules-amid-iot-mess/

Krebs, B. (2016, February 16). IoT Reality: Smart Devices, Dumb Defaults. Retrieved from KrebsonSecurity.com: http://krebsonsecurity.com/2016/02/iot-reality-smart-devices-dumb-defaults/

Krebs, B. (2016, October 16). Source Code for IoT Botnet ‘Mirai’ Released. Retrieved from KrebsonSecurity.com: http://krebsonsecurity.com/2016/10/source-code-for-iot-botnet-mirai-released/

NCTA. (2014, May 5). Infographic: The Growth of the Internet of Things. Retrieved from NCTA.com: https://www.ncta.com/platform/industry-news/infographic-the-growth-of-the-internet-of-things/

O’Marah, K. (2015, August 14). The Internet of Things Will Make Manufacturing Smarter. Retrieved from IndustryWeek: http://www.industryweek.com/manufacturing-smarter

OWASP. (2017, February 17). IoT Security Guidance. Retrieved from OWASP.org: https://www.owasp.org/index.php/IoT_Security_Guidance

Prieto, R. (2016, June 7). Cisco Visual Networking Index Predicts Near-Tripling of IP Traffic by 2020. Retrieved from newsroom.Cisco.com: https://newsroom.cisco.com/press-release-content?articleId=1771211

Rouffineau, T. (2016, December 16). Research: Consumers are terrible at updating their connected devices. Retrieved from Insights.ubuntu.com: https://insights.ubuntu.com/2016/12/15/research-consumers-are-terrible-at-updating-their-connected-devices/

Scorxton, A. (2015, April 9). Startup Insulin Angel uses internet of things to help diabetics. Retrieved from Computerweekly.com: http://www.computerweekly.com/news/4500244001/Startup-Insulin-Angel-uses-internet-of-things-to-help-diabetics

Segal, E. C. (2016, October 11). Exploitation of IoT devices for Launching Mass-Scale Attack Campaigns. Retrieved from Akamai.com: https://www.akamai.com/us/en/multimedia/documents/state-of-the-internet/sshowdown-exploitation-of-iot-devices-for-launching-mass-scale-attack-campaigns.pdf

Spring, T. (2016, October 21). Dyn Confirms DDoS Attack Affecting Twitter, Github, Many Others. Retrieved from Threatpost.com: https://threatpost.com/dyn-confirms-ddos-attack-affecting-twitter-github-many-others/121438/

Thibodeau, P. (2017, January 4). FTC sets $25,000 prize for automatic IoT patching. Retrieved from ComputerWorld.com: http://www.computerworld.com/article/3154348/security/ftc-sets-25-000-prize-for-automatic-iot-patching.html

TrapX Security. (n.d.). Anatomy of the Attack: Zombie Zero. Retrieved from Trapx.com: http://deceive.trapx.com/rs/trapxcompany/images/AOA_Report_TrapX_AnatomyOfAttack-ZombieZero.pdf

York, K. (2016, October 22). Dyn Statement on 10/21/2016 DDoS Attack. Retrieved from Dyn.com: http://dyn.com/blog/dyn-statement-on-10212016-ddos-attack/

 

Figure 1.  Dyn DDoS Outage Map – Level3 live outage map on Friday, October 21st at 5:20 PM EDT, during the Dyn DDoS attack (Spring, 2016)

Figure 2.  The Growth of the Internet of Things – NCTA infographic representing the past and expected growth of the Internet of Things (IoT) (NCTA, 2014)

 

IOS vs IOS XE

Standard

Legacy Cisco IOS

Cisco IOS (Internetwork Operating System) was Unix-based and originally designed in 1984 to implement routing functionality on Cisco hardware.  It was built on a monolithic architecture, which meant that all processes run by the OS were stacked and interrelated.  This design had two main issues:

  1. Memory was shared across all processes.  Any process could modify or even corrupt the memory of another process, so a fault could spread well beyond the process that caused it.
  2. The scheduler was run-to-completion.  All calls went through the kernel, which would not interrupt any process that held the CPU; each process had to report back to the kernel when it was finished before another process could be allocated the CPU.

Forwarding (data) and control plane functionality were thus combined into a single failure domain.  If an SNMP bug caused a buffer-overwrite condition that happened to overwrite the EIGRP process's memory allocation, you were now affecting not only the control plane but also the forwarding plane, crashing the router and causing an outage.  Similarly, if a monitoring server was walking the SNMP process while the EIGRP process needed to update the RIB, the RIB update could be slowed dramatically until the SNMP walk finished.  The toy sketch below illustrates both problems.
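To make those two failure modes concrete, here is a small Python analogy; none of this is IOS code, and the process names and data are invented.  Two tasks share one flat memory space and are scheduled run-to-completion, so a buggy writer can corrupt another task's state and hold the CPU until it finishes.

    # Toy analogy only: two "processes" share one memory space and are
    # scheduled run-to-completion, mirroring the monolithic IOS model.
    shared_memory = {
        "snmp_buffer": [0] * 8,
        "eigrp_rib": ["10.0.0.0/24 via 10.1.1.1"],   # routing state living "next door"
    }

    def snmp_walk():
        # Buggy writer: strays outside its own buffer and clobbers the RIB.
        shared_memory["eigrp_rib"].clear()
        for _ in range(5_000_000):   # long-running work that never yields the CPU
            pass

    def eigrp_update():
        # Only runs after snmp_walk() returns, and finds its state corrupted.
        print("RIB entries:", shared_memory["eigrp_rib"])

    # Run-to-completion scheduler: each task keeps the CPU until it returns.
    for task in (snmp_walk, eigrp_update):
        task()

In a preemptive, memory-protected design, the routing task would neither wait on the SNMP walk nor see its data overwritten.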

The original OS design worked without too many issues, as there weren’t many platforms that Cisco supported.  Each IOS version had to be written specifically for each platform to support the drivers and features necessary for the platform to function.  As Cisco acquired more companies, and customers requested additional features and functionality, maintaining the plethora of IOS trains became unmanageable.

Flexibility in bringing a new feature to market also became a problem.  Any new feature required a revision of the entire IOS binary rather than being an installable package that relied on the underlying kernel for functionality.

 

IOS XE Features and Improvements

Cisco’s IOS XE resolves its predecessor’s issues through multiple changes.  The first main hurdle was to create a base OS image that would not only allow the forwarding and control planes to be split apart, but would also provide a standards-based approach for installing additional functionality quickly in the future.  Linux was chosen as the underlying platform to create this capability.  Linux is now the common underlying infrastructure, which makes the former IOS features simply software add-ons running on top of the OS.  Drivers could now be written by knowledgeable programmers from the Linux community, which shortened hardware release schedules.  IOS was split up into the following packages:

  1. RPBase – Provides the operating system software for the route processor
  2. RPControl – Provides the control-plane processes that interface between Cisco IOS Software and the rest of the platform
  3. RPIOS – Provides the Cisco IOS Software kernel, which is where Cisco IOS Software features are stored and run; each consolidated image variant historically had a different RPIOS sub-package, though this has since moved back to a single-image standard, with licensing rather than the image in this package controlling the available features.
  4. RPAccess – Provides components to manage enhanced router access functionality (SSH, SNMP, HTTP as examples)
  5. SIPBase – Provides the SPA interface processor (SIP) carrier card operating system and control processes
  6. SIPSPA – Provides the shared port adapter (SPA) driver and associated field-programmable device (FPD) images
  7. ESPBase – Provides the ESP operating system and control processes for the ESP software

Each of these can be restarted and upgraded as needed without causing the data plane to restart.  IOS XE contains two functions that finalize the separation of the control and data planes.  The Forwarding and Feature Manager (FFM) provides an API to the control plane processes and translates their instructions into the changes required in the data plane.  The FFM then uses the Forwarding Engine Driver (FED) to program the data plane so that it reflects the changes requested through the original API call.  A conceptual sketch of this split follows.
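The class and method names below are invented to mirror the roles described above; they are not Cisco's actual interfaces.  The point of the sketch is simply that a control-plane process asks an FFM-style API for a change, and a FED-style driver is the only component that touches the forwarding table.

    class ForwardingEngineDriver:
        """Stand-in for the FED: the only layer that programs the (simulated) data plane."""
        def __init__(self):
            self.forwarding_table = {}

        def program_route(self, prefix, next_hop):
            self.forwarding_table[prefix] = next_hop

    class ForwardingAndFeatureManager:
        """Stand-in for the FFM: exposes an API to control-plane processes."""
        def __init__(self, fed):
            self.fed = fed

        def add_route(self, prefix, next_hop):
            # Translate the control-plane request into a data-plane update via the driver.
            self.fed.program_route(prefix, next_hop)

    fed = ForwardingEngineDriver()
    ffm = ForwardingAndFeatureManager(fed)
    ffm.add_route("192.0.2.0/24", "10.1.1.2")   # e.g., a routing process calling the API
    print(fed.forwarding_table)                 # {'192.0.2.0/24': '10.1.1.2'}

Because control-plane code never touches the forwarding table directly, either side can be restarted or upgraded independently.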

Application and process state have also been moved from memory into a database structure.  This allows process data to be shared more easily, since memory no longer has to be shared and inter-process calls do not need to be made; any process that needs the information can query the centralized database.  If a process crashes and must restart, it can do so without losing its state, because the information can be reloaded from the database after the restart, improving the availability of the system.  The sketch below illustrates the idea.
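A minimal sketch of the same idea, assuming nothing about Cisco's internal schema (the table and column names here are invented): keep process state in a small database so a restarted process can reload it instead of relearning it.

    import sqlite3

    # State lives in a database file rather than in the process's own memory.
    db = sqlite3.connect("process_state.db")
    db.execute("CREATE TABLE IF NOT EXISTS neighbor_state (neighbor TEXT PRIMARY KEY, state TEXT)")

    def save_state(neighbor, state):
        db.execute("INSERT OR REPLACE INTO neighbor_state VALUES (?, ?)", (neighbor, state))
        db.commit()

    def restore_state():
        # A restarted process re-reads its state instead of relearning it.
        return dict(db.execute("SELECT neighbor, state FROM neighbor_state"))

    save_state("10.1.1.2", "ESTABLISHED")
    print(restore_state())   # survives a process restart because it lives on disk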

Cisco Live 2018 – IOS XE Architecture for Programmability – BRKSDN-2666 – Jeff McLaughlin

IOS XE also provides the capability of running hosted applications in either an LXC container or a VM.  Since IOS XE is essentially a Linux machine with software packages running on top of it, any Linux-supported application can be deployed with little development effort.

Programmability is also improved by this structure.  With each feature set implemented as its own process, and a database storing its state in a standardized format, API calls can be used to readily update configurations and report on process status through standards-based data-export mechanisms.  A minimal RESTCONF example follows.
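For example, IOS XE exposes a RESTCONF interface.  This is a minimal sketch, assuming RESTCONF has been enabled on the device; the address and credentials are placeholders.  It reads interface configuration through the standard ietf-interfaces YANG model.

    import requests

    DEVICE = "https://192.0.2.1"        # placeholder management address
    AUTH = ("admin", "password")        # placeholder credentials
    HEADERS = {"Accept": "application/yang-data+json"}

    # Read interface configuration via RESTCONF (RFC 8040).
    resp = requests.get(
        f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",
        auth=AUTH,
        headers=HEADERS,
        verify=False,                   # typical for a self-signed lab certificate
    )
    resp.raise_for_status()
    print(resp.json())

The same interface accepts PUT and PATCH requests against configuration data, so configuration updates and status reporting ride the same standards-based mechanism.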

I’m Still Alive

Image

[Certification badge images: (ISC)2 CISSP, CCNP Routing and Switching, CCDP Design]

I've been busy recently working on certifications and finishing up my bachelor's.  Here's what I've accomplished in the past year and a half!  One more year and my bachelor's will be complete!