IT Automation – The Good, the Bad, and the Awesome?

How many times have you been faced with the same boring process of onboarding a new user, or building a server that was just like the last hundred you built? And how did you feel the last time you wrote a script that does a repetitive task for you and saved you time? You might think these two experiences are a dichotomy: the drudgery of long, boring, repetitive tasks on one side, and the satisfying knowledge that you now have more time on your hands for other things on the other. I suggest they are just two points on the same path: one is recognizing a problem, and the other is resolving it. Automation, when used properly, is an efficient and cost-effective solution to tasks that are repetitive and ongoing.

Automation has gotten a bad rap in recent years. Some people see it as a tool that starts the outsourcing process. Others think it will erode personnel’s knowledge of how to perform these tasks themselves. These can be valid points at times, but I would bet that almost all of us, at some point in our careers, have encountered a process that was painful, slow, error prone, and repetitive and turned it into a scripted process that saved time, money, and especially our own sanity.

Our time is precious in our chosen careers, and we are constantly being asked to do more with less. When I first heard that idea I thought, “How could you possibly do more with less?” So, being the young, naive person that I was at the time, I just worked longer hours. A few years after I started down that path, I hit a wall. That wall was complete dissatisfaction with my job and where I felt it was going. That’s when I started working on automation. They were just simple scripts at first, but those scripts allowed me to cut my time to deploy infrastructure in half and made my work more consistent.

One of these scripts was a simple PHP script that took variables I gave it and returned a drop-in configuration that followed the standards for each of our devices and regions. I would input things such as the device name, the region in which the device was to be deployed, the interfaces and their IP configuration, and more. From that it would return a text file with our standards applied to each interface and to the global configuration options.
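To give a rough idea of the approach, here is a minimal sketch of that kind of generator in Python (the original was PHP); the templates, regions, and device details below are hypothetical examples, not our actual standards:

```python
# Minimal sketch of a drop-in config generator (hypothetical standards and templates).
INTERFACE_TEMPLATE = """interface {name}
 description {description}
 ip address {ip} {mask}
 no shutdown"""

GLOBAL_TEMPLATE = """hostname {hostname}
service password-encryption
ntp server {ntp_server}"""

# Per-region standards (example values only).
REGION_STANDARDS = {
    "us-east": {"ntp_server": "10.1.1.1"},
    "eu-west": {"ntp_server": "10.2.1.1"},
}

def build_config(hostname, region, interfaces):
    """Return a drop-in configuration for one device."""
    sections = [GLOBAL_TEMPLATE.format(hostname=hostname, **REGION_STANDARDS[region])]
    for intf in interfaces:
        sections.append(INTERFACE_TEMPLATE.format(**intf))
    return "\n!\n".join(sections)

if __name__ == "__main__":
    print(build_config(
        "core-rtr-01", "us-east",
        [{"name": "GigabitEthernet0/0", "description": "uplink",
          "ip": "192.0.2.1", "mask": "255.255.255.0"}],
    ))
```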

Another script that simplified my life was our firewall rule automation script. This two-part script would first be run during the day against all of the firewalls that had changes. It would verify that the rule base compiled without any errors, and it would then schedule the second part of the script to automatically push the rules at a specified time that night. It would email the results of the policy push to our network operations center, so that if there were any errors they could begin troubleshooting procedures and make any necessary call-outs. This saved our engineers an hour each night on policy installations.
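To make the two-part flow concrete, here is a minimal sketch of the idea; the `fwctl` commands, the use of `at` for scheduling, and the mail addresses are placeholders for illustration, not the original implementation:

```python
# Sketch of the two-part flow: validate during the day, schedule the push for that night.
import smtplib
import subprocess
from email.message import EmailMessage

def compile_policy(firewall):
    """Part one: compile the rule base and report whether it succeeded.
    'fwctl compile' stands in for whatever your vendor's CLI provides."""
    result = subprocess.run(["fwctl", "compile", firewall],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def schedule_push(firewall, push_time="02:00"):
    """Part two: hand the push off to the system scheduler (here, 'at')."""
    subprocess.run(["at", push_time], input=f"fwctl push {firewall}\n", text=True)

def notify_noc(subject, body):
    """Email results to the network operations center (placeholder addresses)."""
    msg = EmailMessage()
    msg["From"] = "fw-automation@example.com"
    msg["To"] = "noc@example.com"
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

for fw in ["fw-dmz-01", "fw-core-01"]:          # firewalls with changes today
    ok, output = compile_policy(fw)
    if ok:
        schedule_push(fw)
    notify_noc(f"Policy push {'scheduled' if ok else 'FAILED validation'}: {fw}", output)
```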

Each business and professional has different needs for automation. Some of these include, but are not limited to, the following:

  • Repeatable Outcomes
  • Process Standardization
  • Faster Delivery
  • Increased Customer Satisfaction
  • Reduced Errors
  • Reduced Costs
  • Increased Employee Availability

Now, not everything can be automated. Automation takes time for process discovery, knowledge of how to best perform the task, and the skills to program it. Before you can automate a task, it needs a clearly defined framework. I don’t mean at a high level: each and every step needs to be defined, along with what is expected to be done and the possible outcomes. For example, let’s go back to my new user example. Every new hire needs certain things: a laptop/desktop, a network logon, access to all the appropriate systems, an email account, remote access VPN, etc. This is a clearly defined process that can be automated. Automating it allows a user to be functional within a day, or possibly a few hours, with most of the remaining work being the paper trail for auditing purposes.
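As a sketch of what that looks like once every step and its outcome is spelled out, something like the following works; the system names and helper functions here are hypothetical examples:

```python
# Sketch of an onboarding flow once every step and its expected outcome is defined.
# All system names and helper functions here are hypothetical examples.

def order_hardware(user):
    print(f"ordered standard laptop for {user}")

def create_network_logon(user):
    print(f"created network logon for {user}")

def create_mailbox(user):
    print(f"created email account for {user}")

def grant_system_access(user):
    for system in ("ticketing", "timesheets", "intranet"):
        print(f"granted {user} access to {system}")

def enable_vpn(user):
    print(f"enabled remote access VPN for {user}")

ONBOARDING_STEPS = [
    order_hardware,
    create_network_logon,
    create_mailbox,
    grant_system_access,
    enable_vpn,
]

def onboard(user):
    audit_trail = []                      # the paper trail the auditors will want
    for step in ONBOARDING_STEPS:
        step(user)
        audit_trail.append(f"{step.__name__} completed for {user}")
    return audit_trail

print(onboard("jdoe"))
```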

Cost is the biggest reason for automating and should be the first consideration when you are looking into it. The benefits need to outweigh the costs of implementing and maintaining these processes. Maintenance can be costly if you are constantly making changes to your processes, because it requires retaining people skilled enough to understand the process and program it so that it functions properly. On top of that, automation requires a higher level of quality assurance testing to ensure that the outcomes are what you expect. Remember that when you automate a task, most people expect results as good as or better than those of their human counterparts.
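A quick back-of-the-envelope check makes that trade-off concrete; the numbers below are made up purely for illustration:

```python
# Back-of-the-envelope break-even check for an automation project (illustrative numbers).
build_cost_hours = 80            # time to script and test the process
maintenance_hours_per_year = 20
manual_hours_per_run = 1.0
automated_hours_per_run = 0.1
runs_per_year = 300

hours_saved_per_year = (manual_hours_per_run - automated_hours_per_run) * runs_per_year
net_savings_year_one = hours_saved_per_year - build_cost_hours - maintenance_hours_per_year

print(f"hours saved per year: {hours_saved_per_year:.0f}")       # 270
print(f"net hours saved in year one: {net_savings_year_one:.0f}")  # 170, worth doing
# If runs_per_year were 30, the project would not pay back in its first year.
```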

Do I think automation is awesome? Do I think automation is the miracle cure for IT woes? Yes to the first; to the second, no, not really. It depends on the situation, as automation can just add complexity if not done properly. But with a proper understanding and investigation of what you can automate in your infrastructure, you can help your company save money, respond with agility to changing markets, and improve your own work experience. Doing so will allow you to expand your horizons to the new and exciting things coming out in IT every day.

Open Compute Project

While reading a Wired article in which they interviewed Facebook about its development of new open source switches running on smaller, simpler hardware, I came across a project that I found rather interesting. It’s called the Open Compute Project. Its main mission is to create a forum where innovation and collaboration can occur between different organizations across the world to increase the performance, scalability, and ease of deployment of multiple system types across the IT estate.

Historically, from my perspective as I’ve gone through IT, the method of deployment is too cumbersome. There are too many groups involved that have different request processes, and too many dependencies where resources can become tied up in trying to deploy systems and services. On top of that, many of the platforms out there require specific training and are cumbersome to configure and understand. The added cost of infrastructure is also reducing the ability of IT departments to scale at the rate they need.

Seeing the types of equipment that members of the OCP, such as Facebook, Google, and Amazon, are developing is encouraging. They are building simple platforms that allow for high bandwidth and let the tools that currently manage server infrastructure also manage the networking infrastructure. Many tools are out there to manage large-scale deployments of Linux servers as well as automate changes from templates created by each respective IT department. This is definitely something of interest for many companies, as the time to deployment right now is too long and too costly. Granted, with a well-managed virtual machine environment and a network that is designed properly from the ground up you can get around the time consumption, but it is all still very costly from a budgeting perspective. If you are interested in reading more about what they have done, I’ve included links below.

Facebook Now Runs on Networking Gear Designed by Its Own Engineers
Facebook’s New Data Center Is Bad News for Cisco
Going With the Flow: Google’s Secret Switch to the Next Wave of Networking
Open Compute Project

Google Kubernetes and Docker

I’ve recently seen that Google has released its cloud computing platform as an open-source option called Kubernetes. As part of that, they are embracing the application packaging technology Docker. This shows a lot of promise for companies that wish to run their own cloud in their data centers so that they can utilize the processing power available across all of their servers. I’m going to be reading up on this and on how you can use it under industry audit standards like PCI and HIPAA. I’m also curious how storage access is accomplished with it.

Redesigning IT Security

As a member of a global security organization that was global only in name, I’ve seen the extreme downside of having patchwork, dysfunctional security across the IT infrastructure. It takes forever to find problems and breaches, figure out what actually occurred, and then patch the holes and recover from the issue. I’ve been reading through a few documents recently that cover a lot of ideas around where IT security has been historically and where it should be going.

Currently, security is always an afterthought. It goes that way with most things in life for most people: they go down the road not planning for an emergency until it has happened, and only then ask how to fix it. With the proliferation of cloud technologies such as Dropbox and Evernote (both of which I use personally) and the state of BYOD in the corporate world, there are many holes and opportunities for data loss and reputational damage from breaches. We’ve also got the external threats of phishing, social engineering, hacktivism, and nation-state attacks to worry about and protect ourselves from, yet we have to stay within a sustainable budget so that we can continue being profitable as a company.

To do this you have to plan your security from the ground up. That way you can mitigate security risks by properly implementing applications and services within the IT infrastructure. Also, by planning in advance, you can design a network and server architecture that uses high-density and cost-effective solutions yet still provides the same level of security. There have been multiple times when I’ve been on a project where, due to poor planning, we were forced to set up an entirely new leg of the network populated with servers, SAN, the whole gamut. That creates complexity, which increases implementation time, troubleshooting, and the training/onboarding of new personnel.

So how should you begin planning your IT security framework? You first need to start with understanding yourself. What is your business about? What compliance and regulatory requirements do you need to be thinking of (HIPAA, PCI-DSS, etc.)?

Once you understand the business, you then need to work on your data classifications. Keeping the number of classifications low will help with complexity; three to four is a good number. It could be Public, Internal, and Restricted, or maybe Public, Internal, Confidential, and Restricted: whatever you choose and think is best for your organization.

Now that you have a good understanding of your business and data classifications, you need to work on an understanding of your applications. How do they function with each other, and what classification of data do they hold? Getting a firm grip on your applications’ interactions and functionality will help you design a network strategy that scales well over time, along with a server and SAN strategy that also scales. It will also help you understand what security gaps there might be, which you can cover with compensating controls and technologies.

Now it’s down to designing your IT infrastructure. Placing external-facing applications away from internal-facing applications is a great way to limit unnecessary exposure to external risks. Making sure you are using your firewall and IPS technology effectively at proper network bottlenecks will help you keep those costs down and simplify the infrastructure. Design your network to be modular: make it so you can add modules to a central core routing infrastructure and it scales well. Add strong authentication to all externally facing applications. Make sure you are doing vulnerability and penetration testing on your environment and using that data to update your IPS policies and patching strategies.

Create a simple and efficient SIEM deployment so that you can gather logs from critical parts of the infrastructure. Don’t just place SIEM infrastructure everywhere and pull in logs from everything; make sure the SIEM coverage you allocate is tied to a specific data classification. The same goes for firewalls, IPS, WAF, etc. You need to create a security matrix that gives you the required security controls to consider for each data classification. Not all of them may be needed, but it at least gives you a checklist to go through when designing new modules in the network.

Also, keeping your endpoint security simple will help your personnel learn it quicker and actually use it, instead of trying to bypass it. My last manager taught me a lot about making sure users have a good experience with the security we put in place. If your own user base is trying to undermine your security posture by bypassing it, you open yourself up to greater risk.
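To make the security matrix idea above concrete, here is a small sketch; the classifications and controls are illustrative examples, not a prescription:

```python
# Example security matrix: controls to consider per data classification (illustrative only).
SECURITY_MATRIX = {
    "Public":     {"firewall": True, "ips": False, "waf": False,
                   "siem_logging": False, "strong_auth": False},
    "Internal":   {"firewall": True, "ips": True,  "waf": False,
                   "siem_logging": True,  "strong_auth": False},
    "Restricted": {"firewall": True, "ips": True,  "waf": True,
                   "siem_logging": True,  "strong_auth": True},
}

def controls_checklist(classification):
    """Return the controls to walk through when designing a new network module."""
    return [control for control, required in SECURITY_MATRIX[classification].items() if required]

print(controls_checklist("Restricted"))
# ['firewall', 'ips', 'waf', 'siem_logging', 'strong_auth']
```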

And finally, you need to make sure that processes are in place to effectively manage your environment, from HR policies for adding, modifying, and removing users, to adding, modifying, and decommissioning applications and services. This allows you to retire unused infrastructure, save costs, and decrease complexity.

Below is the document I liked the most. HP covered a lot of the facets of IT security design, and they covered them extremely well and concisely.

Rethinking your enterprise security – Critical priorities to consider – by HP