Exploring OpenStack

Let me start off by saying that February has, without a doubt, been a month from hell. Technically, it started at the end of January, but in reality it all began in December. I just didn’t know it at the time.

Unbeknownst to us, one of our clients was having problems on their end and kept sending us repetitive web service calls. While this isn’t normally an issue, the nature of the requests and their payload led to some problems. I won’t get into the nitty-gritty, but suffice it to say we eventually reached a tipping point and endured a two-hour outage…something that never happens. We pride ourselves on truly being available 24×7 and have built our entire infrastructure and process around that philosophy.

The past month has been spent working with countless people on various teams to reproduce the issue. It was a hard battle, but we finally identified a number of issues which, taken individually, were of little consequence but which, combined, were enough to reveal our Achilles’ heel.

All of this has led me to embrace an idea that’s been bouncing around my head for some time now. I dream of the day when we have a truly self-aware, self-healing infrastructure. I dream of a system that can monitor requests per interval and average response times during peak periods and dynamically spawn new VMs to help manage the load. I dream of a system that can identify when a VM is struggling and proactively kill it off and spin up a fresh replacement.
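To make that dream a little more concrete, here’s a minimal sketch of the decision logic such a system might run each monitoring interval. Everything here is hypothetical — the class, function, and threshold names are mine, and the numbers are placeholders, not values from any real system:

```python
from dataclasses import dataclass

@dataclass
class VmStats:
    """A snapshot of one VM's load for the current interval (hypothetical)."""
    name: str
    requests_per_minute: int
    avg_response_ms: float

# Assumed thresholds -- you would tune these for your own workload.
SCALE_UP_RPM = 1000           # spawn a new VM when average load per VM exceeds this
UNHEALTHY_RESPONSE_MS = 2000  # kill and respawn any VM slower than this

def plan_actions(fleet: list[VmStats]) -> list[tuple[str, str]]:
    """Return (action, vm_name) pairs: 'respawn' for struggling VMs,
    'spawn' when the whole fleet is overloaded."""
    actions = []
    for vm in fleet:
        if vm.avg_response_ms > UNHEALTHY_RESPONSE_MS:
            actions.append(("respawn", vm.name))
    total_rpm = sum(vm.requests_per_minute for vm in fleet)
    if fleet and total_rpm / len(fleet) > SCALE_UP_RPM:
        actions.append(("spawn", "new-vm"))
    return actions
```

In a real deployment the actions would be carried out by an orchestration layer (this is exactly the kind of thing OpenStack’s orchestration component is meant to do), but the monitoring-and-decide loop is the heart of the idea.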

This is the impetus behind my interest in OpenStack.

What Is OpenStack?

Probably the best definition I’ve found describes OpenStack as “a cloud ecosystem that controls large pools of storage, compute, and networking resources.”1 It goes on to further state that OpenStack “automatically creates compute resources without human intervention.” Exactly what I’m dreaming of.

OpenStack is much more than just those three areas, though. There are components for identity services, image services (to manage VM images for example), monitoring, orchestration, and more. It is a rather large ecosystem and one which will take considerable effort to get our organization to buy into.
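For anyone (like me) still learning the landscape, here’s a quick map of the major OpenStack projects behind the services mentioned above. The project names are real; the one-line summaries are my own shorthand:

```python
# Core OpenStack projects and the roles they play.
OPENSTACK_SERVICES = {
    "Nova": "compute (provisions and manages VMs)",
    "Neutron": "networking",
    "Swift": "object storage",
    "Cinder": "block storage",
    "Keystone": "identity services",
    "Glance": "image services (manages VM images)",
    "Ceilometer": "monitoring / telemetry",
    "Heat": "orchestration (templates that spin up resources automatically)",
}

for project, role in OPENSTACK_SERVICES.items():
    print(f"{project}: {role}")
```

Heat and Ceilometer together are the pieces closest to the self-healing loop I described earlier: telemetry feeds alarms, and alarms trigger orchestration.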

The Ultimate Goal

Our deployments are measured in months. Some of this is simply due to the nature of our dependence upon the source of record (i.e. mainframe) coding being completed. A great deal of it, however, simply derives from how we do business and our current practices. Even minor code changes have to go through an incredible process to see the light of day on a production server.

I hope to eventually streamline this. I’d love to be able to split our application into microservices (even if it’s done at the functional level) where I don’t have to deploy the entire application. If I’m only updating a handful of web services I shouldn’t have to do a full-blown upgrade with an entire regression test of the functionality that wasn’t touched.

Imagine the benefit to our customers if a deployment was as simple as clicking a button and pushing out the changes as opposed to updating the entire system. Working in the financial services industry, we often have to make unplanned changes due to regulatory mandates. How cool would it be if the IRS pushed a last-minute change in late December which had to be implemented before the end of the year and we could accomplish it by simply rolling out a new version of the affected service? Resources are routinely limited this time of year and it would make it so much easier than having to find a way to do a full regression.

I see a world where Docker images make up the bulk of the system and we use OpenShift with OpenStack to build this new infrastructure. I may be off in some of my thinking (don’t be too harsh, I’m still putting my thoughts together) but this is the world I dream of.

  1. Linux Academy, Deploy and Manage OpenStack on Ubuntu ↩︎
