OpenStack for Everyone

I work for Red Hat managing a team of Solutions Architects in our Public Sector organization. As part of that job, I give presentations, I watch presentations, and I read a lot of presentations. I also spend a lot of time translating techno-babble into plain English for a variety of folks: customers, sales reps, and others. So I feel like I have a pretty good idea of what an OpenStack presentation looks like.


It usually starts with a title slide, followed by a bold claim:

OpenStack is a Cloud Operating System!

I have no idea what that means.  I have an idea of what people are trying to convey when they say that, but it just doesn’t…  It’s one of those scenarios where you hear someone say something and you think to yourself, “I don’t think you understand what that means,” at least in a literal sense.

Then you’ll see that:

OpenStack is an Open Source VMware Killer!

Which I get, but the reality is that OpenStack so dramatically eclipses what VMware does, or did, that it’s kind of like saying “the guy with the gun is going to win against the guy with the knife.” Well, yeah.  Of course he is.  It was never a competition.

And then of course, you’ll get the standard:

Who is going to be the Red Hat of OpenStack?

Which, as a Red Hatter, I find pretty flattering.  And I do think we’ll be the Red Hat of OpenStack, but that’s not really germane.  I think people throw that in just because they’re not sure what else to talk about…

Somewhere in there, you’ll see a slide like this one:

http://ken.pepple.info/openstack/2012/09/25/openstack-folsom-architecture/

And then a lot of jibber-jabber about “Horizon, blah blah, Nova, blah, Keystone, blah, Glance, Neutron blah blah blah”.  You may also hear someone try to describe “cloud ready” workloads as cattle, and your Exchange server as pets.

Good ole pets & cattle.

Yeah, ok.  What does that even mean?

So confusing.  And to the lay person, the guy who isn’t living and breathing OpenStack every day, very little is actually communicated.  They’re great presentations for OpenStack conferences, but I feel like the basics get glossed over.

So here it is – the basics.

Let’s start with datacenters.  Datacenters are massive facilities with redundant power, loads of cooling, some security guards, and lots of nerds running around getting things done.  Big things.  Everyone has datacenters: Google, Apple, Facebook, Amazon, Microsoft. Heck, the US Army estimates they have about 1000 datacenters.

But they all have three things in common: servers, networks and disks.  Or, as the cool kids say, compute, network and storage.

Getting Virtual

Generally speaking, when people talk about virtualization, they are talking about virtualizing servers, aka compute.  There are lots of products out there that handle server virtualization, so many, in fact, that it’s often said that virtualization has been, or soon will be, commoditized.

Server virtualization is a simple yet powerful idea: you take underutilized servers, servers running at 10–20% of their capacity, and you virtualize them.  Then you put those virtual servers onto one or more physical servers.  The idea is that you’ll get to 75–100% utilization, getting the most out of your server investment.

But the reality is somewhere in the middle. Virtualized servers are often built as clones of their physical counterparts.  They often have too many CPUs assigned to them, and too much disk and memory as well.  We get VM sprawl, and the efficiencies are not nearly what we hoped for.  All in all, consolidation around 6:1 seems to be the norm.

http://www.computerworld.com/article/2550584/data-center/virtualization--beware-of-server-overload.html
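To make that consolidation math concrete, here is a back-of-the-envelope sketch in Python. The utilization figures are the illustrative numbers from above, not measurements:

```python
# Ten underutilized physical servers at ~15% utilization each
# carry only 1.5 servers' worth of real work.
servers = 10
utilization = 0.15
work = servers * utilization      # 1.5 server-equivalents of actual load

# Ideal consolidation: pack that work onto two virtualization hosts,
# and each host lands right at the 75% utilization target.
hosts = 2
print(work / hosts)               # 0.75
```

In practice, over-provisioned VM clones and VM sprawl eat into that headroom, which is why the observed norm sits closer to the 6:1 ratio mentioned above rather than the ideal packing.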

The benefits are vast: servers can be built in minutes instead of days or weeks.  They can be created, destroyed, and cloned with the click of a button.  With things like live migration, you can move a running virtual machine from one piece of hardware to another.  This is all good stuff.

Server Virtualization != Data Center Virtualization

So, you virtualized your servers.  You created an Infrastructure as a Service.  You’re feeling pretty proud of yourself… until you need some more storage.  You still call down to the storage team and tell them you need disks. That takes hours, days, weeks.  Same for network capacity.  If you need a new LAN segment, a load balancer, or (heaven forbid) a firewall, you find yourself waiting days or weeks while it all gets sorted.

OpenStack helps us virtualize the storage and network resources in addition to the compute. Vendors are getting on board with this quickly: they realize that if they don’t build OpenStack interfaces for their technology, they’ll be left behind.  OpenStack is defining the model for next-generation datacenters.  You’ll still have network teams and server teams and storage teams, but resources will no longer be allocated in such a serial manner. And when resources are allocated, they’ll be allocated on demand.

That’s where OpenStack really shines

In and of itself, simply virtualizing compute, network & storage is exciting, but that’s not the whole story.

OpenStack provides a common API for managing those compute, network, and storage resources.  That means an application written to be OpenStack-aware can call up more storage, create networks, or bring in more servers programmatically.

Alternatively, OpenStack can marshal those resources on demand for your application, even if that application isn’t OpenStack-aware.  It’s a programmable, self-scaling infrastructure. That’s pretty exciting.
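To show the shape of that idea, here is a toy Python sketch. This is not the real OpenStack API; the class and method names are invented for illustration. The point is that one interface fronts compute, network, and storage, so an application can request all three programmatically:

```python
class ToyCloud:
    """Toy stand-in for a unified cloud API (not real OpenStack).

    One object can allocate compute, network, and storage on demand,
    which is the essence of the common-API idea.
    """

    def __init__(self):
        self.servers = []
        self.networks = []
        self.volumes = []

    def create_server(self, name, flavor):
        # In a real cloud this would boot a VM; here we just record it.
        self.servers.append({"name": name, "flavor": flavor})

    def create_network(self, cidr):
        self.networks.append({"cidr": cidr})

    def create_volume(self, size_gb):
        self.volumes.append({"size_gb": size_gb})


# A "cloud-aware" application provisions its own resources:
cloud = ToyCloud()
cloud.create_server("web-2", flavor="m1.small")   # bring in more compute
cloud.create_volume(100)                          # call up more storage
cloud.create_network("10.0.1.0/24")               # create a network
print(len(cloud.servers), len(cloud.volumes), len(cloud.networks))  # 1 1 1
```

No ticket, no phone call: the application asked for disks and a network the same way it asked for a server, through one API.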

It changes how we think about deploying applications

Every good Systems Administrator knows what this is:

Fig 1

For those who don’t, it’s a crude drawing of an application infrastructure with some sort of presentation layer, a logic layer, and a database.  It’s got a load balancer and redundancy, so that if we lose one of the app or web servers, we can still serve our customers.  It’s one of the most basic architectures, and there’s something similar running in just about every datacenter.

It’s a simple but effective way of building infrastructure.  It’s also expensive.  You never build just one of these; you’ve got one for Dev, one for Test, one for Production.  Then you build one per application, or maybe one per group of applications.  Before you know it, you’ve got half a dozen web servers, a whole farm of application servers, and lots of database servers.  It is the genesis of server sprawl, but it serves a purpose: it’s highly available, and you can scale it out just by adding more web, app, or database resources.

But what if we just build one…

Nobody wants to be the guy who recommends our next architecture.  It is clearly flawed: it has zero redundancy.  And how will we ever scale?  Sure, we can add more resources later, but shouldn’t you build for that from the outset?

Fig 2

Not anymore. With OpenStack, you deploy this.  OpenStack manages the load balancer, OpenStack manages the servers, and OpenStack manages the storage.  If OpenStack sees that your web server is getting overloaded, it can spin up another web server, update the load balancer configuration, and start routing traffic appropriately.  The same goes for your app server and, eventually, for your data as well.
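The scale-out decision itself is simple. Here is a minimal Python sketch of it, with invented names and a made-up utilization number; in a real cloud, OpenStack's telemetry and orchestration services handle this loop for you:

```python
def scale_web_tier(web_servers, lb_members, utilization, threshold=0.80):
    """Toy autoscaling step: if the web tier is running hot,
    add a server and register it with the load balancer."""
    if utilization > threshold:
        new_server = f"web-{len(web_servers) + 1}"
        web_servers.append(new_server)   # spin up another web server
        lb_members.append(new_server)    # update the LB configuration
    return web_servers, lb_members


servers = ["web-1"]
members = ["web-1"]
# 92% utilization is over the 80% threshold, so we scale out.
scale_web_tier(servers, members, utilization=0.92)
print(servers)   # ['web-1', 'web-2']
```

The same check run at 50% utilization would leave the tier alone, which is how the infrastructure can shrink back down when demand drops.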

The Sahara project (there we go with those fancy names again) provides data processing (aka Hadoop) as a service, while Trove provides Database as a Service, with scalable and reliable database provisioning built into the OpenStack infrastructure.

What this means is that your infrastructure, when managed by OpenStack, can go from figure 2 to figure 1 and back again, simply based on customer demand and utilization.  That’s the beauty of the Heat orchestration and templating system built into OpenStack.
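For the curious, a Heat template is plain YAML. This fragment is a minimal sketch of an autoscaling web tier; the image, flavor, and network names are placeholders for whatever exists in your cloud:

```yaml
heat_template_version: 2013-05-23
description: Minimal sketch of an autoscaling web tier

resources:
  web_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1            # figure 2: a single server when idle
      max_size: 4            # figure 1: scaled out under load
      resource:
        type: OS::Nova::Server
        properties:
          image: my-web-image      # placeholder
          flavor: m1.small         # placeholder
          networks:
            - network: my-app-net  # placeholder
```

Hand a template like this to Heat, and the group grows and shrinks between its min and max sizes as demand changes.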

No phone calls, no tickets, no waiting.  You now have a truly virtualized infrastructure responding to the needs of applications and the demands of customers.

Next up, IaaS: The more things change, the more they stay the same.