Private clouds: more than just buzzword bingo

A friend pointed me to a blog in which Ronald Schmelzer, an analyst at ZapThink, asserts that the term “private cloud” is nothing more than empty marketing hype. Ironically, he proposes that we instead use the term “service-oriented cloud computing.” Maybe I’m being obtuse, but “service-oriented” anything is about the most buzz-laden term I’ve heard in the last five years. Seriously, have you read the SOA article on Wikipedia? It’s over 5,000 words long, chock-a-block with “principles of service-orientation” like “autonomy” and “composability”. What a joke!

Let me see how many words I need to define private clouds. It’s a centralized infrastructure supplied by a single organization’s IT department that provides virtualized compute resources on demand to users within that organization. Let’s see, that’s… 21 words. Not bad. But if you’re like me, you’re probably looking at that and thinking it still doesn’t make much sense, so let me give you a concrete example.

I develop cluster-based software. Testing requires at least three computers, sometimes more. Oh, and we develop simultaneously for seven x86-based platforms, counting all the variants of Windows and Linux that we support. In the bad old days, we had a server room crammed full of rack-mounted computers, each hosting one or two operating systems. This approach had lots of problems:

  • There was never enough to go around: even with over a hundred systems (for just 8 employees!), we were constantly fighting each other for resources, especially for the precious few multi-core systems we had.
  • It was inefficient: one dev might reserve a set of machines for stability or performance testing and leave them tied up for days on end. Often they were not literally using the hardware 24/7, but there was no safe way for anybody else to exploit the idle cycles.
  • It was limited and inflexible: even with dual-boot systems we didn’t have enough machines to represent the full range of operating systems we wanted to support. And honestly, although dual-boot works great on your desktop, it’s a major pain when applied to a cluster of computers in a locked server room at the other end of the building.
  • It was wild and hairy: even if you got the number and type of systems you needed, you couldn’t be sure the last guy to use them had left them in a usable state.

Sure, we could have “solved” this problem by throwing money at it: a bigger server room, more computers, a new A/C system to keep all those computers cool (yes, that was really a stumbling block for us for a long time), and of course, time to install and configure the extra systems. What a mess!

Today, we’ve replaced that pile of computers with a private cloud: a VMware Lab Manager instance managed by our IT department. When I want to run a test, I use Lab Manager to provision a cluster containing as many computers as I want, running whatever OS I choose. With a few clicks of the mouse, I have a cluster of virtual machines at my disposal, booted and initialized to a known good state. I can grow the cluster as needed, or create a second cluster to run tests of a different feature or on a different OS. Other devs and QA can simultaneously deploy their own clusters without worrying about stepping on my toes. When I’m done, I click another button and my virtual test cluster vanishes without a trace, leaving no wasted resources or idle computers behind.
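
To give you a feel for what that looks like from a developer’s chair, here’s a minimal sketch in Python. Everything in it is hypothetical: the labcloud module, the CloudClient class, and the calls on it are invented stand-ins for whatever provisioning interface your private cloud exposes (in our case it’s Lab Manager’s web UI), so treat it as an illustration of the workflow, not actual Lab Manager code.

```python
# Hypothetical sketch of the provision/test/tear-down cycle. The "labcloud"
# module and every call on it are invented stand-ins for your private cloud's
# provisioning API; this is NOT the real Lab Manager API.
from labcloud import CloudClient  # hypothetical client library

def run_cluster_test(os_template, node_count, test_command):
    cloud = CloudClient("https://private-cloud.example.com")

    # Provision N identical VMs from a known-good template (e.g. "rhel5-x64").
    cluster = cloud.create_cluster(template=os_template, nodes=node_count)
    try:
        cluster.wait_until_ready()        # all nodes booted to a known good state
        return cluster.run(test_command)  # run the test across the whole cluster
    finally:
        cluster.destroy()                 # tear it down: no idle hardware left behind

if __name__ == "__main__":
    run_cluster_test("rhel5-x64", node_count=5, test_command="./run_stability_tests")
```

The point isn’t the particular API; it’s that “a cluster of N machines running a given OS” becomes something you create and throw away in a script, rather than hardware you sign out and hand back.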

But of course, these benefits (scalability, flexibility, reusability, reproducibility, and so on) are the hallmark of cloud computing in general. What’s special about private clouds, then? For me it comes down to one thing: bandwidth. As you know, getting your data into and out of a public cloud is a major problem, with no clear solution. But getting my data into and out of a private cloud on my high-speed corporate LAN is trivial. It’s that simple. For our needs, the public cloud just doesn’t quite cut it, because we have too much data (in the form of VM images and test data) to be schlepping back and forth across the Internet. Moving the cloud inside the corporate network solves that problem. As an added bonus, we don’t have to worry about whether our intellectual property is secure on some external corporation’s servers. Our stuff never leaves our network.
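
If you want numbers, a back-of-the-envelope calculation makes the point. The figures below are illustrative assumptions, not measurements from our environment, but they’re in the right ballpark: a single 40 GB VM image crawls across a 10 Mbps Internet link for the better part of a workday, while the same image crosses a gigabit LAN in a few minutes.

```python
# Back-of-the-envelope transfer times for one VM image. The image size and
# link speeds are assumptions for illustration, not measured values.
def transfer_hours(size_gb, link_mbps):
    """Hours to move size_gb (decimal GB) over a link_mbps link at full utilization."""
    bits = size_gb * 8 * 1000**3
    return bits / (link_mbps * 1000**2) / 3600

image_gb = 40
print(f"10 Mbps WAN link: {transfer_hours(image_gb, 10):.1f} hours")           # ~8.9 hours
print(f"1 Gbps LAN:       {transfer_hours(image_gb, 1000) * 60:.1f} minutes")  # ~5.3 minutes
```

Multiply that by the number of images and test datasets we move around in a normal day, and the WAN number stops being an annoyance and becomes a blocker; on the LAN it’s noise.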

I think where Ronald and his pundit pals go wrong is in treating cloud computing as nothing more than a way to deliver services and applications from your company to your customers. As a developer, I can tell you that cloud computing also addresses a real need by supporting development and testing activities within a company, and private clouds make that technology practical for users with high bandwidth requirements.

Eric Melski

Eric Melski was part of the team that founded Electric Cloud and is now Chief Architect. Before Electric Cloud, he was a Software Engineer at Scriptics, Inc. and Interwoven. He holds a BS in Computer Science from the University of Wisconsin. Eric also writes about software development at http://blog.melski.net/.

One response to “Private clouds: more than just buzzword bingo”

  1. Zap-ZapThink says:

    Ronald Schmelzer is a pot calling the kettle black. He and his ZapThink partner are STILL selling the SOA snake oil because they can’t just flip their SOA community over to cloud as quickly as they want to.

    So, because cloud is the only threat to SOA, we can count on ZapThink to protect the fringes of their fiefdom.

    And that means they will have to take a shot at private clouds, because to them any server asset behind the firewall is…mine! mine! mine! (read: SOA)
