Developing a cloud strategy? Better include a private cloud

Over the past year, I’ve noticed a significant shift in my conversations about cloud with senior IT managers.

A year ago, when discussing an organization's cloud strategy, I heard a consistent theme: "our focus is on creating a private cloud." Sometimes stated outright, sometimes left unsaid, and sometimes muttered under an executive's breath was the objective of curtailing developer use of public cloud computing. The target of that objective was most commonly Amazon Web Services.

I always felt that the characterization attributing the "public cloud problem" to rogue developers was misplaced. It is true that many developers embraced Amazon Web Services for its easy resource availability and low cost (a.k.a. its "agility"). However, to presume that the problem can be fixed by offering individuals a private cloud (or coercing them into using one) is to fundamentally misunderstand the phenomenon of "shadow IT," as it is sometimes pejoratively called.

Don’t Blame the Developers

Something much more profound than developer experimentation is behind the wholesale adoption of public cloud computing. While developers have flocked to Amazon Web Services and its counterparts, generally speaking, they are not doing it without organizational support. Most developers are embedded within groups that answer to business units, and these groups are responsible for ensuring that the business side of the house has the applications it needs to support its objectives.

The real dynamic of public cloud adoption isn't headstrong software engineers covertly conducting shadow IT on their own time. The real dynamic is the endorsement and sponsorship of those engineers' use of a public cloud by the application groups they work for. And that makes sense, doesn't it?

A developer can easily run up a bill of $500 per month using Amazon Web Services. Do you think he or she is going to absorb that cost just because using the public cloud service makes development more efficient? Of course not. Those fees are reimbursed by the developer's organization, if the organization isn't paying them directly. In other words, the executives within those organizations know about and approve of developers using public clouds.

Business Has Its Head in the (Public) Cloud

The fundamental truth underlying the explosive growth of public cloud computing is that it is fueled by development decisions driven by the sponsoring business units. Business units are under pressure to produce financial results, and, as the saying goes, time is money.

Compared to the traditional provisioning lifecycle, public cloud computing dramatically reduces resource availability timeframes. Given that contrast, business units have given their developers the green light to use public cloud computing.

The result is obvious. Central IT has been presented with a fait accompli. Significant applications have been developed in public cloud environments and the sponsoring organizations are unwilling to return to the established IT infrastructure arrangements.

Over the past year, it has become clear to IT management that this public cloud computing “fling” has become a serious commitment. Apps are now in production and cannot be disrupted by transferring them to an internal cloud. Moreover, business units are impressed with what they’re offered by public clouds — no lengthy lead times for resource availability, no need for upfront capital investment, and the list goes on.

Consequently, it has become increasingly clear that public cloud use is going to be a significant part of every company’s computing strategy. While many (if not most) companies will implement an internal private cloud, every company will need to incorporate public cloud computing into its operating environment.

CIOs Must Accept That Public Cloud Isn’t Drifting Away

As I noted at the beginning of this piece, this fact has led to a significant shift in IT cloud computing strategy. A year ago, most CIOs accepted public cloud computing, but their internal assumption was that eventually the dalliance would end with a return to centrally hosted facilities. Given that expectation, experimentation with Amazon Web Services was tolerated as a temporary aberration, but only until the internal cloud was ready.

Today, I’m seeing more and more senior IT executives recognize that the assumption that simply creating a private cloud would extend the traditional, wholly-owned-and-operated infrastructure into the cloud era is unworkable. The reality is that every IT organization is going to have an “and” strategy: Infrastructure will be a mix of private and public cloud computing. For most, that will mean some mix of private resources and Amazon Web Services.

This, of course, raises all kinds of challenges. For one, most internal IT organizations rely heavily on VMware virtualization. Amazon Web Services uses a customized Xen virtualization layer. While many cloud providers offer VMware-based solutions aimed at supporting a common public and private infrastructure, most analysts I’ve heard from argue that the uptake of VMware-based public cloud computing lags behind Amazon Web Services.

More crucially, most of the VMware-based public cloud providers are not targeted at application development, which makes them less satisfactory for business-unit purposes, since most decisions from the business units are based on individual application issues, rather than general infrastructure choices.

A second challenge grows out of that virtualization difference. If an organization's vision is that applications should be able to be deployed in either a public or private cloud environment (and that should be the vision), how can the organization achieve that? While there are virtual image import products and services, these are not satisfactory as a long-term solution. Applications are long-lived, and life-cycle management is crucial. Bit conversion of virtual machines is a one-time event, while application release is an ongoing process.

Clearly, a solution based on taking a VMware image and running it through a conversion process is inadequate. The solution must be capable of taking software components and creating an appropriate image for any target environment. The common approach of creating virtual machine templates does not support this solution.
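To make the distinction concrete, here is a minimal sketch of the component-based approach described above. Everything in it is illustrative: the component list, the base-image names, and the `build_recipe` function are invented for this example, not a real tool or API. The point is that one component manifest can generate a native image for each target environment, so every release rebuilds rather than bit-converts.

```python
# Hypothetical sketch: keep the application as a list of software
# components and generate a build plan for whichever target
# environment is needed, instead of converting a finished VMware
# image. All names here are illustrative.

COMPONENTS = ["openjdk", "tomcat", "billing-app.war"]

BASE_IMAGES = {
    "aws": "ami-base-linux",       # built on AWS's Xen-based platform
    "vmware": "vmtpl-base-linux",  # built as a vSphere template
}

def build_recipe(target):
    """Return an ordered build plan for the given target environment."""
    if target not in BASE_IMAGES:
        raise ValueError(f"unknown target: {target}")
    steps = [f"start-from {BASE_IMAGES[target]}"]
    # The same components install everywhere; only the base differs.
    steps += [f"install {c}" for c in COMPONENTS]
    steps.append(f"capture-image for {target}")
    return steps

print(build_recipe("aws")[0])     # start-from ami-base-linux
print(build_recipe("vmware")[0])  # start-from vmtpl-base-linux
```

Because the recipe is regenerated per release per target, life-cycle management stays with the components, not with any single converted image, which is exactly what a template-conversion workflow cannot provide.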

A third challenge reflects the facts of life for business units. One of the main reasons to use cloud computing is to support the need to more rapidly update applications. As business initiatives increasingly move to online offerings, the need to modify applications quickly to reflect offering updates, campaigns, new partnerships and other initiatives becomes crucial. The pace of application versioning must be much quicker than in the past and must support deployment choice.

Quite a set of challenges, no? Next week, I’ll offer some guidelines for addressing them. As a sneak peek, don’t be surprised if those guidelines include the term “DevOps.”

Bernard Golden is the vice president of Enterprise Solutions for enStratus Networks, a cloud management software company. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO on Twitter @CIOonline.
