The cost advantage controversy of cloud computing

One of the topics most associated with cloud computing is its cost advantages, or lack thereof. One way the topic gets discussed is “capex vs. opex,” a simple formulation, but one fraught with meaning.

At its simplest, capex vs. opex is about how compute resources are paid for by the consumer of those resources. For example, if one uses Amazon Web Services, payment is made at a highly granular level for the use of the resources, either by time (so much per server-hour) or by consumption (so much per gigabyte of storage per month). The consumer does not, however, own the assets that deliver those resources; Amazon owns the server and the storage machinery.
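As a rough illustration of that granularity, the monthly bill for a small deployment might be computed as follows; the rates below are placeholder assumptions for the sketch, not actual AWS prices:

```python
# Back-of-the-envelope cloud bill: pay only for what you use.
# Rates are illustrative placeholders, not actual AWS prices.
HOURLY_SERVER_RATE = 0.10   # dollars per server-hour (assumed)
STORAGE_RATE = 0.05         # dollars per GB-month (assumed)

server_hours = 2 * 730      # two servers running a full month (~730 hours each)
storage_gb = 500            # gigabytes stored for the month

bill = server_hours * HOURLY_SERVER_RATE + storage_gb * STORAGE_RATE
print(f"Monthly bill: ${bill:,.2f}")  # Monthly bill: $171.00
```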

From an accounting perspective, owning an asset is commonly considered a capital expenditure (thus the sobriquet capex). It requires payment for the entire asset and the cost becomes an entry on the company’s balance sheet, depreciated over some period of time.

By contrast, operating expenditure is a cost associated with operating the business over a short period, typically a year. All payments during this year count against the income statement and do not directly affect the balance sheet.

From an organizational perspective, the balance sheet is the bailiwick of the CFO, who typically screens all requests for asset expenditure very carefully, while operating expenditures are the province of business units, who are able to spend within their yearly budgets with greater freedom.

Summing this up: running an application and paying for its compute resources on an “as-used” basis means the costs run through the operating budget (i.e., they are operating expenditures, or opex). Running the same application on resources that have been purchased as an asset makes the cost of those resources a capital expenditure (capex), with the yearly depreciation becoming an operating expenditure.
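A minimal sketch of the two paths, assuming straight-line depreciation and illustrative figures (none of these numbers come from any vendor’s price list):

```python
# Comparing the two payment models for one application's compute.
# All figures are illustrative assumptions, not quoted prices.

# Capex path: buy the server and depreciate it straight-line.
server_cost = 6000            # purchase price, recorded on the balance sheet
depreciation_years = 5
yearly_depreciation = server_cost / depreciation_years  # hits the operating budget

# Opex path: rent equivalent capacity by the hour.
hourly_rate = 0.25            # assumed cloud price per server-hour
hours_used_per_year = 8760    # running 24x7 all year

yearly_opex = hours_used_per_year * hourly_rate

print(f"Capex path: ${yearly_depreciation:,.2f}/year in depreciation")  # $1,200.00
print(f"Opex path:  ${yearly_opex:,.2f}/year in usage charges")         # $2,190.00
```

Run around the clock, the rented capacity costs more per year than the depreciation charge; run only a few hundred hours a year, and the arithmetic flips. That swing is the heart of the controversy.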

It might seem obvious that the opex approach is preferable: after all, you pay only for what you use. By contrast, the capex approach means a fixed depreciation charge is assigned no matter how much use is made of the asset.

However, the comparison is made more complex by the fact that cloud service providers who charge on an as-used basis build a profit margin into their prices. An internal IT group adds no such margin; it charges only what its costs add up to. Depending on the use scenario of the individual application, paying a yearly depreciation fee may be more attractive than paying on a more granular basis. The logic can be seen in auto use: it’s commonly more economical to purchase a car for daily use in one’s own city, but far cheaper to rent one for a one- or two-day remote business trip.

There is an enormous amount of controversy about whether the capex or opex approach to cloud computing is less expensive. We’ve seen this in our own business — at one meeting, when the topic of using AWS as a deployment platform was raised, an operations manager stated flatly “you don’t want to do that, after two years you’ve bought a server.” Notwithstanding his crude financial evaluation (clearly not accounting for other costs like power and labor), his perspective was opex vs. capex — that the cost of paying for resources on a granular basis would be more expensive than making an asset purchase and depreciating it.

The move to private clouds has added to the complexity of this. Heretofore, most organizations worked on the basis of one application, one server, so the server’s entire depreciation was assigned to a single application, making the calculation of what the capex approach would cost relatively straightforward.

This became further complicated with the shift to virtualization, in which multiple applications share one server. Yearly depreciation now needs to be apportioned among multiple applications, and the apportionment becomes even more complex if one attempts anything beyond simply dividing the cost by the number of VMs on the machine. Assigning cost by an application’s percentage of total memory or processor time requires instrumentation and more sophisticated accounting methods, so most organizations just work on a rough basis: X dollars, Y VMs, each one costs X divided by Y.
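To make that concrete, here is a small sketch contrasting the rough per-VM split with a memory-weighted split; the VM names and all figures are invented for illustration:

```python
# Apportioning a virtualized server's yearly depreciation across VMs.
# VM names and numbers are invented for illustration.

yearly_depreciation = 1200.0
vm_memory_gb = {"app-a": 8, "app-b": 4, "app-c": 20}  # hypothetical VMs

# The rough method most organizations use: divide cost by VM count.
per_vm_flat = yearly_depreciation / len(vm_memory_gb)

# A weighted method: apportion by each VM's share of memory.
total_gb = sum(vm_memory_gb.values())
per_vm_weighted = {vm: yearly_depreciation * gb / total_gb
                   for vm, gb in vm_memory_gb.items()}

print(f"Flat split: ${per_vm_flat:.2f} per VM")          # $400.00 each
for vm, cost in per_vm_weighted.items():
    print(f"Weighted {vm}: ${cost:.2f}")                 # $300 / $150 / $750
```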

Today, though, organizations using compute resources don’t want to pay a flat fee. Their use may be transitory, spinning up resources for a short-term test or a short-lived business initiative, so why should they commit to a five-year depreciation schedule? Resource consumers expect to pay on an operating-expenditure basis; after all, that’s what’s out there in the market. They want to pay only for what they use, no matter who the provider is.

IT organizations are intrepidly preparing for this world, implementing private clouds and moving toward granular pricing of resources, a task made difficult, it must be admitted, by the fact that most IT organizations do not have accounting systems designed to support detailed cost tracking.

So it will be the best of all worlds — resource consumers getting granular, use-based costing, IT organizations providing private cloud capability with support for sophisticated cost assignment, and no provider profit motive imposing additional fees beyond base costs.

Or will it?

Here’s the thing — for every opex user there is a capex investor. For every user who delights in only paying for the resources used, there must be a provider who stands ready to provide resources and offer them on an as-needed basis — someone must own assets.

For that asset holder, a key variable in setting offering prices is utilization: what percentage of total capacity is being used. To return to that crude pricing formula, one measure of cloud utilization is the percentage of a server’s total available processing hours that are sold. The crucial factor is to sell sufficient hours (i.e., generate sufficient utilization) to pay for the asset.
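A back-of-the-envelope version of that calculation, with assumed figures, and ignoring power, labor, and margin, all of which raise the bar further:

```python
# Provider-side view: what utilization level pays for the asset?
# All figures are illustrative assumptions.

server_cost = 6000          # asset purchase price
useful_life_years = 3
hourly_price = 0.25         # assumed price charged per server-hour

hours_available = useful_life_years * 8760      # total sellable hours
hours_needed = server_cost / hourly_price       # hours that must be sold
breakeven_utilization = hours_needed / hours_available

print(f"Break-even utilization: {breakeven_utilization:.1%}")  # 91.3%
```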

This means that IT organizations need to become much more sophisticated about managing load and shaping use. This is typical of any capital-intensive industry: think of airlines and the sophisticated yield-management measures they implement.

I have heard some people assert that utilization won’t be much of a problem because most applications are not very volatile; that is, their resource use doesn’t vary much. Therefore, high utilization rates can be achieved in private clouds by building a cloud to support typical use plus some spare capacity to support occasional spikes in demand.

I think this misreads likely experience and extrapolates the past inappropriately. It underestimates what will happen as application groups absorb the capabilities of cloud computing. For one, now that highly variable loads can be supported, application groups will begin creating more applications of this type; heretofore, because it was extremely difficult to get sufficient resources for such applications, people didn’t even bother thinking about them. Now that a highly variable-load application is possible, people will start developing them.

A second way this perspective underestimates future outcomes is that it fails to anticipate behavior changes as organizations learn they can reduce costs by squeezing application capacity during low-demand periods. James Staten of Forrester characterizes this as “down and off”: cloud computing costs are reduced by finding ways to scale applications down or turn resources off. This cost reduction benefits users, but causes problems for providers.
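As a sketch of what “down and off” might look like in practice, the snippet below stops AWS instances tagged for off-hours shutdown. The schedule tag is a convention invented for this example (not an AWS feature), and configured credentials and region are assumed:

```python
# "Down and off" sketch: stop tagged instances outside business hours.
# Assumes AWS credentials are configured; the "schedule" tag is a
# made-up convention for this example, not an AWS feature.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances tagged for off-hours shutdown.
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:schedule", "Values": ["off-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instance_ids = [
    inst["InstanceId"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} instances for the night")
```

Run nightly from a scheduler, something like this turns idle capacity off for the consumer, and turns revenue off for the provider.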

Finally, the perspective that the cloud will be like past infrastructure use (mostly stable and low growth) fails to understand how price elasticity will affect demand. If cloud is cheaper, people will use more of it. This is why we at HyperStratus predict a coming explosion of applications. Again, this will affect utilization and capacity planning, a challenge I have discussed previously.
