Service-level agreements no longer enough

Longtime readers know I’m a fanatic about service-level agreements. I regularly advise clients about SLA best practices, negotiation and enforcement strategies. And we talk often about how to develop service-level management and monitoring infrastructure that ensures that carriers live up to their promises.

But all of that is old school, suited to a world where service providers are essentially just bandwidth providers. What happens as providers move from selling bandwidth to selling applications? That is, when the service provider isn’t just pushing bits across a wire, but is delivering applications, storage and computing services from the cloud?

Several things change in this new scenario. First, SLAs necessarily evolve from simple infrastructure metrics (latency, jitter, packet loss) to application-level metrics (application availability, response time). Second, monitoring and measurement need to be far more comprehensive.
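To make that shift concrete, here’s a minimal sketch (in Python, with invented thresholds and sample data, not any provider’s actual SLA terms) of what computing application-level metrics might look like:

```python
# Illustrative sketch: computing application-level SLA metrics from
# sampled measurements. All numbers and thresholds are invented.

def availability_pct(probe_results):
    """Fraction of synthetic probes that got a successful response."""
    return 100.0 * sum(probe_results) / len(probe_results)

def percentile(values, pct):
    """Simple nearest-rank percentile (no external libraries)."""
    ordered = sorted(values)
    rank = max(0, round(pct / 100.0 * len(ordered)) - 1)
    return ordered[rank]

# One day's worth of hypothetical measurements.
probes = [True] * 1438 + [False] * 2          # two failed minute-probes
response_ms = [120, 135, 150, 480, 95, 210]   # sampled response times

print(f"Availability: {availability_pct(probes):.3f}%")        # 99.861%
print(f"95th-pct response: {percentile(response_ms, 95)} ms")

# An application-level SLA clause might then read: "99.9% availability
# and 95th-percentile response under 250 ms" -- a very different target
# from the latency/jitter/loss clauses of a circuit SLA.
```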

Let’s say you’re relying on a hosting provider to deliver a key application. You should be able to track server availability, application performance, storage availability and network performance, not just router uptime. And if the application is hosted on a virtualized server that can migrate between physical hosts, mapping a performance problem back to the underlying hardware can be mighty complex.
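Here’s a hedged sketch of what that multi-layer tracking might look like; the individual probe functions are stand-ins for real collectors (agents, SNMP polls, synthetic transactions) and simply return canned values:

```python
# Sketch of a composite, multi-layer availability check. The probe
# functions below are hypothetical placeholders, not a real API.

def check_server():   return {"layer": "server",  "up": True,  "detail": "host responding"}
def check_app():      return {"layer": "app",     "up": True,  "detail": "login transaction OK"}
def check_storage():  return {"layer": "storage", "up": False, "detail": "volume latency high"}
def check_network():  return {"layer": "network", "up": True,  "detail": "path within SLA"}

def composite_status(checks):
    """The service counts as 'up' only if every layer is healthy --
    router uptime alone says nothing about the application."""
    results = [check() for check in checks]
    healthy = all(r["up"] for r in results)
    return healthy, results

ok, results = composite_status([check_server, check_app, check_storage, check_network])
for r in results:
    print(f"{r['layer']:8s} {'UP  ' if r['up'] else 'DOWN'} {r['detail']}")
print("service SLA state:", "met" if ok else "breached")
```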

Finally, application services don’t come with the same built-in upper limits as network services. If you purchase T1 access to MPLS, you’ll never use more than a T1’s worth of bandwidth. But if you purchase access to an application, users may consume far more CPU cycles than anticipated, and service consumption (and costs) can skyrocket.
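A back-of-the-envelope sketch, with entirely made-up prices, of why that distinction shows up on the bill:

```python
# Sketch contrasting a capped circuit with metered application usage.
# All rates and fees below are invented purely for illustration.

T1_MBPS = 1.544                 # a T1 can never exceed this rate
T1_MONTHLY_COST = 400.0         # flat fee: cost is bounded by design

CPU_HOUR_RATE = 0.12            # metered: cost scales with consumption

def app_service_cost(cpu_hours_used):
    """No built-in ceiling: double the usage, double the bill."""
    return cpu_hours_used * CPU_HOUR_RATE

for hours in (1_000, 5_000, 25_000):   # anticipated vs. surge usage
    print(f"{hours:6d} CPU-hours -> ${app_service_cost(hours):>8.2f}")

# The circuit bill stays flat no matter what; the application bill is
# open-ended, which is why consumption caps and usage alerts belong
# in the contract.
print(f"T1 bill (any usage level) -> ${T1_MONTHLY_COST:>8.2f}")
```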

The upshot? As services evolve, SLA best practices need to change, too. A key component of SLA management in this new world is policy management and orchestration. Providers and their customers need to be able to manage and monitor a broad range of physical and virtual infrastructure, and seamlessly integrate that data into provisioning and billing systems. They also need the ability to perform trend analysis and predictive modeling, to anticipate surges (or decreases) in demand.
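As a toy illustration of that last piece, here’s a simple straight-line projection of usage; real predictive modeling would account for seasonality and business cycles, and the usage figures here are invented:

```python
# Minimal sketch of the trend-analysis idea: fit a line to monthly
# usage and extrapolate to anticipate a surge before it lands.

def fit_line(ys):
    """Ordinary least squares for y = a + b*x with x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

usage = [310, 335, 362, 390, 421, 455]   # CPU-hours/month, hypothetical
a, b = fit_line(usage)

for month_ahead in (1, 2, 3):
    x = len(usage) - 1 + month_ahead
    print(f"month +{month_ahead}: projected {a + b * x:.0f} CPU-hours")

# If the projection crosses a provisioning or budget threshold, the
# policy layer can trigger extra capacity (or an alert) ahead of time.
```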

I’m intrigued by a service offering being rolled out by BT Innovate (the arm of the British telco that includes the research labs, among other things). Called Total ICT Orchestration, the management solution provides dynamic allocation of end-to-end network and IT resources based on SLAs (and, ultimately, business priorities). It will also include a policy manager with a master control system that connects all the resource objects and provides an operational umbrella over the top. The service works in conjunction with BT’s managed network, storage and virtualized computing offerings, essentially enabling the carrier to provision, manage and deliver an application end-to-end.

Not all of this is new; plenty of providers are moving to a cloud computing model (storage in the cloud, computing in the cloud, applications in the cloud). What’s unique about BT’s approach, so far as I can tell, is that it focuses on a part of the problem that most services don’t: provisioning, management and policy. As IT departments move increasingly toward software-as-a-service, this is something they’ll need to keep in mind.
