Amazon cloud outage was triggered by configuration error

Amazon has released a detailed postmortem and mea culpa about the partial outage of its cloud services platform last week and identified the culprit: a configuration error made during a network upgrade.

During this configuration change, a traffic shift “was executed incorrectly,” Amazon said, noting that traffic that should have gone to a primary network was routed to a lower-capacity one instead. The error occurred at 12:47 p.m. on April 21 and led to a partial outage that lingered through last weekend.

The outage sent a number of prominent Web sites offline, including Quora, Foursquare and Reddit, and renewed an industry-wide debate over the maturity of cloud services.

Amazon posted updates, short and bulletin-like, throughout the outage, but what it offered in its postmortem is entirely different. The nearly 5,700-word document includes a detailed look at what happened, an apology, a credit to affected customers, as well as a commitment to improve its customer communications.

Amazon didn’t say explicitly whether it was human error that touched off the event, but it hinted at that possibility when it wrote that “we will audit our change process and increase the automation to prevent this mistake from happening in the future.”

The initial mistake, and the increase in network load that followed, exposed a cascading series of issues, including a “re-mirroring storm” in which storage systems continuously searched for space to re-mirror their data.
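To make the dynamic concrete, the following is a minimal, hypothetical sketch in Python of how a re-mirroring storm can develop: when many storage nodes lose contact with their replicas at once, they all hunt for spare capacity at the same moment, exhaust it, and then keep retrying, which adds load without making progress. The node counts, slot counts and failure fraction below are invented for illustration and do not reflect Amazon’s actual EBS implementation.

```python
import random

# Hypothetical cluster parameters, chosen only for illustration.
TOTAL_NODES = 1000        # storage nodes in the cluster
FREE_SLOTS = 100          # spare capacity available for new mirrors
FAILURE_FRACTION = 0.4    # fraction of nodes that lose their replica at once

random.seed(1)

# Nodes that lost their replica all start searching for space in the same instant.
stranded = [n for n in range(TOTAL_NODES) if random.random() < FAILURE_FRACTION]

free_slots = FREE_SLOTS
remirrored = 0
stuck = 0

for node in stranded:
    if free_slots > 0:
        free_slots -= 1   # node claims a spare slot and re-mirrors successfully
        remirrored += 1
    else:
        stuck += 1        # node keeps retrying, adding load but making no progress

print(f"{len(stranded)} nodes lost their replica")
print(f"{remirrored} re-mirrored successfully, {stuck} stuck retrying (the 'storm')")
```

In this toy model, the spare capacity is consumed almost immediately and the remaining stranded nodes simply keep retrying, which is the cascading behaviour Amazon’s postmortem describes.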

Amazon also said it will work to ensure that the software and services it builds can survive failures.

Matt Stevens, the CTO of AppNeta, a network performance management company and an Amazon cloud user, praised Amazon’s postmortem for its transparency. “As a technical architect, I thought it was actually amazing how deep they went into it,” said Stevens, adding that he wished the company had offered more detail about the initial network change that started the problem.

On the broader question of reliability, Stevens said: “How does anybody who runs their own private data centre know how it’s going to hold up until you have a massive issue?”

Jim Damoulakis, CTO of GlassHouse Technologies, an enterprise storage services provider, called it “a pretty thorough postmortem and I think for the most part they are being transparent about it.”

Damoulakis said that while Amazon will take steps to keep the problem from happening again — and to make its availability zones more robust — customers will ultimately be responsible for having a good disaster recovery plan.

“I think there is blame on both sides,” said Justin Alexander, who heads strategic research and development at Hyland Software, an enterprise content management software firm, referring to both Amazon and its customers.

“Clearly, Amazon needs to take accountability for their services. But at the same time there were a variety of customers who were using the EC2 platform that did not suffer any period of unavailability,” said Alexander, citing those customers’ disaster recovery plans.
