
VMware causes second outage while recovering from first

Jon Brodkin, Network World | May 2, 2011
VMware's attempt to recover from an outage in its brand-new cloud computing service inadvertently caused a second outage the next day, the company said.

VMware's new Cloud Foundry service -- which is still in beta -- suffered downtime over the course of two days last week, not long after the more highly publicized outage that hit Amazon's Elastic Compute Cloud.

Cloud Foundry, a platform-as-a-service offering for developers to build and host Web applications, was announced April 12 and suffered "service interruptions" on April 25 and April 26.

The first incident was caused by a power failure in the supply to a storage cabinet. Applications stayed online, but developers were unable to perform basic tasks such as logging in or creating new applications. The outage lasted nearly 10 hours and was resolved by that afternoon.

But the next day, VMware engineers accidentally caused a second outage while developing an early-detection plan meant to prevent a recurrence of the previous day's problem.

VMware official Dekel Tankel explained that a power outage like the one on April 25 is "something that can and will happen from time to time," and that VMware has to ensure its software, monitoring systems and operational practices are robust enough to keep such failures from taking customer systems offline.

With that in mind, VMware began developing "a full operational playbook for early detection, prevention and restoration" the very next day.

"At 8am [April 26] this effort was kicked off with explicit instructions to develop the playbook with a formal review by our operations and engineering team scheduled for noon," Tankel wrote. "This was to be a paper only, hands off the keyboards exercise until the playbook was reviewed. Unfortunately, at 10:15am PDT, one of the operations engineers developing the playbook touched the keyboard. This resulted in a full outage of the network infrastructure sitting in front of Cloud Foundry. This took out all load balancers, routers, and firewalls; caused a partial outage of portions of our internal DNS infrastructure; and resulted in a complete external loss of connectivity to Cloud Foundry."

The second-day outage was the more serious of the two.

"This was our first total outage, which is an event where we need to put up a maintenance page," Tankel continued. "During this outage, all applications and system components continued to run. However, with the front-end network down, we were the only ones that knew that the system was up. By 11:30 a.m. PDT the front end network was fully operational."

 
