
How to get more out of your virtualized and cloud environments

Brandon Butler | Dec. 2, 2014

After Dammions Darden arrived as the new senior systems administrator for the city of Garland, Texas, he knew that the 50 to 60 physical hosts serving the 234,000-person city outside Dallas were not running nearly as efficiently as they could be. Some had excess capacity; others were running way too hot.

Traditionally, if apps are slow and virtual machines need more memory, the easy answer is an unfortunate one: throw more hardware at the problem. But Darden wasn't satisfied with that. While roaming the expo floor at VMworld two years ago, he stumbled across VMTurbo, a company that specializes in analyzing virtual environments.

Using VMTurbo to gain insight into what was happening in the virtual realm, the city of Garland found it could ratchet up the VM load on some machines dramatically, going from 20 to 25 VMs per host to 40 to 45 on some servers. That consolidation freed up hosts that could be used to support other initiatives. The city, for example, was considering a virtual desktop environment but was worried about hardware costs, and suddenly Darden had servers to host the deployment.
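The article credits VMTurbo's analytics with surfacing that headroom. As a very rough illustration of the kind of per-host visibility involved (this is a sketch, not the VMTurbo product), the Python snippet below uses the open-source pyVmomi SDK to list how many virtual machines each vSphere host carries and its current CPU utilization. The vCenter address and credentials are placeholders.

    # Sketch: per-host VM count and CPU utilization via pyVmomi (not VMTurbo).
    # Assumes read-only access to a vCenter server; connection details are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def host_report(vcenter, user, password):
        ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
        si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                hw = host.summary.hardware
                total_mhz = hw.cpuMhz * hw.numCpuCores
                used_mhz = host.summary.quickStats.overallCpuUsage
                util = 100.0 * used_mhz / total_mhz if total_mhz else 0.0
                print("%s: %d VMs, CPU %.0f%%" % (host.name, len(host.vm), util))
        finally:
            Disconnect(si)

    if __name__ == "__main__":
        host_report("vcenter.example.local", "readonly@vsphere.local", "secret")

A report like this makes lightly loaded hosts obvious, which is the starting point for the kind of consolidation Garland carried out.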

It's a lot easier to just keep adding hardware, but there's a better way, Darden says.

Matt Eastwood, general manager of IDC's enterprise platform group, agrees. He estimates the typical enterprise server runs 10 to 12 virtual machines today at about 30% to 40% capacity. An optimal server utilization rate is usually around 60% to 70%, meaning many servers could easily handle twice the VM load. With an explosion of VMs on the horizon (IDC predicts the number of VMs will increase 130% over the next four years), some shops will buy more hardware to add capacity. But experts say smart organizations will first optimize their existing environments.
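Eastwood's figures amount to a simple back-of-envelope calculation. The short Python sketch below works it through, assuming per-VM load scales roughly linearly with host utilization; the numbers are the article's illustrative ranges, not a capacity-planning tool.

    # Back-of-envelope headroom estimate from the utilization figures cited above.
    # Assumes per-VM load scales roughly linearly with host utilization.
    def extra_vm_headroom(current_vms, current_util, target_util):
        total_supportable = current_vms * (target_util / current_util)
        return int(total_supportable) - current_vms

    # Typical host per the article: 10-12 VMs at 30-40% utilization,
    # against an optimal target of 60-70%.
    print(extra_vm_headroom(current_vms=12, current_util=0.35, target_util=0.65))
    # prints 10, i.e. the host could carry close to double its current VM load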

It's a people problem

How do organizations end up with less-than-optimal systems in the first place? "The things that cause inefficiency in servers, systems administration and cloud management generally have to do with manual, disconnected and fragmented processes more than hardware," says Mary Johnston Turner, an IDC analyst who specializes in management software. "The real way to improve IT operations is to adopt a more integrated, standardized and automated management process that covers the life cycle of services offered."

Doing so is not easy. Turner says going from an environment where resources are requested and delivered ad hoc to a fully automated, self-service system where users can request and consume what they need is a transformational shift. It can take a lot of time and effort to set up, but the payoff comes with a more well-oiled operation.

Automating the server lifecycle could save 10% to 15% in both hard and soft costs, meaning actual dollars saved and time freed up, she says. Increasing server utilization is great, but if it takes weeks for a business unit to get access to a VM it has requested, then it doesn't matter how efficiently the server is running.

 
