
How cloud computing puts adverse selection in its place

Bernard Golden | Oct. 31, 2013
For years, operations departments have used adverse selection principles to allocate resources, often deeming small projects unworthy of enterprise computing power. Today, though, the cloud makes computing so cheap that there's no reason to deny any project, no matter how small. Doing so will simply push users to the public cloud -- and beyond IT's control.

My last blog post used the phrase "policies vs. permissions" to address one of the thorniest aspects of cloud computing. Automation, especially as represented by user self-service, is one of the key underpinnings of cloud computing.

As I noted in a recent piece on agility, it's cloud computing's automation that makes agility economical. Absent automation, agility would be an unaffordable pipe dream.

I find fascinating the feedback I often get from readers and audiences. On this topic, it can essentially be boiled down to this: Automation is fine for simple applications, but when you get to "real" production applications, skilled tuning is necessary to ensure sufficient performance and response times. This can be understood as a statement that operations talent is still required, even in the world of cloud computing.

This feedback is often communicated in rather condescending terms, as though I'm unable to comprehend the weighty complexities that operations groups must deal with each day in their heroic efforts to keep the IT factory humming.

Actually, I'm pretty familiar with the skills required to tune the complicated admixture of infrastructure, hardware, middleware, application software and configurations that affects the performance and uptime of a complex application. During one part of my career, I worked at a database software company; we devoted enormous effort to helping customers wring sufficient performance from their applications. My group was responsible for the networking portion of the architecture, and we dealt with mind-numbing details of packet configurations and software stack performance analysis.

I'm never really sure how to respond to those who raise the issue of managing complex applications. They're undoubtedly correct. There's no question that some applications require skilled talent to diagnose and treat performance bottlenecks to assure optimum application health.

Operations Can't Put All Its Eggs in Big Baskets
The question, really, is this: Of a company's total application portfolio, what proportion is represented by such complex applications, and what proportion is represented by less complex applications whose needs can be fully satisfied by self-service, less capable computing environments?

This is a critical issue for those who assert that simple self-service environments are insufficient to address application needs. Many who criticize my advocacy of self-service seem to imply that things need to remain as they have traditionally been: application groups request resources, and operations groups manually provision those resources, making them available once the provisioning process is complete. This undoubtedly enables skilled operations personnel to perform custom configuration and tuning, thereby assuring applications can achieve optimum performance.

The problem with that approach is that, in the phrase of Clayton Christensen, it overserves those who don't require such complex capability. Someone who needs a virtual machine to perform a quick prototype of a website doesn't need someone to analyze database throughput and total network traffic.
