
How DevOps can accelerate the cloud application lifecycle

Bernard Golden | Feb. 12, 2014
In the past, infrastructure deployment and application updates both slowed the development lifecycle. Now that cloud computing lets organizations provision resources in minutes, not months, it's time to alter the application lifecycle accordingly. DevOps can help -- but only if it extends beyond 'culture change' to actually achieve continuous deployment.

Continuous Deployment Needs Automation, Integration, Aligned Incentives
To achieve continuous deployment, four things are necessary.

A streamlined, automated process. If the process includes milestone reviews by a committee, halts while someone approves further progress, or any other kind of manual oversight of routine releases, then nothing is going to make the system move faster. That the occasional change shouldn't go through automatically, whether due to complexity or some other reason, doesn't prove that every change needs manual intervention. It just means you need a process that supports intervention as required while enabling automatic pass-through for routine changes.
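As a minimal sketch of that pass-through-with-exceptions idea, the release gate below auto-approves routine changes and holds exceptional ones for a human. The change fields (`touches_schema`, `files_changed`, `tests_passed`) and the thresholds are illustrative assumptions, not a prescription.

```python
# Hypothetical release gate: routine changes deploy automatically,
# exceptional ones are held for manual review. All criteria here
# are illustrative assumptions.

def release_decision(change):
    """Return 'auto-deploy' for routine changes, 'manual-review' otherwise."""
    routine = (
        not change.get("touches_schema", False)   # no database migrations
        and change["files_changed"] <= 20         # small blast radius
        and change["tests_passed"]                # full test suite is green
    )
    return "auto-deploy" if routine else "manual-review"

# A small, green bug fix sails through; a schema change waits for a human.
print(release_decision({"files_changed": 3, "tests_passed": True}))
print(release_decision({"files_changed": 3, "tests_passed": True,
                        "touches_schema": True}))
```

The point of the sketch is that the manual path exists as an exception branch inside the automated flow, not as a default checkpoint every change must clear.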

An integrated, end-to-end toolchain. Obviously, unless there are supporting tools underlying the process, the automated workflow is pointless. Essentially, the output of the tools in one phase has to be handed off to tools in the next phase, which means the tools have to work together. Today, your choices for a DevOps toolchain tend to be either expensive proprietary offerings from large vendors or homegrown, stitched-together collections of open source components. Each approach has its strengths as well as drawbacks. Given the interest in the area, and its increasing importance, one can be sure additional flexible, affordable solutions will come to the marketplace shortly.

Shared application artifacts. Those joined-up tools have to pass along artifacts that are common to all groups using them. Re-creating executables and configurations, even from an automated runbook, creates additional work, presents the opportunity for mistakes to creep in and is bound to impede rapid functionality availability. It's far better to use a single set of artifacts and pass them among groups, adding and subtracting permissions to enforce organizational partitioning as appropriate.
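One common way to make "a single set of artifacts" concrete is to identify each artifact by a content hash, so every group can verify it holds the exact bits another group produced, while permissions vary per group. The permission sets below are illustrative assumptions.

```python
# Sketch of sharing one immutable artifact across groups instead of
# rebuilding it at each stage. The per-group permission sets are
# illustrative assumptions.

import hashlib

def artifact_id(content: bytes) -> str:
    """A content hash gives every group the same identity for the artifact."""
    return hashlib.sha256(content).hexdigest()[:12]

# One build produces one identity, shared by all downstream groups.
binary = b"compiled application bytes"
aid = artifact_id(binary)

# Permissions are added and subtracted per group; the bits never change.
permissions = {
    "dev": {"read", "promote"},
    "qa":  {"read", "sign-off"},
    "ops": {"read", "deploy"},
}

# Any group can confirm it has the very same artifact dev produced.
assert artifact_id(binary) == aid
```

Because the identity derives from the content, a rebuilt binary would get a different ID, which is precisely the class of silent drift the shared-artifact approach is meant to prevent.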

Aligned metrics and incentives across the organization. With incentives, we return to the topic of culture. As indicated, I don't really believe that putting different groups together, in the hope that personal relationships will make friction and mistakes vanish, accomplishes much. For groups to work together productively, they must share a single set of metrics and incentives. If one group is measured by how frequently it releases updated functionality, and another group is measured by operational stability, there will be conflict over what gets done.

You need to implement a single set of measures so that everyone can pull together. Don't think that this can be accomplished by creating the union of all previous metrics, as combining a requirement for frequent updates with the need for operational stability will just create conflict within one group rather than across multiple groups.

The obvious objection to these recommendations is that they're hard and will cause a great deal of disruption. That's absolutely true - and if we were living in the IT world of a decade ago, there would be no need and no point in implementing such disruptive measures.

