
The CxO guide to microservices

Jim Highsmith, Executive Consultant; Neal Ford, Director, Software Architect, and Meme Wrangler, ThoughtWorks | Jan. 27, 2017
Experts from ThoughtWorks share why senior executives should pay attention to microservices.

Moving from a functional organisational structure to a product or service structure is a growing trend in agile enterprise transformation, and microservices support that move.

Last, because each service is isolated, the architecture is both fast and flexible. Changes to services can occur quickly because their scope is small, which gives developers powerful new capabilities. Once architects design a system of small, self-contained services, where applications consist of messages passed between deployed services, capabilities like multivariate testing become easy.

For example, the business may be unsure about the future direction of its website. The team designs two services with similar but distinct capabilities, deploys the different versions to different groups of users, and harvests the results to drive future development. Companies like Facebook analyse their users by running experiments based on this type of A/B testing.
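To make the mechanics concrete, here is a minimal sketch in Go of how a routing layer might split traffic between two deployed versions of a service. The service addresses, the X-User-ID header, and the even split are illustrative assumptions, not details from the article.

```go
// ab_router.go: a minimal A/B routing sketch. Each user is assigned a
// stable variant so repeat visits always hit the same service version.
package main

import (
	"hash/fnv"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// variantFor hashes the user ID into one of the deployed variants,
// giving a deterministic, roughly even split across users.
func variantFor(userID string, variants []*url.URL) *url.URL {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return variants[h.Sum32()%uint32(len(variants))]
}

func main() {
	// Hypothetical deployments of the two competing service versions.
	variantA, _ := url.Parse("http://checkout-a.internal:8080")
	variantB, _ := url.Parse("http://checkout-b.internal:8080")
	variants := []*url.URL{variantA, variantB}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		target := variantFor(r.Header.Get("X-User-ID"), variants)
		// Tag the response so analytics can attribute outcomes to a variant.
		w.Header().Set("X-Variant", target.Host)
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	log.Println("splitting traffic between variants A and B on :8000")
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```

Because the two variants are separate deployed services, the experiment needs no conditional logic inside either codebase; retiring the losing variant is simply a matter of decommissioning its deployment.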

Standardisation has always been a mantra of IT organisations as a way to reduce costs. Unfortunately, it also reduces flexibility: the more standardisation, the less flexibility. With microservices, architects and developers can design applications with a wider variety of technology stacks that closely mirror the complexity of the problem.

The microservices style of architecture is the opposite of the way many enterprises deploy software and allocate IT resources. One of the major goals of many architectural styles was to utilise shared resources (operating systems, database servers, application servers, and so on) effectively. Because the cost of those resources affects the bottom line, companies built software architectures to maximise the use of shared resources.

However, shared resources have a downside. No matter how effectively developers build isolation into these servers, contention for the resources always arises. Sometimes components interfere with one another through conflicting dependencies; sometimes two components fight over a resource such as CPU. Inevitably, shared components interact in undesirable ways.

Containers and Decoupling
In software delivery, there are two critical technical "environments": the development environment, where developers work, and the deployment environment, which is the domain of IT operations staff. Traditionally, moving code between these two environments has been fraught with technical errors, lengthy delays, and organisational miscommunication.

A few years ago, something interesting happened: Linux became Good Enough for most enterprises, and Linux variants were commercially free. But that alone wasn't quite enough to impact architecture.

Next, innovation in open source, coupled with agile development techniques, encouraged developers to create tools that automate many of the cumbersome manual chores in operations, a shift many refer to as the DevOps revolution.

This movement brought development teams and IT operations closer together through tools like Puppet, Chef, and Docker. Suddenly, Linux variants were also operationally free, giving developers the luxury of deploying each component to a pristine operating system with nothing else present to interfere. An entire class of potential errors disappeared because each component was decoupled from the others.

 
