
Where to start with containers and microservices

Pete Johnson, senior director of product evangelism at CliQr | Oct. 7, 2015
Lessons from Java and virtualization show the way to microservices adoption

This vendor-written piece has been edited to eliminate product promotion, but readers should note it will likely favour the submitter's approach.

Containers get a lot of headlines, and it’s clear why: for the daily lives of developers, containers are revolutionary. In much the same way that interpreted languages enable greater productivity than compiled ones, containers save developers precious time by letting them create a functional environment in seconds rather than the tens of minutes a virtual machine can take. Reducing that cycle time lets developers spend more time coding and less time waiting for something to happen.

Architecturally, the microservices that containers make far easier to build are equally groundbreaking. Breaking a problem into smaller pieces is always beneficial in unexpected ways, and containers offer a way of doing that at a scale not possible before.
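As a rough sketch of the granularity involved, a single-purpose service can be as small as one class. The example below uses only the JDK's built-in com.sun.net.httpserver package; the PriceService name and the /price endpoint are invented for illustration.

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// An illustrative single-purpose service: it answers one question (a price
// lookup) and nothing else, the kind of narrow responsibility a container
// can package, deploy, and scale independently of the rest of an application.
public class PriceService {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/price", exchange -> {
            byte[] body = "{\"sku\":\"demo\",\"price\":9.99}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // one process, one responsibility, one port
    }
}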

However, in an enterprise IT ecosystem, it’s rarely all about developer productivity. Similarly, architectural efficiency, while nice to have, is not always a top priority either. For large IT organizations with lots of brownfield applications and established operational processes, going all in on containers is not as quick and easy as it is for a born-on-the-cloud company with no legacy issues to deal with.

That doesn’t mean that containers don’t have a place in modernizing a huge IT shop. In fact, enterprise IT has seen this movie at least twice before, with two technologies that caused tectonic shifts, and found that mixed architectures were the way to bring in significant change at a pace that reduced risk. Those technologies were Java and virtualization.

Waiting for Java

Enterprise IT is typically focused on cost savings, and as such it’s loath to change. If you were a developer in a big IT shop in the early ’90s, the dominant language was C++. Client-server was slowly replacing the monolithic application to better take advantage of networking, and the big challenge of the day was writing your code so that it could run on any flavor of Unix. That last task wasn’t easy because the base operating system varied among HP-UX, AIX, and other Unix variants, each with myriad libraries of its own. It was common to have elaborate branching in makefiles and header files so that the code would compile correctly for each target operating system.

Enter Java. It took the burden of understanding each operating system’s complexity off developers and put that complexity into the Java virtual machine instead. Developers compiled their code into Java bytecode, which a JVM written for each operating system interpreted, translating those instructions into OS-specific library calls.
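A minimal illustration of that shift: the class below is compiled once into bytecode, and the same .class file then runs on any operating system that has a JVM, with no per-platform branching in the source.

// HelloPortable.java (illustrative): compile once with "javac HelloPortable.java",
// then run the resulting HelloPortable.class anywhere a JVM is installed.
public class HelloPortable {
    public static void main(String[] args) {
        // The JVM resolves this property against the host OS at runtime, so the
        // same bytecode reports "HP-UX", "AIX", "Linux", and so on.
        System.out.println("Hello from " + System.getProperty("os.name"));
    }
}

The platform-specific work that once lived in makefile branches now happens inside the JVM itself.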

It’s hard to overstate how revolutionary that idea was, or the impact it would have on developer productivity. The problem was that this was the era of bare-metal servers, and operations pros scoffed at the idea of introducing a layer of abstraction at runtime simply to improve the lives of developers. An application would live far more of its life in production than in development, so why cater to developer productivity? Eventually, Java would free developers to look at larger issues, including the service-oriented architecture improvements it promised to bring, but not yet.

 
