Where to start with containers and microservices

Pete Johnson, senior director of product evangelism at CliQr | Oct. 7, 2015
Lessons from Java and virtualization show the way to microservices adoption

Countdown to virtualization

Amazingly, IT had a very similar argument when virtualization became available, roughly 10 years after Java. Before VMs, applications ran on bare-metal servers sized to handle estimated peaks that often never materialized. The penalties for guessing low on capacity needs (like getting fired) were far bigger than the penalties for guessing high (hardware sitting around underutilized), so IT staff guessed high.

Then IT budgets got tight after the dot-com-bubble years, and IT management noticed all these bare-metal servers sitting around at 25 percent utilization, and often much less. “Why not run multiple applications on the same physical hardware so we can save costs?” they would ask.

Well, when Application A is running on the same bare-metal server as Application B and experiences a memory leak, suddenly Application B suffers performance issues -- not because it was doing anything wrong but because it was colocated with a noisy neighbor. If only there were a technology that allowed applications to share the same hardware, but with some sort of boundary that could reduce the noisy-neighbor phenomenon, then utilization could be improved.

Enter virtualization. It solved that problem, but like Java before it, it introduced a level of abstraction between the application and the hardware that made operations pros wince. Like Java, virtualization boosted developer productivity: a VM could be made available in minutes instead of after weeks or months of waiting for new physical hardware. And like Java, virtualization was initially a hard sell despite obvious benefits -- in this case, the ability to automate the creation of VMs to replicate environments or add capacity to meet unexpected changes in demand.

A paradigm for containers

How did Java and virtualization eventually break through? The adoption problem for both was solved through mixed architectures. As a compromise, it was common for ops pros to let the Web tier run entirely in Java while keeping the physical load balancers under their control and running the databases on bare-metal hardware. Eventually, after seeing the huge flexibility that a Java-based model gave even them, ops pros relented on other layers of the architecture, and service-oriented architecture became a reality in most shops.

When virtualization came along, the same mixed-architecture pattern emerged. First, development and test workloads were virtualized. Then, like Java before it, virtualization became attractive for Web tiers, because users could rapidly add resources to what was typically the bottleneck at times of high demand. That solved operations’ most pressing problem at the Web layer while still keeping those same databases and load balancers protected.

Container adoption is following a similar paradigm in enterprise IT today. Ops pros need not start by deploying a three-tier Web application with all three tiers running in containers on a single VM. Instead, a sensible first step is to run the Web farm as multiple containers on a single VM and let those containers talk to the existing load-balancing and database farms, as sketched below. That gives operations a way to get comfortable with the container revolution we are now experiencing and to begin experimenting with deploying, scaling, and evolving microservices.
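
To make that first step concrete, here is a minimal sketch of what it might look like from the Docker command line on that single VM. The image name, database host, and port numbers are hypothetical placeholders rather than anything taken from this article:

    # Illustrative only: run the Web tier as two containers on one VM,
    # each pointed at the existing, non-containerized database farm.
    docker run -d --name web1 -p 8081:8080 \
      -e DB_HOST=db01.internal.example.com -e DB_PORT=5432 \
      mycompany/web-app:1.0

    docker run -d --name web2 -p 8082:8080 \
      -e DB_HOST=db01.internal.example.com -e DB_PORT=5432 \
      mycompany/web-app:1.0

The existing physical load balancer simply adds the VM’s published ports (8081 and 8082 in this sketch) to its pool, so the Web farm now runs in containers while the load balancers and databases stay exactly where they are. Scaling out is then a matter of starting more containers on additional ports or additional VMs and registering them with the load balancer.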

 
