The concept of containers isn’t new. Google has been using their own variety for years (they say that everything at Google runs in containers), Sun introduced a form of containers in Solaris in 2004/2005, and containers have even been available on Windows through products such as Parallels Virtuozzo.
What is new, however, is the shift to thinking of containers as being a developer (rather than Infrastructure) technology and, critically, the emergence of software such as Docker, which provides a single container format that can operate across multiple hardware and OS types.
Enthusiasm in the developer community is high, and both Docker and standardization efforts such as the Open Container Initiative continue to evolve at pace. Management tooling for large-scale container deployments (such as Kubernetes), however, is only just emerging for general use and has certainly not yet reached the maturity of the tooling available for server virtualization.
Does this mean that containers should be avoided for now?
No. Containers offer benefits to both Infrastructure (further workload consolidation, potentially with a reduction in OS license count) and development (a single deployable artifact that runs wherever it is put and starts instantly – especially important for those building dynamic scale-out applications). Containers are complementary to server virtualization and will not (and should not) displace it.
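To make the "single deployable artifact" point concrete, a container image is typically described in a short build file. The sketch below is a minimal, hypothetical Dockerfile for a Python web service; the application name (app.py) and port are illustrative assumptions, not taken from any specific product discussed above. The same image, once built, runs unchanged on a developer laptop, a test server, or a production virtualization platform.

```
# Hypothetical example: package one service and its dependencies
# into a single image that runs identically wherever it is deployed.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (app.py is a placeholder name).
COPY app.py .

EXPOSE 8080
CMD ["python", "app.py"]
```

Because the artifact carries its dependencies with it, the image built by `docker build -t myservice .` starts in seconds with `docker run -p 8080:8080 myservice` – the fast startup that matters for dynamic scale-out applications.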
What enterprises should be doing, however, is building partnerships across Infrastructure and development teams to pilot the use of containers on top of robust virtualization platforms. Start small, evolve the hosting platform, management tooling and, critically, the overall process together. Waiting just means that more proactive competitors will get the productivity, time to market and cost reduction advantages first.