

How Docker can transform your development teams

Matthew Heusser | Sept. 1, 2015
Docker’s capability to speed up software testing should make it a no-brainer for any development team. Here’s why.


Waiting for the right build has long been a problem with test environments, and differences between development, test and production have let defects escape into production. Virtual machines address these problems by giving each environment its own copy of the system, but they can be slow and consume gigabytes of disk space.

Enter Docker, a lightweight, fast container-based virtualization tool for Linux.

The opportunity Docker presents 

First, anyone on the technical staff can create a test environment on the local machine in a few seconds. The new process hooks into the existing operating system, so it does not need to “boot.” With a previous build stored locally, Docker is smart enough to load only the difference between the two builds.
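To make that concrete, here is a minimal sketch of spinning up and tearing down a disposable test environment by driving the Docker command line from Python. The image name, tag, port and test URL are illustrative assumptions, not details from any particular project:

    import subprocess

    def run(cmd):
        # Echo and execute a Docker CLI command, failing loudly on error.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Build the image. Unchanged layers are reused from the local cache,
    # so only the difference from the previous build is actually rebuilt.
    run(["docker", "build", "-t", "myapp:test", "."])

    # Start a throwaway container. It shares the host kernel, so there is
    # no guest operating system to boot; the process is up in seconds.
    run(["docker", "run", "-d", "--rm", "--name", "myapp-test",
         "-p", "8080:80", "myapp:test"])

    # ... point the test suite at http://localhost:8080 ...

    # Tear the environment down as quickly as it came up.
    run(["docker", "stop", "myapp-test"])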

This kind of simplicity is common for teams that adopt Docker; if the architecture extends to staging and production, those pushes can be just as simple.

Another slick feature is the capability to create an entirely new virtual infrastructure for your server farm, perhaps a dozen containers, called the “green” build. Any final regression testing can occur in green, which is a complete copy of production. When the testing is done, the deploy script flips the servers, so green is now serving production traffic. The previous build, the “blue” build, can stick around in case you need to roll back. That’s called blue/green deployment, and it’s possible with several different technologies.

Docker just makes it easy. 
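A rough sketch of that flip on a single Docker host might look like the following. The image tag, port numbers and smoke_test() helper are illustrative assumptions, and a real deployment would usually switch traffic at a load balancer or reverse proxy rather than republishing ports:

    import subprocess
    import urllib.request

    def sh(cmd):
        subprocess.run(cmd, check=True)

    def smoke_test(url):
        # Hypothetical final regression check against the green stack.
        try:
            return urllib.request.urlopen(url, timeout=5).status == 200
        except OSError:
            return False

    # Bring up the new "green" build alongside the live "blue" one.
    sh(["docker", "run", "-d", "--name", "green", "-p", "8081:80",
        "myapp:candidate"])

    if smoke_test("http://localhost:8081/health"):
        # Flip: stop blue and publish the green image on the production
        # port. Blue's stopped container sticks around for rollback.
        sh(["docker", "stop", "blue"])
        sh(["docker", "run", "-d", "--name", "green-live", "-p", "80:80",
            "myapp:candidate"])
    else:
        # The candidate failed; discard green. Blue never stopped serving.
        sh(["docker", "stop", "green"])
        sh(["docker", "rm", "green"])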

Why Docker? 

Where Windows-based software compiles to a single installer, web-based software has a different deliverable: the build running on a server. Classic release management for websites involves creating three or four layers: development, test, production and sometimes a staging environment. The strategy calls for at least one server per layer, along with a set of promotion rules: when the software is ready for the next stage, the build is deployed to the server at that level.

Virtual machines changed all that, allowing a single physical server to host as many virtual servers as the team has members. Each branch can be tested separately, then merged into the mainline for final testing, without spending tens of thousands of dollars on new hardware. Having a virtual machine apiece also makes it possible for a developer to debug a production problem on one machine while a tester re-tests a patch to production on a second. Meanwhile, one tester checks for regressions in the release about to go out, another five test features for the next release, and five developers work on new features in new branches.

The problem with virtual machines is size and speed. Each VM contains an entire guest operating system, and creating one means allocating gigabytes of space, installing that operating system, then installing the "build" onto it. Even worse, the guest operating system runs in application space on your computer, an operating system running inside the host operating system. The boot/install process for a virtual machine can take anywhere from several minutes to an hour, which is just enough to interrupt flow. Technical staff can typically host only one or two virtual machines on a desktop without a serious loss of speed, and getting virtual machines created on the network on demand is an entire "private cloud computing" project.

 
