The term "microservices" might be relatively new, but the practice of breaking applications into single-function, discrete services has gone on for years -- long enough for best practices to emerge over time.
Recently I spoke with Owen Garrett, head of product for Nginx, an open source company whose Web server powers roughly one in five of the top one million websites. Nginx is also a major player in the Docker ecosystem, the rampant growth of which has been accelerated by the microservices trend. As Garrett notes, the Nginx Docker image is one of the most downloaded images on Docker Hub, the go-to repository for prepackaged Docker apps and components.
Garrett has had a unique opportunity to witness how Nginx is being used in microservices architecture across a broad range of customer deployments. When asked for a good illustration of microservices architecture in action, however, Garrett picks the familiar example of Amazon.com:
When you go to Amazon.com and type in "Nike shoe," over 170 individual applications get triggered potentially from that search -- everything from pricing to images of the shoes to reviews of the shoes to recommendations of other products you may want to purchase. Each of those were individual services or subfeatures, if you like, of an application or an overarching experience, and all those were connected via HTTP. Each might be built in different languages. Each of those may have different requirements in terms of the data store, in terms of scaling and automation. Those were the attributes that we saw that were the fundamental anatomy of microservices architecture.
Microservices architecture is a direct descendant of SOA (service-oriented architecture) and is often described in similar terms: The lightweight REST protocol replaces SOA's complex SOAP for APIs, and microservices tend to be more granular, but the general notion of assembling applications from services remains the same. Garrett, however, zeroes in on another important difference: SOA required "heavyweight middleware" such as ESBs (enterprise service buses), which microservices architecture rejects:
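The REST-versus-SOAP contrast is easiest to see side by side. The sketch below (a hypothetical "pricing" service; the service name, SKU, and URLs are invented for illustration) shows the same price lookup expressed as a SOAP request body and as a REST call:

```python
# The same lookup, two ways. SOAP wraps the call in an XML envelope
# that must be parsed and dispatched by middleware:
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetPrice xmlns="http://example.com/pricing">
      <Sku>NIKE-AIR-42</Sku>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

# The REST equivalent is a plain HTTP GET: the resource is named by the
# URL itself, and the response is typically a small JSON document.
rest_request = "GET /pricing/NIKE-AIR-42 HTTP/1.1"
```

The simplicity of the REST form is a large part of why individual microservices can be wired together over plain HTTP, without an ESB in the middle.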
What we're seeing in terms of traffic flow ... is that Layer 7 traffic, HTTP, should naturally and natively live within the application, not within the network. One of the things that Nginx has been able to deliver for developers is control. In the past, they had a bottleneck, where everything they needed to do in terms of bringing on these services or configurations had to go through a network engineer. With Nginx they can manage that traffic within their own application themselves. They can load balance; they can do A/B testing. They can send ghost traffic to a mobile version of the app to test its performance. All of that is now becoming part of this development environment and under the accountability and authority of the development team.
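The capabilities Garrett lists map directly onto standard Nginx configuration directives. The fragment below is an illustrative sketch, not a production config; the upstream names and hostnames are hypothetical. It load balances a reviews service across two instances, splits 10 percent of clients onto a "B" variant for A/B testing, and mirrors each request to a mobile backend as ghost (shadow) traffic whose responses are discarded:

```nginx
# Load balancing: requests are distributed across two instances
# of a hypothetical reviews service.
upstream reviews_service {
    server reviews-1.internal:8080;
    server reviews-2.internal:8080;
}

# A/B testing: deterministically assign 10% of clients (keyed by
# client address) to variant "b".
split_clients "${remote_addr}" $variant {
    10%     "b";
    *       "a";
}

server {
    listen 80;

    location /reviews/ {
        proxy_pass http://reviews_service;
        proxy_set_header X-Variant $variant;

        # Ghost traffic: mirror each request to the mobile version of
        # the app to test its performance. Mirror subrequest responses
        # are ignored, so users are unaffected.
        mirror /mobile-shadow;
    }

    location = /mobile-shadow {
        internal;
        proxy_pass http://mobile-app.internal:8080$request_uri;
    }
}
```

Because all of this lives in the application tier's own config, the development team can change routing, run experiments, and shadow traffic without filing a ticket with a network engineer.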