

Serverless computing’s future is now – and why you should care

By Peter Horadan, EVP of Engineering and CTO, Avalara | Feb. 2, 2017
Instead of allocating virtual machines in the cloud, you upload functions and let the IaaS service provider figure out how to deploy, run and scale those functions

Although vendor-written, this contributed piece does not advocate a position that is particular to the author’s employer and has been edited and approved by Network World editors.

Serverless computing, a disruptive application development paradigm that frees programmers from thinking about how their hardware will scale, is rapidly gaining momentum for event-driven programming. Organizations should begin exploring it now to see whether it can dramatically reduce costs while keeping applications running at peak performance.

For the last decade, software teams have been on a march away from the practice of directly managing hardware in data centers toward renting compute capacity from Infrastructure as a Service (IaaS) vendors such as Amazon Web Services (AWS) and Microsoft Azure. It is rare that a software team creates unique value by managing hardware directly, so the opportunity to offload that undifferentiated heavy lifting to IaaS vendors has been welcomed by software teams worldwide.

The first wave of moving to IaaS involved replicating data center practices in the cloud. For example, a team that had 10 machines in its data center might create 10 VMs in an IaaS and copy each server to the cloud one by one. This worked well enough, but it didn’t take long for the industry to realize that IaaS is not just a way to offload hardware management. Instead, it is a fundamentally different way to build applications, offering far greater opportunities.

Serverless computing is the next step in this journey. With serverless computing, rather than allocating virtual machines and deploying code to them, the software team just uploads functions and lets the IaaS vendor figure out how to deploy and run those functions. The IaaS provider is also responsible for scaling the infrastructure so functions perform as expected no matter how frequently they are called. All the software team has to worry about is writing the code and uploading it to the IaaS vendor.
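To make the model concrete, here is a minimal sketch of what "just uploading a function" looks like. The `handler(event, context)` signature follows the AWS Lambda Python convention; the local invocation at the end is for illustration only, since in production the provider invokes the handler for each incoming event.

```python
# A minimal serverless function, following the AWS Lambda Python
# handler convention: the platform calls handler(event, context)
# for each incoming request; the team provisions no servers.

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime
    # metadata supplied by the platform (unused in this sketch).
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation for illustration; in a real deployment the IaaS
# vendor calls the handler and scales instances automatically.
print(handler({"name": "Avalara"}, None))
```

Note there is no web server, port, or process lifecycle in the code: all of that is the provider's responsibility, which is precisely the point of the paradigm.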

The promise of serverless computing is to allow teams to entirely stop thinking about the machinery the code runs on: how many machines are needed at peak times, whether those machines have been patched, whether the machines have the right security settings, and so on. Instead, the team just focuses on making the code great, while the IaaS vendor is responsible for running it at scale.

As a practical example, consider an application that allows users to upload photographs for automatic redeye removal. If the team manages its own hardware and over-provisions servers for the application, and relatively few photos are uploaded, the servers sit mostly idle, a significant waste of resources. If the team under-provisions, users experience significant delays during peak usage. Auto-scaling services exist, but they take extra effort to configure and manage. Serverless computing eliminates all of these concerns.
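The redeye example above can be sketched as an event-driven function. The event shape below loosely follows an S3-style upload notification, and `remove_redeye` is a hypothetical placeholder for the actual image processing; the platform would invoke the handler once per upload, however many uploads arrive.

```python
# Sketch of the redeye-removal example as an event-driven
# serverless function triggered by a storage upload notification.

def remove_redeye(image_bytes):
    # Hypothetical placeholder: real redeye removal would use an
    # image-processing library here.
    return image_bytes

def on_photo_uploaded(event, context):
    # The event may batch several upload records; process each one.
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In a real deployment the function would download the object,
        # run remove_redeye on it, and write the result back to storage.
        results.append(f"processed {key} from {bucket}")
    return results

# Illustrative local call with a sample upload notification.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "photos"}, "object": {"key": "pic1.jpg"}}}
    ]
}
print(on_photo_uploaded(sample_event, None))
```

Because the provider runs one handler invocation per event, the idle-versus-overloaded tradeoff disappears: zero uploads cost nothing, and a burst of uploads simply produces a burst of invocations.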

 

