Google, Amazon, and Facebook can magically recognize images and voices, thanks to superfast servers equipped with GPUs in their mega data centers.
But not all companies can afford that level of resources for deep learning, so they turn to cloud services, where servers in remote data centers do the heavy lifting.
Microsoft has made such cloud services trendy with Azure and is one of the few companies offering remote servers with GPUs, which excel at machine-learning tasks. But Azure uses older Nvidia GPUs, and it now has competition from Nimbix, which offers a cloud service with faster GPUs based on Nvidia's latest Pascal architecture.
After renting time on the cloud service, customers get a virtual machine with access to bare-metal server hardware. Nimbix offers customers cloud services that run on Tesla P100s -- which are among Nvidia's fastest GPUs -- in IBM Power S822LC servers.
There are other advantages to the cloud service. On the server side, a high-speed NVLink interconnect links the GPU to the CPU and other components at speeds 2.5 times faster than PCI-Express 3.0.
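That 2.5x figure can be sanity-checked with rough arithmetic. The PCIe 3.0 baseline below is an assumption, not from the article: a 16-lane PCIe 3.0 slot moves roughly 16 GB/s in one direction, and a single P100-era NVLink link is rated at 20 GB/s per direction.

```python
# Back-of-the-envelope check of the "2.5 times faster than PCI-Express 3.0"
# claim. Assumed baseline (not stated in the article): PCIe 3.0 x16 at
# roughly 16 GB/s in one direction.
pcie3_x16_gbps = 16.0

# Bandwidth implied by the article's 2.5x speed-up claim.
nvlink_gbps = 2.5 * pcie3_x16_gbps

print(f"Implied NVLink bandwidth: {nvlink_gbps:.0f} GB/s per direction")
```

The result, 40 GB/s, matches two 20 GB/s P100 NVLink links working together, which is consistent with the claimed speed-up.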
Microsoft's Azure offers cloud services with servers running Nvidia's Tesla K80, which is based on the older Kepler architecture, and Tesla M40, which is based on Maxwell, a generation behind Pascal.
Typically, machine-learning tasks use many servers, and faster CPUs and GPUs return faster results. The IBM Power S822LC servers have two Power8 CPUs, four Pascal GPUs, and half a terabyte of system memory.
The Nimbix cloud service is targeted more toward high-performance computing applications. It also supports the Caffe, Torch, and Theano deep-learning frameworks. Pricing starts at US$5 per hour.