Virtualization is key to delivering cloud services via GPUs, but the VGX improvements may be ahead of their time, said Jim McGregor, principal analyst at Tirias Research.
"This is overkill for what people need and not everyone will make use of the resource," McGregor said.
Nvidia is offering a new set of server products and graphics boards under the brand name Grid as the company connects GPUs to the growing number of virtualization and cloud deployments. Nvidia offers the Grid Visual Computing Appliance (VCA), which does server-side processing of multimedia and other applications for cloud-based delivery to virtual desktops on thin clients, PCs or tablets. The company has also partnered with server makers IBM and Dell to offer GPU-rich Grid servers, and Cisco will begin shipping its VGX Grid server, the UCS C240 M3, this month.
But GPUs are already becoming more practical in servers as gaming moves online and more applications are written with parallel programming tools such as OpenCL and CUDA, McGregor said.
Servers handle many different types of workloads, and GPUs still require CPUs to function in distributed computing environments; instructions to the GPU are funnelled through the CPU.
"Using [GPUs] as a processor architecture in the cloud is no different than using a [CPU] or custom processor," McGregor said.
Nvidia and AMD are designing chips and establishing open standards that make GPUs a more accessible resource. The AMD-led HSA (Heterogeneous System Architecture) Foundation has introduced hUMA (heterogeneous Uniform Memory Access), a uniform memory architecture that will make different types of memory accessible to all processors. Nvidia's next graphics processor, called Maxwell and due next year, will also pool CPU and GPU memory.
Technologies like VGX make GPUs more relevant in server environments, McGregor said.
"It has a solid road map," McGregor said of the technology.