Gaming Chips Score in Data Centers

Robert Lemos | March 4, 2011
The same graphics processors that power today's games are finding new homes in data center servers. From seismic processing to financial modeling, today's calculation-intensive data center applications can enjoy dramatic speed-ups thanks to these graphics chips.

Such performance gains were made possible by a dramatic shift in graphics processor architecture. Where GPUs once accelerated a fixed pipeline of graphics computations, their designs have become far more general: each GPU now consists of a large parallel array of small processors, says Patricia Harrell, director of stream computing for AMD.

"If you look at a graphics processor 10 years ago, you had hardware that was doing something at a fixed step in the pipe line," she says. "Over the years, the hardware became more general purpose and flexible."

Academic research has benefited tremendously from GPU clusters. Kohlmeyer's team at Temple University now uses a six-GPU cluster in its data center to run many of its simulations up to 60 times faster than on its previous server, allowing the group to test new scenarios quickly.

Such small systems could help research groups immensely, NVidia's Gupta says.

"Computing today is a bottleneck for science," he says. "We are not providing enough computing today for scientists and it is slowing down innovation."

GPUs Can't Solve All Problems

For all their advantages in specialized data centers, GPUs will not necessarily solve run-of-the-mill large-scale problems. Crunching through large data sets is something at which GPUs excel, but problems with heavy data dependencies (and thus many branching instructions) map poorly onto their hardware.
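A hedged, kernel-only sketch (it would be launched the same way as the earlier example; the name and math functions are purely illustrative) shows why such branching hurts. NVIDIA GPUs schedule threads in groups of 32, and when threads in the same group take different sides of a data-dependent branch, the two paths run one after the other rather than at the same time.

// Illustrative kernel only: which path each thread takes depends on the
// data itself. Threads are scheduled in groups of 32 ("warps"); if a warp's
// threads split across the two branches, the paths execute serially and
// part of the hardware sits idle.
__global__ void branchy(int n, const float *in, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > 0.0f) {
        out[i] = sqrtf(in[i]);   // path A
    } else {
        out[i] = expf(in[i]);    // path B
    }
}

The deeper the chain of data-dependent decisions, the less of the chip's parallel hardware is doing useful work at any moment.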

"The challenge really is that people are used to serial processing, that they have solved the problem and written an algorithm to handle the data sequentially," says NVidia's Gupta.

Reframing problems to run on massively parallel systems is not easy. Programmers will have to remember techniques they were told to forget in the 1990s, says Kohlmeyer.

"It is not realistic to assume that all applications will run well on GPUs, especially not right now," he says. "To some degree you have to rewrite your software and rethink your strategies for efficient parallelism to get the most performance."

 
