
Breaking Moore's Law: How chipmakers are pushing PCs to blistering new levels

Brad Chacos | April 12, 2013
Processor performance increases may have flatlined over the past few years, but the biggest brains in the biz are working on cutting-edge tech to push PCs to blistering new speeds.

"We see that type of performance leap playing within today's power envelope, or you can greatly lower the power envelope and see the same performance [you have today]," Marinkovic says.

AMD has been inching toward a heterogeneous system architecture--the term for distributing a workload among several processors on a single chip--in its popular accelerated processing units, or APUs, including the one powering the upcoming PlayStation 4 gaming console. APUs combine traditional CPU cores and a large Radeon graphics core on the same die. The CPU and GPU in AMD's next-gen Kaveri APUs will share the same pool of memory, blurring the lines even further and offering even faster performance.
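To make the shared-memory idea concrete, here is a rough sketch of what it means for a programmer, written against the OpenCL API discussed later in this article. The function bodies, handles, and buffer names are illustrative assumptions, not AMD's actual Kaveri or hUMA code; the point is simply that a discrete graphics card needs its data copied across the PCIe bus, while a processor with a unified memory pool can let the CPU and GPU work on the same allocation.

```c
/*
 * Illustrative contrast between a discrete-GPU workflow and an
 * APU-style shared-memory workflow, using standard OpenCL buffer
 * calls. The context, queue, and data are assumed to be set up
 * elsewhere; this is a sketch, not AMD's implementation.
 */
#include <CL/cl.h>

void discrete_gpu_path(cl_context ctx, cl_command_queue queue,
                       float *data, size_t nbytes)
{
    /* Discrete GPU: allocate device memory, then copy the data
       across the bus before the GPU can touch it. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, nbytes, NULL, NULL);
    clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, nbytes, data, 0, NULL, NULL);
    /* ... run kernels, then copy results back with clEnqueueReadBuffer ... */
    clReleaseMemObject(buf);
}

void shared_memory_path(cl_context ctx, cl_command_queue queue,
                        float *data, size_t nbytes)
{
    /* Shared memory pool: hint the runtime to use the host allocation
       directly, so CPU and GPU see the same bytes and no bulk copy
       is required. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                nbytes, data, NULL);
    /* ... run kernels; results land in `data` after a map/unmap ... */
    clReleaseMemObject(buf);
}
```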

AMD isn't the only chip maker backing the idea of parallel computing. The company was a founding member of the HSA Foundation, a consortium of top chip makers--albeit sans Intel and Nvidia--working together to create standards that should make programming for parallel computing easier in the future.

It's a good thing that industry-leading companies provide the backbone of the HSA Foundation's vision, because for the grand heterogeneous future of parallel computing to come to fruition, applications need to be written specifically to take advantage of the hardware.

"Software is the key," Marinkovic admits. "When you look at APUs with [full HSA compatibility] and without full HSA, the software will have to change. But it will be a change for the better...Where we want to get to is code-once, and use everywhere. Once you have the HSA architecture across all these different HSA Foundation companies, hopefully you'll be able to write a program for a PC and run it on your smartphone or tablet with some small tweaks or compilation."

You can already find application programming interfaces (APIs) that enable parallel GPU computing, such as Nvidia's GeForce-centric CUDA platform, the DirectCompute API baked into DirectX 11 on Windows systems, and OpenCL, an open standard managed by the Khronos Group.
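For readers curious what GPU-computing code actually looks like, below is a minimal, self-contained OpenCL vector-addition program in C. It is a sketch rather than production code: it grabs the first GPU it finds, skips most error checking, and uses the OpenCL 1.x API that was current when these platforms emerged.

```c
/* Minimal OpenCL vector-add sketch: adds two 1,024-element arrays
 * on the GPU, one work-item per element. Error handling omitted. */
#include <stdio.h>
#include <CL/cl.h>

static const char *kSource =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);   /* one work-item per element */\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from source at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", NULL);

    /* Copy the inputs to the device; allocate space for the output. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    size_t global = N;   /* N work-items run across the GPU's cores */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[100] = %f (expected 300)\n", c[100]);
    return 0;
}
```

The kernel runs once per element, so the work is spread across the GPU's many cores instead of looping on a single CPU thread, exactly the kind of embarrassingly parallel job these APIs target. On a system with an OpenCL driver installed, something like `gcc vec_add.c -lOpenCL` builds the sketch.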

Support for hardware acceleration is picking up among software developers, though most of the programs that use it handle intensive graphics in some way. Internet Explorer and Flash are on the bandwagon, for instance. Just last week, Adobe announced it was adding OpenCL support to the Windows version of Premiere Pro. According to representatives, users with an AMD discrete graphics card or APU will be able to tap into that GPU acceleration to edit HD and 4K videos in real time, or export videos up to 4.3 times faster than with the base, nonaccelerated software.

"I don't think there's any ifs or buts about this," Marinkovic says. "Heterogeneous architectures are the way of the future."

 
