Power efficiency matters, too: AMD says an HBM stack delivers 35GB/s of memory bandwidth per watt consumed, versus GDDR5's 10.5GB/s per watt. Power efficiency isn't just about mobile applications, either. By using less power to drive the memory, and thus generating less heat, you can spend the savings elsewhere, say, on higher clocks for the GPU core.
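To put those efficiency figures in perspective, here's a rough back-of-the-envelope sketch. The 512GB/s target bandwidth is a hypothetical high-end figure chosen for illustration, not a number from AMD; only the 35 and 10.5 GB/s-per-watt ratios come from the article.

```python
def memory_power_watts(bandwidth_gbps: float, efficiency_gbps_per_watt: float) -> float:
    """Estimate memory-subsystem power draw for a target bandwidth,
    given a bandwidth-per-watt efficiency figure."""
    return bandwidth_gbps / efficiency_gbps_per_watt

# Hypothetical target bandwidth for a high-end GPU (assumption, not from AMD).
TARGET_BANDWIDTH = 512.0  # GB/s

# Efficiency figures quoted in the article.
gddr5_power = memory_power_watts(TARGET_BANDWIDTH, 10.5)  # GDDR5: ~48.8 W
hbm_power = memory_power_watts(TARGET_BANDWIDTH, 35.0)    # HBM:   ~14.6 W

print(f"GDDR5: {gddr5_power:.1f} W, HBM: {hbm_power:.1f} W, "
      f"savings: {gddr5_power - hbm_power:.1f} W")
```

Roughly 34 watts saved at the same bandwidth is the kind of thermal headroom that, as Macri suggests, could instead be spent on the GPU core.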
There are several ways to stack the chips. AMD's approach to HBM is a "2.5D" technique that uses a passive interposer layer. This differs from a "3D" design method, in which the RAM chips are piled directly on top of the GPU itself. Macri said no one should be persuaded that this isn't a design in three dimensions.
"It is a true 3D design method," he said. "We're not just designing in X and Y any more, we're designing in X, Y and Z."
Macri didn't mince words when speaking of his chief competitor. Nvidia touted a chip-stacking technique for its Volta GPU as early as 2013, but with Volta now delayed, Nvidia's first GPU with stacked memory likely won't appear until 2016, when its Pascal GPU ships with a similar 2.5D stacking technique.
"Nvidia creates PowerPoints and talks in advance like they are the wonderful leader of everything," Macri said derisively. "While they're talking, we're working."
Macri said AMD has been the primary driver of HBM memory in the industry.
"We do internal development with partners, we then take that development to open standards bodies and we open it up to the world," he said. "This is how Nvidia got GDDR3 and how they got GDDR5. This is what AMD does. We truly believe building islands is not as good as building continents."
Macri is probably in a position to know: Besides being AMD's Chief Technology Officer, he has long been a chairman at JEDEC, the group that blesses memory standards for the industry. AMD did indeed beat Nvidia to graphics cards with GDDR3 and GDDR5, but new technology always raises concerns over yields.
Macri said HBM is new, but that doesn't mean people should assume yield problems. He wouldn't elaborate on yields from AMD's chief partner in the project, Hynix, but said AMD wouldn't adopt HBM for a consumer part if it didn't believe it could get enough of the RAM to build GPUs.
HBM vs. Hybrid Memory Cube: Fight? Not.
HBM's performance-to-power ratio is so appealing, Macri said, that he expects the new memory to be adopted well beyond big GPUs. We can expect HBM to appear on CPUs with integrated graphics, as well as in servers and workstations. That would appear to put HBM on a collision course with a similar memory technology Intel and Micron are working on, called Hybrid Memory Cube, or HMC. HMC is also an advanced stacked-memory design, but unlike AMD, Intel and Micron are trying to adopt it without the slow rule-by-committee of JEDEC.