According to Chris Walker, vice president of Intel’s Client Computing Group, Intel had a problem: AAA games were selling and customers were interested in VR, but notebooks with the graphics horsepower to run them were thick and heavy. Buyers, meanwhile, were gravitating toward two-in-one PCs and ever-thinner thin-and-light designs. This was what Walker called Intel’s “portability obstacle”: How could it bring top-tier performance to notebooks that didn’t weigh a ton?
The answer, as it turned out, was the EMIB, or Embedded Multi-die Interconnect Bridge: a small sliver of silicon that bridges discrete logic dies within a single chip package. Intel had originally developed the EMIB as an alternative to what’s known as a silicon interposer, the “floor” or “foundation” of a multichip module. The problem with an interposer is that, like a floor, it needs to cover the entire space underneath the module, making it expensive to manufacture. EMIBs are more like small connectors that dip into the substrate. Intel found that they worked for its Altera programmable logic line as well as its more mainstream PC microprocessor designs. In fact, this is the first consumer use of the EMIB, executives said.
This slide, taken from an Intel presentation, shows how Intel believes its Embedded Multi-die Interconnect Bridge is more cost-effective for connecting chips than methods that use an interposer, and far higher in performance than Multi-Chip Package designs.
Intel’s EMIBs, though, offered another important advantage: modularity. Originally, Intel positioned EMIBs as a tool to mix and match chips built on different process technologies. When designing a programmable chip, adding in third-party logic cores is fairly common. Within integrated logic as complex as a microprocessor’s, though, it’s nearly unheard of. The EMIB allowed for a compromise, placing the CPU, GPU, and memory in close proximity without making them part of the same silicon die.
That paid off almost immediately. Intel’s still being cagey about all the benefits of the Core-Radeon module that the EMIB enabled, but the company revealed two. First, according to Walker, the module stripped out a whopping 1,900 square millimeters (2.9 sq. in.) compared to a more traditional motherboard, where the processor, discrete GPU, and memory are laid out next to one another. (Put another way, the EMIB layout consumes just half of the typical board space, Intel says.) Second, the module uses about half the memory power of a traditional design.
An example of the space savings Intel achieved by moving the CPU, Radeon GPU, and HBM inside the processor package.
Software, drivers are critical for managing power
That’s important, because heat naturally becomes more of an issue as notebooks get thinner. Intel added what it calls a new power-sharing framework to the module, consisting of a new connection between the processor, GPU, and memory. Just as the system manages the processing workload across the three components, the new framework dynamically balances power among them.
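Intel hasn’t disclosed how the framework actually divides power, but the general idea of sharing a fixed package-level budget among components can be sketched in a few lines. Everything below is illustrative: the function name, wattage figures, and proportional-split policy are assumptions for the sketch, not Intel’s design.

```python
# Illustrative sketch (NOT Intel's actual framework): a fixed
# package power budget is split among CPU, GPU, and memory in
# proportion to each component's demand, capped per component.

def share_power(budget_w, demands_w, caps_w):
    """Allocate budget_w watts across components proportionally to
    demand, never exceeding a component's hard cap."""
    # Each part can usefully take at most min(demand, cap) watts.
    want = {name: min(demands_w[name], caps_w[name]) for name in demands_w}
    total = sum(want.values())
    if total <= budget_w:
        return want  # budget covers every capped demand outright
    # Otherwise scale every request down by the same factor, so a
    # GPU-heavy frame can borrow headroom an idle CPU isn't using.
    scale = budget_w / total
    return {name: w * scale for name, w in want.items()}

# Hypothetical numbers: a 45 W shared budget, a GPU-bound workload.
alloc = share_power(
    budget_w=45.0,
    demands_w={"cpu": 20.0, "gpu": 40.0, "hbm": 4.0},
    caps_w={"cpu": 15.0, "gpu": 35.0, "hbm": 5.0},
)
```

Because the split is recomputed as demands change, a workload that shifts from CPU-bound to GPU-bound sees its watts follow it, which is the same end the article describes the hardware framework serving.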