In 2004, Kirk Bresniker set out to make radical changes to computer architecture with The Machine, sketching the first concept design on a whiteboard.
Bresniker, now chief architect at Hewlett Packard Enterprise's HP Labs, wanted to build a system that could drive computing into the future, one that used cutting-edge technologies like memristors and photonics.
It's been an arduous journey, but HPE on Tuesday finally showed a prototype of The Machine at a lab in Fort Collins, Colorado.
It's not close to what the company envisioned when The Machine was first announced in 2014, but it follows the same principle of pushing computing into memory subsystems. The system breaks free of conventional PC and server architecture, in which memory is a bottleneck.
The standout feature of the mega server is its 160TB of memory, more than any single server today can boast and over three times the capacity of HPE's Superdome X.
The Machine runs 1,280 Cavium ARM CPU cores. The memory and the 40 32-core ARM chips -- spread across four Apollo 6000 enclosures -- are linked by a superfast fabric interconnect, a data superhighway into which multiple co-processors can be plugged.
The connections are designed in a mesh network so memory and processor nodes can easily communicate with each other. FPGAs provide the controller logic for the interconnect fabric.
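The idea behind that fabric can be pictured in a few lines of code. The sketch below is purely illustrative and uses hypothetical names (`MemoryPool`, `Node`); it models the memory-driven approach in which compute nodes address one large shared pool over the fabric rather than copying data between per-node memories.

```python
# Toy model of fabric-attached memory: nodes share one byte-addressable pool.
# All names here are hypothetical, not HPE APIs.

class MemoryPool:
    """One large shared pool, addressed by (offset, length)."""
    def __init__(self, size):
        self.buf = bytearray(size)

    def read(self, offset, length):
        return bytes(self.buf[offset:offset + length])

    def write(self, offset, data):
        self.buf[offset:offset + len(data)] = data

class Node:
    """A compute node: no private copy of the data, just a handle to the pool."""
    def __init__(self, name, pool):
        self.name = name
        self.pool = pool

    def process(self, offset, length):
        # Operate directly on bytes that live in the shared pool.
        return self.pool.read(offset, length).upper()

pool = MemoryPool(1 << 20)              # one shared 1 MiB pool
pool.write(0, b"memory-driven computing")

# Two nodes see the same bytes, with no copy between them.
a = Node("node-a", pool)
b = Node("node-b", pool)
result = a.process(0, 23)               # b would read identical data
```

The point of the model is that adding a node adds compute without partitioning the data, which is the property the mesh of memory and processor nodes is meant to provide.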
Computers will deal with huge amounts of information in the future and The Machine will be prepared for that influx, Bresniker said.
In a way, The Machine prepares computers for when Moore's Law runs out of steam, he said. It's becoming tougher to cram more transistors and features into chips, and The Machine is a distributed system that breaks up processing among multiple resources.
The Machine is also ready for futuristic technologies. Slots in The Machine allow the addition of photonics connectors, which will connect to the new fabric linking up storage, memory, and processors. The interconnect itself is an early implementation of the Gen-Z interconnect, which is backed by major hardware, chip, storage, and memory makers.
By improving the memory subsystem and storage in PCs and servers, HPE is giving computing a boost: as data is processed faster inside memory and storage, there is less pressure to speed up instructions per clock in CPUs.
In-memory computing has sped up applications like databases and ERP systems, and HPE is blowing up the design of such systems. There's also a move toward decoupling memory and storage from main servers, which speeds up computing and makes more efficient use of data center resources like cooling.
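The speedup in-memory computing offers lookup-heavy workloads can be demonstrated with a small, self-contained sketch. This is not HPE's technology, just a generic comparison: the same key-value data is queried from an on-disk SQLite table and from a plain in-memory dict; the table layout and row counts are arbitrary choices for the demo.

```python
# Illustrative only: compare lookups against on-disk storage vs. resident memory.
import os, sqlite3, tempfile, time

rows = [(i, f"value-{i}") for i in range(100_000)]

# On-disk store: every lookup goes through the storage and query stack.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
db.executemany("INSERT INTO kv VALUES (?, ?)", rows)
db.commit()

# In-memory store: the same data resident in RAM.
mem = dict(rows)

start = time.perf_counter()
for k in range(0, 100_000, 7):
    db.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
disk_t = time.perf_counter() - start

start = time.perf_counter()
for k in range(0, 100_000, 7):
    mem[k]
mem_t = time.perf_counter() - start

print(f"disk-backed: {disk_t:.4f}s   in-memory: {mem_t:.4f}s")
```

On typical hardware the in-memory loop finishes orders of magnitude faster, which is the gap that memory-centric designs like The Machine aim to exploit at data-center scale.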