The company has been showing off a prototype computer designed to emulate the way the brain makes calculations. It's based on a new architecture that could define how future computers work.
The brain can be seen as an extremely power-efficient biological computer. It takes in vast amounts of data about sights, sounds and smells, all of which it must process in parallel without slowing down.
HPE's ultimate goal is to create computer chips that can compute quickly and make decisions based on probabilities and associations, much like how the brain operates. The chips will use learning models and algorithms to deliver approximate results that can be used in decision-making.
HPE is testing its brain-like computing model through a prototype system with circuit boards and memory chips. The computer, shown for the first time at the recent Discover conference in Las Vegas, is designed to operate the way the brain's neurons and synapses work.
HPE's researchers are keen on bringing the parallelism of brain activity to wired circuitry.
"We're mimicking that architecture of parallel computation using our memristor technology and a specially designed architecture," said Cat Graves, scientific researcher at Hewlett Packard Labs.
Memristors are a new type of storage and memory that could help future AI systems understand data and make better use of it. That's different from today's SSDs and DRAM, which simply store data. As with synapses, learning and retention in memristor circuits are determined by the characteristics of current and data flow.
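The synapse analogy can be made concrete with a toy model. The sketch below is purely illustrative and does not reflect HPE's actual device physics: it treats a memristor as a device whose conductance drifts with each programming pulse, so past current flow is "remembered" as an analog weight that a later read-out reveals.

```python
# Illustrative toy model (not HPE's actual device physics): a memristor's
# conductance shifts with each programming pulse, so prior current flow
# is "remembered" as an analog weight -- loosely analogous to a synapse.

class ToyMemristor:
    def __init__(self, g_min=0.001, g_max=0.01):
        self.g_min, self.g_max = g_min, g_max  # conductance bounds (siemens)
        self.g = g_min                         # start in high-resistance state

    def pulse(self, delta):
        """Apply a programming pulse; positive strengthens, negative weakens."""
        self.g = min(self.g_max, max(self.g_min, self.g + delta))

    def read(self, voltage):
        """Ohm's-law read-out: the output current encodes the stored weight."""
        return voltage * self.g

m = ToyMemristor()
for _ in range(5):
    m.pulse(0.002)       # repeated pulses raise conductance (up to g_max)
print(m.read(0.5))       # read current reflects the accumulated "learning"
```

The class name and the linear-drift update rule are assumptions chosen for clarity; real memristor dynamics are nonlinear and device-specific.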
In brains, data is stored in specific neurons or cells, and calculations for tasks like image or speech recognition take place in those cells.
"This has the potential to be incredibly more power efficient, save a lot of time, reduce computing complexity and not be clogging up the bandwidth," Graves said.
In traditional computing, data has to travel from storage to the CPU and memory for processing, which wastes valuable computing resources. HPE's architecture is the opposite: computation takes place in the cells where data is stored, as in brains, and connections are established between cells, much like synapses.
With such a structure, calculations can be highly parallel, Graves said. These calculations, called "vector matrix multiplications," lie at the heart of computationally intensive algorithms and applications like image filtering, speech recognition and deep-learning systems.
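The operation Graves describes can be sketched digitally. In a memristor crossbar, the stored conductances form a matrix, and applying input voltages to the rows produces output currents that are the vector-matrix product in a single analog step. The NumPy snippet below (the matrix values are made up for illustration) shows the same computation that a crossbar would perform physically:

```python
import numpy as np

# In a crossbar, a matrix of conductances G multiplies an input voltage
# vector v in one analog step: per-column currents sum by Kirchhoff's
# current law. Digitally, the same operation is a vector-matrix multiply.

G = np.array([[0.2, 0.5, 0.1],
              [0.4, 0.1, 0.3]])   # stored "weights" (conductances)
v = np.array([1.0, 0.5])          # input signal ("row voltages")

i = v @ G                          # output currents = vector-matrix product
print(i)                           # -> [0.4  0.55 0.25]
```

This is the kernel that dominates deep-learning workloads, which is why computing it where the weights are stored, rather than shuttling them to a CPU, promises the power and bandwidth savings Graves mentions.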