HP and Intel push HPC into new markets

James Niccolai | July 14, 2015
Hewlett-Packard is using new processor and network technologies from Intel to build server systems that aim to expand high-performance computing into new markets, including big data workloads at large enterprises.

HP expects to deploy the new Intel technologies up and down its HPC line. For its high-end Apollo 6000 and 8000 systems, it will use an Omni-Path "switch blade" that sits inside the server enclosure to maximize performance. For the Apollo 2000 at the low end, it will probably offer an external top-of-rack switch, though it is still weighing that decision, said Bill Mannel, HP's vice president and general manager for HPC and big data.

One emerging area that can benefit from HPC is deep learning, a machine learning technique used by companies such as Google and Facebook for tasks like image recognition and natural language processing.
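What makes such workloads a natural fit for HPC hardware is that deep learning is dominated by dense linear algebra. As a purely illustrative sketch, with made-up layer sizes and weights rather than anything from HP or Intel, a single fully connected neural-network layer reduces to a matrix-vector multiply followed by a nonlinearity:

    /* Illustrative only: one fully connected layer, out = relu(W*in + b).
       Real deep-learning workloads chain many such layers over far larger
       matrices, which is where the HPC demand comes from. */
    #include <stdio.h>

    #define IN  4               /* hypothetical input width  */
    #define OUT 3               /* hypothetical output width */

    static double relu(double x) { return x > 0.0 ? x : 0.0; }

    int main(void) {
        double w[OUT][IN] = {{ 0.5, -0.2,  0.1,  0.7},
                             { 0.3,  0.8, -0.5,  0.2},
                             {-0.1,  0.4,  0.6, -0.3}};
        double b[OUT]  = {0.1, -0.2, 0.05};
        double in[IN]  = {1.0, 0.5, -1.0, 2.0};

        /* The core operation: a dense matrix-vector multiply. */
        for (int i = 0; i < OUT; i++) {
            double acc = b[i];
            for (int j = 0; j < IN; j++)
                acc += w[i][j] * in[j];
            printf("out[%d] = %f\n", i, relu(acc));
        }
        return 0;
    }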

Other industries are starting to see value in deep learning, according to Mannel: large retailers are using it to identify patterns in customer buying behavior, and police departments can use it for facial recognition.

Hospitals are also using deep learning for "precision medicine," he said, in which a patient's genome is analyzed to identify more precise treatments and therapies. "It's big data, but it's also about performance. You want to get results quickly so you can start treatments quickly."

One challenge is that few businesses have expertise with HPC clusters, so Intel and HP are opening a new center in Houston where potential customers can work with them to figure out which applications are suitable for the new architecture and build proof-of-concept systems. The Texas center will complement an existing one in Grenoble, France.

Software also needs to be ported to take advantage of Xeon Phi's many cores. Intel didn't do a great job of providing tools for that with earlier versions of the chip, admitted Charlie Wuischpard, general manager of Intel's HPC Platform Group, but this time around it will offer an improved SDK with scripts, sample code and more, he said. Intel will say more about the SDK at its IDF conference in September.
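Porting for a many-core chip like Xeon Phi mostly means exposing enough parallelism for the hardware to spread across its cores. As a minimal sketch of the kind of change involved, and not a reflection of Intel's actual SDK, an OpenMP directive is the classic way to let one loop scale across dozens of cores:

    /* Minimal many-core porting sketch (not Intel's SDK): OpenMP splits
       the loop across all available cores and combines the per-thread
       partial sums. Build with, e.g., cc -fopenmp. */
    #include <omp.h>
    #include <stdio.h>

    #define N 10000000

    static double a[N], b[N];

    int main(void) {
        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i] * b[i];    /* each core handles a chunk */

        printf("dot = %.1f on up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

The same directive fans the loop out across however many cores the chip exposes, which is exactly the kind of restructuring the improved tooling is meant to ease.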

The new HPC technologies are part of what Intel calls its "scalable systems framework." In April, Intel said the framework would be used to build a supercomputer called Aurora for the Department of Energy, due for delivery in 2018 with a peak performance of 180 petaflops.

"Rather than building just big, one-off systems, the idea is to take those technologies and learnings and make sure they're available across the entire industry," Wuischpard said.
