
Intel's data centre chief talks machine learning -- just don't ask about GPUs

James Niccolai | June 3, 2016
The company's Xeon Phi chip can accelerate AI workloads just like a GPU, Intel says.

Diane Bryant, the head of Intel's data center business, at the Computex trade show in Taipei on June 1, 2016 Credit: James Niccolai

If you want to get under Diane Bryant’s skin these days, just ask her about GPUs.

The head of Intel’s powerful data center group was at Computex in Taipei this week, in part to explain how the company's latest Xeon Phi processor is a good fit for machine learning.

Machine learning is the process by which companies like Google and Facebook train software to get better at AI tasks such as computer vision and natural language understanding. It’s key to improving all kinds of online services: Google said recently that it's rethinking everything it does around machine learning.

It requires a massive amount of computing power, and Bryant says the 72 cores and strong floating point performance of Intel’s new ‘Knights Landing’ Xeon Phi, released six months ago, give it an excellent performance-per-watt-per-dollar ratio for training machine learning algorithms.

“It’s a big opportunity, and there will be a hockey stick where every business will be using machine learning,” she said in an interview.

The challenge for Intel is that the processors most widely used for machine learning today are GPUs like those from Nvidia and AMD.

“I’m not aware that any of the Super Seven have been using Xeon Phi to train their neural networks,” said industry analyst Patrick Moorhead of Moor Insights & Strategy, referring to the biggest customers driving machine learning – Google, Facebook, Amazon, Microsoft, Alibaba, Baidu and Tencent.

Bryant, who is otherwise affable, grew mildly exasperated when asked how Intel can compete in this market without a GPU. The general-purpose GPU, or GPGPU, is just another type of accelerator, she said, and not one that’s uniquely suited to machine learning.

“We refer to Knights Landing as a coprocessor, but it’s an accelerator for floating point operations, and that’s what a GPGPU is as well,” she said.

She concedes that Nvidia gained an early lead in the market for accelerated HPC workloads when it positioned its GPUs for that task several years ago. But since the release of the first Xeon Phi in 2014, she says, Intel now has 33 percent of the market for HPC workloads that use a floating point accelerator.

“So we’ve won share against Nvidia, and we’ll continue to win share,” she said.

Intel’s share of the machine learning business may be much smaller, but Bryant is quick to note that the market is still young.

“Less than 1 percent of all the servers that shipped last year were applied to machine learning, so to hear [Nvidia is] beating us in a market that barely exists yet makes me a little crazy,” she said.

