Still, 1 percent of the worldwide server market is not trivial, and Intel will continue to evolve Xeon Phi to make it better at machine learning tasks.
It's not without customers in the area, though it can't point to household names. Bryant mentioned Viscovery, which is using Knights Landing to train algorithms for video search.
There are two aspects to machine learning, she notes: training the algorithmic models, which is the most compute-intensive part, and applying those models to real-world data in front-end applications, a step often called inferencing.
Intel’s FPGAs, gained through its acquisition of Altera, coupled with its regular Xeon processors, are well suited to the inferencing part, Bryant says, so Intel has both sides of the equation covered.
Still, it may have a hard time displacing GPUs at the hyperscale companies – not to mention Google’s TPU, or Tensor Processing Unit, a chip that company built specifically for machine learning.
Nvidia’s GPUs are harder for programmers to work with, Moorhead said, which could work in Intel’s favor, especially as regular businesses start to adopt machine learning. And Knights Landing is "self-booting," which means customers don't need to pair it with a regular Xeon to boot an OS.
But Intel’s newest Xeon Phi delivers floating-point performance of about 3 teraflops, Moorhead said, compared to more than 5 teraflops for Nvidia’s new GP100.
“You could beef up the floating point on Knights Landing and have something that looks like a GPU, but that’s not what it is right now,” he said.
Still, Intel is persistent, and it’s determined to succeed. “We’ll continue to advance the product line, and we will continue to take share,” Bryant said.