High-performance chips typically focus on double-precision performance for more accurate calculations, but Knights Mill is designed differently. The chip's cores focus on low-precision calculations, which can be strung together to form approximations that help the chip make a decision. These low-precision calculations enable powerful, power-efficient neural network clusters.
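To illustrate the trade-off the article describes, here is a small sketch (not Intel's implementation, just generic NumPy code) showing how weights stored in half precision (float16) yield a result very close to the full double-precision answer:

```python
import numpy as np

# Illustrative only: quantize "weights" to half precision (float16),
# then compare a dot product against the full float64 result.
rng = np.random.default_rng(42)
weights = rng.uniform(0.0, 1.0, size=1024)  # hypothetical model weights
inputs = rng.uniform(0.0, 1.0, size=1024)   # hypothetical activations

exact = float(np.dot(weights, inputs))  # full double precision
approx = float(np.dot(weights.astype(np.float16).astype(np.float64), inputs))

# The low-precision version is an approximation, but its relative error
# is tiny -- acceptable for many machine-learning workloads, and the
# smaller values cost less memory bandwidth and power.
rel_err = abs(exact - approx) / abs(exact)
print(f"exact={exact:.6f} approx={approx:.6f} rel_err={rel_err:.2e}")
```

The point is not the specific numbers but the principle: machine-learning inference tolerates small rounding errors, so trading precision for throughput and power efficiency is a good bargain.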
The Knights Mill design also delivers more floating-point performance, which is important in machine learning, Waxman said.
Intel is advancing its AI roadmap at a frantic pace, and Knights Mill is a leap forward, Waxman said.
Many machine learning frameworks are being used in data centers. Beyond its homegrown software stack, Intel could make Xeon Phi compatible with frameworks like the open-source Caffe and Google's TensorFlow.
Intel has also shown a willingness to collaborate: the company is working with Baidu to run Baidu's "Deep Speech" speech recognition technology on its Xeon Phi platform, Waxman said.