Inferencing chips are also used in data centers to accelerate deep learning models. Google has built its own chip, the TPU (Tensor Processing Unit), and companies such as KnuEdge, Wave Computing, and Graphcore are developing inferencing chips of their own.
IBM is working on a different model for its inferencing hardware and software, Gupta said. He did not provide any further details.
Software is the glue that binds IBM's AI hardware into a cohesive package. IBM has forked the open-source Caffe deep-learning framework to run on its Power hardware, and it also supports other frameworks such as TensorFlow and Theano, along with libraries like OpenBLAS.
The frameworks are sandboxes in which users can build a computer model that learns to solve a particular problem, tweaking its parameters along the way. Caffe is widely used for image recognition.
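To make the "sandbox" idea concrete, here is a minimal sketch of the kind of training loop such frameworks automate. The toy model, a single weight fit by gradient descent, is purely illustrative and is not IBM's or Caffe's code; the hyperparameters (learning rate, epochs) stand in for the parameters a user tweaks.

```python
def train(data, learning_rate=0.1, epochs=100):
    """Fit y = w * x to (x, y) pairs with plain gradient descent."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad
    return w

if __name__ == "__main__":
    samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    # Tweaking the learning rate changes how quickly (or whether) the
    # model converges on the underlying relationship, here y = 2x.
    w = train(samples, learning_rate=0.05, epochs=200)
    print(round(w, 2))
```

Frameworks like Caffe or TensorFlow wrap this same loop around models with millions of weights, handling the gradient computation and hardware acceleration automatically.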