For years, Microsoft has been delivering speedy and accurate Bing results with experimental servers called Project Catapult, which have now received an architectural upgrade.
The Catapult servers use reprogrammable chips called FPGAs (field programmable gate arrays), which are central to delivering better Bing results. FPGAs can quickly score, filter, rank, and measure the relevancy of text and image queries on Bing.
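The score-filter-rank flow described above can be sketched in plain Python. Everything here is illustrative: the feature names, weights, and threshold are invented for the example, and the real Bing pipeline is far more complex and runs these stages in FPGA hardware rather than software.

```python
# Illustrative sketch of a score -> filter -> rank pipeline.
# Feature names and weights are hypothetical, not Bing's actual model.

def score(doc_features, weights):
    """Compute a simple linear relevance score for one document."""
    return sum(weights[f] * v for f, v in doc_features.items() if f in weights)

def rank_results(docs, weights, threshold=0.5):
    """Score each candidate, filter out low-relevance hits, rank the rest."""
    scored = [(score(feats, weights), doc_id) for doc_id, feats in docs.items()]
    filtered = [(s, d) for s, d in scored if s >= threshold]  # filter stage
    return [d for s, d in sorted(filtered, reverse=True)]     # rank stage

weights = {"term_match": 1.0, "freshness": 0.3}
docs = {
    "doc_a": {"term_match": 0.9, "freshness": 0.5},
    "doc_b": {"term_match": 0.2, "freshness": 0.1},
    "doc_c": {"term_match": 0.7, "freshness": 0.9},
}
print(rank_results(docs, weights))  # → ['doc_a', 'doc_c']
```

The appeal of FPGAs is that fixed pipelines like this, applied to millions of candidates per query, can be laid out directly in hardware instead of executed instruction by instruction.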
Microsoft has now redesigned the original Catapult server, which it uses to investigate how FPGAs can speed up servers. The proposed Catapult v2 design breaks more freely from traditional data-center structures for machine learning and expands the role of FPGAs as accelerators.
Microsoft presented the Catapult v2 design for the first time earlier this month at the Scaled Machine Learning conference in Stanford, California.
Microsoft's data centers drive services like Cortana and Skype Translator, and the company is constantly looking to upgrade server performance. Microsoft is also working with Intel to implement silicon photonics, in which fiber optics will replace copper wires for faster communications between servers in data centers.
Catapult v2 expands the availability of FPGAs, allowing them to be hooked up to a larger number of computing resources. The FPGAs are connected to DRAM, the CPU, and network switches.
The FPGAs can accelerate local applications, or be a processing resource in large-scale, deep-learning models. Much like with Bing, the FPGAs can be involved in scoring results and training of deep-learning models.
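The two usage modes described above, an FPGA serving its local host or being pooled as a shared resource across the data center, can be sketched as follows. The class names, addresses, and round-robin dispatch are assumptions made for illustration; no public Catapult API is being reproduced here.

```python
# Hypothetical sketch of the two Catapult v2 usage modes: a board accelerating
# its local host, or boards pooled as network-reachable accelerators.
# All names and addresses are illustrative, not a real Catapult interface.

class FPGA:
    def __init__(self, location):
        self.location = location  # "local" or a network address

    def accelerate(self, task):
        return f"{task} accelerated on {self.location} FPGA"

class AcceleratorPool:
    """Pool of FPGAs reachable over the data-center network."""
    def __init__(self, fpgas):
        self.fpgas = list(fpgas)
        self.next = 0

    def dispatch(self, task):
        fpga = self.fpgas[self.next % len(self.fpgas)]  # round-robin dispatch
        self.next += 1
        return fpga.accelerate(task)

local = FPGA("local")
pool = AcceleratorPool([FPGA("10.0.0.1"), FPGA("10.0.0.2")])
print(local.accelerate("score-query"))
print(pool.dispatch("train-dnn-layer"))
```

The design point worth noting is the second mode: once FPGAs sit on the network rather than behind a single host, a large model can be spread across many boards instead of being confined to one server.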
The new model is a big improvement over the original Catapult design, in which FPGAs were limited to a smaller network within servers.
The Catapult v2 design can be used for cloud-based image recognition, natural language processing, and other tasks typically associated with machine learning.
Catapult v2 could also provide a blueprint for using FPGAs in machine learning installations. Many machine learning models are driven by GPUs, but the role of FPGAs is less clear. Baidu has also used FPGAs in data centers for deep learning.
FPGAs can deliver deep-learning results quickly, but they can be notoriously power hungry if not programmed correctly. They can be reprogrammed to execute specific tasks, but that specialization also makes them one-dimensional. GPUs are more flexible and can handle many kinds of calculations, but FPGAs can be faster at specific tasks.
Many large companies are showing interest in FPGAs. Intel earlier this year completed the US$16.7 billion acquisition of Altera, an FPGA vendor. Intel will put Altera FPGAs in cars, servers, robots, drones, and other devices.
Outside of Microsoft, a Catapult server used for machine learning is installed at the Texas Advanced Computing Center at the University of Texas at Austin. The system is small: 32 two-socket Intel Xeon servers, each packed with 64GB of memory and an Altera Stratix V D5 FPGA with its own 8GB of DDR3 memory.