
Microsoft to speed up Bing with FPGAs next year

Agam Shah | Aug. 13, 2014
It started off as an experiment, but Microsoft now wants to speed up Bing and return more accurate search results with the help of reconfigurable chips called FPGAs (field-programmable gate arrays) in its data centers.

One service could filter and rank results. A separate service running on the FPGAs could score how relevant each result is to a query; those scores are then sorted and the top results returned. The approach also allows more math and image-recognition services to be applied to the results.
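To make that flow concrete, here is a minimal sketch in Python of a score-then-sort pipeline. The function names and the toy scoring formula are assumptions for illustration only, not Microsoft's actual ranking service.

```python
# Illustrative sketch only: the scoring formula and names below are
# hypothetical, not Microsoft's pipeline. It mirrors the flow described
# above: filter candidate results, score them, sort, return the top hits.

def score(query_terms, document_terms):
    """Toy relevance score: fraction of query terms found in the document."""
    if not query_terms:
        return 0.0
    matches = sum(1 for term in query_terms if term in document_terms)
    return matches / len(query_terms)

def rank(query, documents, top_n=10):
    """Filter, score and sort candidate documents for a query."""
    query_terms = query.lower().split()
    scored = []
    for doc_id, text in documents.items():
        s = score(query_terms, set(text.lower().split()))
        if s > 0:                       # filtering step
            scored.append((s, doc_id))  # scoring step
    scored.sort(reverse=True)           # sorting step
    return [doc_id for _, doc_id in scored[:top_n]]

if __name__ == "__main__":
    docs = {
        "doc1": "programmable gate arrays in the data center",
        "doc2": "cooking with gas",
    }
    print(rank("programmable data center", docs))  # -> ['doc1']
```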

But as the number of FPGAs grows, reprogramming all of them could become a challenge.

Microsoft's data center administrators hated the idea of FPGAs in servers. There were many reasons the FPGA model did not fit well in data centers.

"First of all, it's a single point of failure and it really complicates the rack design, the thermals and maintainability. You have an FPGA box that's spitting out a ton of heat, all the other servers are at a different level and it becomes really hard to control," Putnam said.

It's also a security nightmare, as the FPGAs run different operating system images and software than conventional servers. FPGAs on the main network would also create latency problems, limiting the elasticity of a data center, Putnam said.

In 2011, Microsoft designed prototype boards with six Virtex-6 FPGAs and Intel Xeon CPUs. The plan was to put these boards into 1U, 2U and 4U rack-mount servers and slip them into racks across data centers. But that didn't work out well.

So when it reconsidered putting FPGAs in data centers, Microsoft set two design requirements: the FPGAs would go into dedicated servers that could also be used for tasks beyond processing Bing search, and the servers couldn't increase hardware failure rates or require network modifications.

Limitations were also set for Microsoft's server design team: the FPGAs had to cost less than 30 percent of the server and couldn't draw more than 10 percent of the system's overall power.
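As a back-of-the-envelope check of those limits, the snippet below tests whether a candidate accelerator fits the 30-percent-of-cost and 10-percent-of-power budgets. The dollar and wattage figures are made-up placeholders, not numbers from the article.

```python
# Hypothetical budget check for the two limits mentioned above.
# The server cost and power figures are placeholders, not real data.

SERVER_COST_USD = 4000.0   # assumed base server cost
SERVER_POWER_W = 250.0     # assumed total system power draw

MAX_COST_FRACTION = 0.30   # FPGA must be < 30% of server cost
MAX_POWER_FRACTION = 0.10  # FPGA must draw < 10% of system power

def fits_budget(fpga_cost_usd: float, fpga_power_w: float) -> bool:
    """Return True if the accelerator stays within both limits."""
    within_cost = fpga_cost_usd < MAX_COST_FRACTION * SERVER_COST_USD
    within_power = fpga_power_w < MAX_POWER_FRACTION * SERVER_POWER_W
    return within_cost and within_power

if __name__ == "__main__":
    print(fits_budget(1000.0, 20.0))  # True: under $1,200 and under 25 W
    print(fits_budget(1000.0, 40.0))  # False: 40 W exceeds the power budget
```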

And that led to the birth of the Microsoft Open Compute Server, a 1U-tall server with a PCI-Express slot that could squeeze in a single FPGA. The compact server had two eight-core Xeon CPUs, 64GB of DRAM, two solid-state drives, four hard drives and a 10-gigabit Ethernet port. The servers were used in the experiment earlier this year.

"Your only slot for adding accelerators was tiny, about 10 centimeters by 10 centimeters," Putnam said.

The FPGA accelerator board was an Altera Stratix V G5 D5 card with 8GB of DDR3 memory, 32MB of flash and eight-lane Mini-SAS connectors. It plugged into the PCI-Express slot, and the FPGA on each server had its own heatsink.

The CPUs and other components generated a lot of heat, and the FPGAs were getting baked by inlet air at 68 degrees Celsius (154.4 degrees Fahrenheit). "That's the air we're supposed to cool the [FPGA] with," Putnam remarked.

 
