How machine learning ate Microsoft

Mary Branscombe | Feb. 23, 2015
Yesterday's announcement of Azure Machine Learning offers the latest sign of Microsoft's deep machine learning expertise -- now available to developers everywhere.

The team is also working on Infer.NET, a probabilistic programming toolkit that uses machine-learned descriptions of the world to handle uncertainty, instead of requiring every question to have the usual hard yes/no answer of computers. That's what Clutter uses to triage your inbox. Researcher John Winn and his colleagues worked with the Exchange team for four years on different ideas until they found something that could "really add value and not be in some way creepy or attract the negativity you can sometimes get when you start applying machine learning to personal email."
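
Probabilistic programming treats a question like "is this message clutter?" as a belief to be updated with evidence rather than a hard yes/no. As a rough illustration of that idea only (this is plain Python, not Infer.NET, and the features and probabilities below are invented rather than taken from Microsoft's Clutter model), the sketch applies Bayes' rule to a few made-up signals:

```python
# Toy illustration of probabilistic reasoning about email triage.
# This is NOT Infer.NET or Microsoft's Clutter model; the features and
# numbers are invented purely to show a Bayesian update in miniature.

def bayes_update(prior: float, likelihood_if_clutter: float,
                 likelihood_if_not: float) -> float:
    """Return P(clutter | evidence) given a prior and two likelihoods."""
    numerator = likelihood_if_clutter * prior
    denominator = numerator + likelihood_if_not * (1.0 - prior)
    return numerator / denominator

# Start from a base rate: suppose 30% of this user's mail is low priority.
p_clutter = 0.30

# Each observed feature nudges the belief up or down instead of
# forcing an immediate yes/no decision.
evidence = [
    # (description, P(feature | clutter), P(feature | not clutter))
    ("bulk mailing header present", 0.70, 0.10),
    ("user never replied to this sender", 0.80, 0.40),
    ("message mentions user by first name", 0.20, 0.60),
]

for description, p_if_clutter, p_if_not in evidence:
    p_clutter = bayes_update(p_clutter, p_if_clutter, p_if_not)
    print(f"after '{description}': P(clutter) = {p_clutter:.2f}")

# The output is a graded probability, which a mail client could compare
# against a threshold and keep adjusting as the user gives feedback.
```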

"Then as computer vision started to become more influenced by machine learning, [we attracted] a large number of very significant luminaries in that field who had one foot in machine learning and one in vision, and people like Andrew Blake became very relevant," Lee explains. (Blake, who now runs the Cambridge lab, pioneered key probabilistic computer vision algorithms at Edinburgh and Oxford University.)

A few years later, when AT&T closed down Bell Labs, many of the researchers joined Microsoft. "People who were really thinking about neural networks and more statistical methods started to arrive on the scene," says Lee. "That was timed with the emergence of the relevance of big data; that whole wave has been tremendously influential, not only inside Microsoft but in the whole industry."

Then in 2009, shortly before Lee himself joined Microsoft, a project that he jokes he might easily have rejected as "an unwise attempt to use layered neural networks for speech processing" helped take machine learning out of the lab and into mainstream computing.

"I would have said it was completely ridiculous, and I would have been backed by all the top researchers," Lee admits. Instead, that work became the foundation for the multilayered "deep" neural networks that have transformed voice and image recognition across the industry.

Diving deeper
Voice recognition used to mean training your computer to learn your voice, or sticking to a few simple commands; now it means you can buy a new phone and start talking to it -- and Windows 10 will bring that to your PC.

Image recognition has gone from spotting when there's a face in a photograph to coping with everything from text to traffic signals. The ImageNet benchmark tests how well a system can identify photos across a thousand object categories, including 150 kinds of dog that must be told apart by breed rather than simply recognized as dogs. "You have to distinguish Pembroke Welsh corgis and Cardigan Welsh corgis, one of which has a longer tail," explains John Platt of Microsoft Research.
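
Systems competing on ImageNet are typically scored by top-5 error: an answer counts as correct if the true label appears among the model's five highest-scoring guesses out of the thousand classes, which is why fine distinctions like corgi breeds matter. The toy Python sketch below shows that scoring rule with invented labels and scores; it is not how Microsoft or the benchmark organizers implement it:

```python
# Toy top-5 error calculation in the style of ImageNet scoring.
# Labels and scores are invented; real systems rank 1,000 classes per image.

def top_k_correct(scores: dict, true_label: str, k: int = 5) -> bool:
    """True if the correct label is among the k highest-scoring guesses."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return true_label in ranked[:k]

# Two hypothetical images with scores over a handful of classes.
predictions = [
    ({"Pembroke Welsh corgi": 0.41, "Cardigan Welsh corgi": 0.39,
      "Shetland sheepdog": 0.12, "fox": 0.05, "cat": 0.02, "traffic light": 0.01},
     "Cardigan Welsh corgi"),
    ({"traffic light": 0.55, "street sign": 0.30, "lamp": 0.10,
      "Pembroke Welsh corgi": 0.03, "cat": 0.01, "fox": 0.01},
     "street sign"),
]

errors = sum(not top_k_correct(scores, label) for scores, label in predictions)
print(f"top-5 error: {errors / len(predictions):.0%}")
```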

This month, a team of Microsoft researchers in the Beijing lab announced that their deep learning system was the first to surpass untrained humans on the benchmark (narrowly beating Google to the milestone).
