
AI vs machine learning vs deep learning: What is deep learning?

Tom Macaulay | July 26, 2017
Artificial intelligence is going mainstream, and deep learning is one of its most exciting subsets.

Deep learning can be used to predict earthquakes or steer self-driving cars. It can colourise black and white videos, translate text with a phone camera, mimic human voices, compose music, write computer code, and beat humans at board games, as Google DeepMind famously did last year against the South Korean 'Go' champion Lee Sedol.

It also has countless potential applications for businesses, from security systems to sentiment analysis to optimising manufacturing. Deep learning is particularly proficient at understanding images and audio, and could automate many common professional tasks such as analysing x-rays or scanning legal documents.


History of deep learning

"Deep learning is not a new idea," says Sean Owen, the director of data science at software company Cloudera. "It's the rebirth of another idea that people have finally gotten to work well."

The origins of deep learning go back to the 1950s and an early attempt to mimic the interconnectivity of neurons in biological brains, known as the "perceptron". The machine learning algorithm was developed by American psychologist Frank Rosenblatt in 1957 with funding from the United States Office of Naval Research.

His invention was dramatically described by the New York Times as "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
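For readers curious what such an algorithm actually looks like, below is a minimal sketch of a perceptron learning rule in modern Python. The training data (the logical AND function), parameter values and names are illustrative choices for this article, not details of Rosenblatt's original hardware implementation.

import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    """Learn weights for a single artificial 'neuron' that outputs 0 or 1."""
    w = np.zeros(X.shape[1])  # one weight per input feature
    b = 0.0                   # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - prediction   # +1, 0 or -1
            w += lr * error * xi          # nudge weights toward the target
            b += lr * error
    return w, b

# Example: learn the logical AND function (linearly separable, so it converges).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]

A single perceptron can only draw a straight dividing line through its data, one reason the approach stalled until multi-layer networks and more efficient training methods arrived.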

The complexities of the technology meant it soon fell out of favour, but it re-emerged in 1986 with the publication of a paper entitled "Learning representations by back-propagating errors", which offered a more efficient way for neural networks to learn.

In the nineties, the spotlight shifted to a new class of machine learning algorithm called the "support vector machine", which delivered high performance and was comparatively straightforward to use.

Only in the last decade have researchers truly learned to leverage the vast computation available in the cloud that is required to make deep learning work at scale.

In 2011, deep learning pioneer Andrew Ng founded Google Brain. The Stanford University professor had already helped develop autonomous helicopters and multi-purpose household robots, but it was Google's mammoth neural networks research project that made him an icon of AI.

His creation earned a New York Times headline when a cluster of 16,000 computer processors simulating the human brain scanned 10 million images taken from YouTube videos, learning to recognise the cats within them and independently discovering what it was that made something a 'cat'.

The neural networks developed at Google Brain were used again later that year, albeit to far less fanfare, in the speech recognition software used in Android phones.

Google Brain brought mainstream attention to deep learning and proved that the human brain could provide a model for machine learning at a time when many engineers favoured simple automation masquerading as intelligence.

