
AI gets its groove back

Lamont Wood | April 15, 2014
After decades of start-and-stop, artificial intelligence is being advanced by major computing firms from Facebook and Google to IBM.

"There was a burst of enthusiasm in the late 1950s and early 1960s that fizzled due to a lack of computing power." -- Michael Covington, consultant

Today, thanks to the availability of vast amounts of online data and inexpensive computational power, especially in the cloud, "we are not hitting the wall anymore," Hammond says. "AI has reached an inflection point. We now see it emerging from a substrate of research, data analytics and machine learning, all enabled by our ability to deal with large masses of data."

Going forward, "The idea that AI is going to stall again is probably dead," says Luke Muehlhauser, executive director of the Machine Intelligence Research Institute (MIRI) in Berkeley, Calif. "AI is now ubiquitous, a tool we use every time we ask Siri a question or use a GPS device for driving directions."

Deep learning

Beyond today's big data and massive computational resources, sources cite a third factor pushing AI past an inflection point: improved algorithms, especially the widespread adoption of a decade-old algorithm called "deep learning." Yann LeCun, director of Facebook's AI Group, describes it as a way to more fully automate machine learning by using multiple layers of analysis that can compare their results with other layers.

He explains that, previously, anyone designing a machine-learning system had to hand-craft software to identify sought-after features in the data, and hand-craft more software to classify those features, before any data could be fed in. With deep learning, both of these manual steps are replaced with trainable machine-learning systems.

"The entire system from end to end is now multiple layers that are all trainable," LeCun says.

(LeCun attributes the development of deep learning to a team led by Geoff Hinton, a professor at the University of Toronto who now works part-time for Google; LeCun was, in fact, part of Hinton's deep learning development team. Hinton did not respond to interview requests.)

Even so, "deep learning can only take us so far," counters Gary Marcus, a professor at New York University. "Despite its name it's rather superficial -- it can pick up statistical tendencies and is particularly good for categorization problems, but it's not good at natural language understanding. There needs to be other advances as well so that machines can really understand what we are talking about."


He hopes the field will revisit ideas abandoned in the 1960s; with modern computing power, they might now produce results, such as a machine that learns language as well as a four-year-old child.

 
