
The AI ecosystem

Kris Hammond | May 12, 2015
"As soon as it works, no one calls it AI anymore." - John McCarthy


Interest in and press around artificial intelligence (AI) come and go, but the reality is that we have had AI systems with us for quite some time. Because many of these systems are narrowly focused (and actually work), they are often not thought of as being AI.

For example, when Netflix or Amazon suggests movies or books for you, they are actually doing something quite human. They look at what you have liked in the past (evidenced by what you have viewed or purchased), find people who have similar profiles, and then suggest things those people liked that you haven't seen yet. This, combined with knowing what you last viewed and the things that are similar to it, enables them to make recommendations for you. It is not unlike what you might do when you have two friends with a lot in common and use the likes and dislikes of one of them to figure out a gift for the other.
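That "people like you liked this" logic is, at its core, user-based collaborative filtering. Here is a minimal sketch in Python; the user names, the sets of liked titles, and the similarity measure (Jaccard overlap) are hypothetical stand-ins for illustration, not how Netflix or Amazon actually implement it.

```python
def jaccard(a, b):
    """Overlap between two sets of liked items, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(target, likes, top_k=2):
    """Suggest items liked by the users most similar to `target`."""
    neighbors = sorted(
        ((jaccard(likes[target], likes[u]), u) for u in likes if u != target),
        reverse=True,
    )
    suggestions = set()
    for sim, user in neighbors[:top_k]:
        if sim > 0:                      # ignore users with nothing in common
            suggestions |= likes[user] - likes[target]  # only unseen items
    return suggestions

likes = {
    "alice": {"The Matrix", "Blade Runner", "Alien"},
    "bob":   {"The Matrix", "Blade Runner", "Her"},
    "carol": {"Notting Hill", "Amelie"},
}
print(recommend("alice", likes))  # {'Her'} - bob's profile is most like alice's
```

Real systems work at vastly larger scale and with richer signals, but the shape of the reasoning is the same: measure similarity between profiles, then borrow the preferences of the nearest neighbors.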

Whether these recommendations are good or bad is not the point. They are aimed at mirroring the very human ability to build profiles, figure out similarities, and then predict one person's likes and dislikes based on those of someone similar. But because they are narrowly focused, we tend to forget that what they are doing requires intelligence, and that occasionally they may do it better than we do ourselves.

If we want to better understand where AI is today and the systems that are in use now, it is useful to look at the different components of AI and the human reasoning that it seeks to emulate.

So what do we do that makes us smart?  

Sensing, reasoning & communicating
Generally, we can break intelligence or cognition into three main categories: sensing, reasoning and communicating. Within these macro areas, we can make more fine-grained distinctions related to speech and image recognition, different flavors of reasoning (e.g., logic versus evidence-based), and the generation of language to facilitate communication. In other words, cognition breaks down to taking stuff in, thinking about it and then telling someone what you have concluded.

Research in AI tends to pursue each of these aspects of human reasoning separately. However, most of the deployed systems that we encounter, particularly the consumer-oriented products, make use of all three of these layers.
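As a rough illustration of that layering, here is a toy sketch of the sense-reason-communicate pipeline. Every function is a hypothetical placeholder invented for this example; in a real assistant each stub is replaced by substantial machinery.

```python
def sense(waveform):
    """Sensing: turn raw input (audio, pixels) into symbols (words)."""
    return "what is the weather tomorrow"   # stand-in for real recognition

def reason(words):
    """Reasoning: decide what the words mean and what to do about them."""
    if "weather" in words:
        return {"intent": "forecast", "when": "tomorrow"}
    return {"intent": "unknown"}

def communicate(conclusion):
    """Communicating: turn the conclusion back into language."""
    if conclusion["intent"] == "forecast":
        return f"Here is the forecast for {conclusion['when']}."
    return "Sorry, I didn't understand that."

print(communicate(reason(sense(b"...raw audio bytes..."))))
```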

Sensing
For example, the mobile assistants that we see today - Siri, Cortana and Google Now - all make use of these three layers. The first step is speech recognition: they capture your voice and use the resulting waveform to identify the words you have spoken to the system. Each of these systems uses its own version of voice recognition, with Apple making use of a product built by Nuance and both Microsoft and Google rolling out their own. It is important to understand that this does not mean they comprehend what those words mean at this point. They simply have access to the words you have said, in the same way they would if you had typed them into your phone.
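As a concrete, hedged illustration of this sensing layer alone, the sketch below uses the open-source SpeechRecognition package for Python as a stand-in for the proprietary engines these assistants actually run; the audio file name is a placeholder. The point is that the output is just a string of words, exactly as if you had typed them.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("utterance.wav") as source:   # placeholder recording
    audio = recognizer.record(source)           # capture the waveform

try:
    words = recognizer.recognize_google(audio)  # waveform -> word string
    # We now have only text; deciding what the words *mean* is a
    # separate, downstream reasoning layer.
    print(words)
except sr.UnknownValueError:
    print("Could not match the waveform to any words.")
except sr.RequestError:
    print("Recognition service was unreachable.")
```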

 
