Reality check: The state of AI, bots, and smart assistants

Galen Gruman | June 12, 2017
We’ve made a lot of progress in artificial intelligence over the last half century, but we’re nowhere near what the tech enthusiasts would have you believe

Don’t expect bots (automated software assistants that act on your behalf based on the data they’ve monitored) to be useful for anything but the simplest tasks until simpler problem domains, such as autocorrection, are solved. The underlying problems are, in fact, the same kind.

 

Pattern identification is on the rise as machine learning

Pattern matching, even with rich context, is not enough, because the patterns must be predefined. That’s where pattern identification comes in: the software detects new or changed patterns by monitoring your activities.
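To make the distinction concrete, here is a minimal sketch of pattern identification, assuming hypothetical activity logs: rather than matching rules written in advance, the code watches event frequencies over time and flags event types that are new, have vanished, or have shifted sharply. All names, data, and thresholds are illustrative, not any vendor’s actual method.

```python
# Hypothetical sketch: flag new or changed activity patterns by comparing
# event frequencies in a recent window against a baseline window.
from collections import Counter

def changed_patterns(baseline_events, recent_events, threshold=2.0):
    """Return event types whose frequency shifted sharply vs. the baseline."""
    base = Counter(baseline_events)
    recent = Counter(recent_events)
    base_total = max(len(baseline_events), 1)
    recent_total = max(len(recent_events), 1)
    flagged = []
    for event in set(base) | set(recent):
        base_rate = base[event] / base_total
        recent_rate = recent[event] / recent_total
        if base_rate == 0 or recent_rate == 0:
            flagged.append(event)   # brand-new pattern, or one that vanished
        elif (recent_rate / base_rate >= threshold
              or base_rate / recent_rate >= threshold):
            flagged.append(event)   # frequency shifted by the threshold factor
    return sorted(flagged)

baseline = ["email", "email", "calendar", "email", "calendar", "browse"]
recent = ["email", "video_call", "video_call", "video_call", "calendar", "email"]
print(changed_patterns(baseline, recent))  # → ['browse', 'calendar', 'video_call']
```

A real system would have to pick the windows, thresholds, and event taxonomy itself, which is exactly the hard part the next paragraph describes.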

That’s not easy, because someone has to define the parameters for the rules that undergird such systems. It’s easy either to boil the ocean and end up with an undifferentiated mess, or to go too narrow and end up with something useless in the real world.

This identification effort is a big part of what machine learning is today, whether it’s to get you to click more ads or buy more products, better diagnose failures in photocopiers and aircraft engines, reroute delivery trucks based on weather and traffic, or respond to dangers while driving (the collision-avoidance technology soon to be standard in U.S. cars).

Because machine learning is so hard—especially outside highly defined, engineered domains—you should expect slow progress, where systems get better but you don’t notice it for a while.

Voice recognition is a great example: the first systems (phone-based help lines) were horrible, but now Siri, Google Now, Alexa, and Cortana are pretty good for many people and many phrases. They’re still error-prone, struggling with complex phrasing, niche domains, and many accents and pronunciation patterns, but they’re usable in enough contexts to be helpful. Some people can even use them as if they were a human transcriber.

But the messier the context, the harder it is for machines to learn, because their models are incomplete or are too warped by the world in which they function. Self-driving cars are a good example: a car may learn to drive based on patterns and signals from the road and other cars, but outside forces like weather, pedestrian and cyclist behaviors, double-parked cars, construction adjustments, and so on will confound much of that learning, and they are hard to pick up, given their idiosyncrasies and variability. Is it possible to overcome all that? Yes: the crash-avoidance technology coming into wider use is clearly a step toward the self-driving future. But it won’t happen at the pace the blogosphere seems to think.

 

Predictive analytics follows machine learning

For many years, IT has been sold the concept of predictive analytics, which has appeared under other guises such as operational business intelligence. It’s a great concept, but it requires pattern matching, machine learning, and insight. Insight is what lets people make the mental leap into a new area.
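The pattern-matching and machine-learning pieces of that trio are mechanical enough to sketch. As a toy illustration (of prediction only, not insight), this fits a least-squares trend line to past observations and projects one step ahead, using only the standard library; the failure counts and names are made up for the example.

```python
# Hypothetical sketch of the "predictive" part of predictive analytics:
# fit a linear trend to historical values and extrapolate the next period.
def forecast_next(values):
    """Least-squares linear fit of index -> value, projected one step out."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept  # value predicted for the next index

monthly_failures = [4, 5, 7, 8, 10, 11]  # e.g. photocopier failures per month
print(forecast_next(monthly_failures))  # → 12.6
```

The insight part, deciding which quantity is worth forecasting and what to do about the answer, is the piece no trend line supplies.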

 
