

Artificial intelligence can go wrong – but how will we know?

Mary Branscombe | Oct. 26, 2015
You needn't worry about our robot overlords just yet, but AI can get you into a world of trouble unless we observe some best practices.

Every time we hear that "artificial intelligence" was behind something we thought was uniquely human - creating images, inventing recipes, writing a description of a photo - someone starts worrying about the dangers of AI either making humans redundant or deciding to do away with us altogether. But the real danger isn't a true artificial intelligence that threatens humanity - because despite all our advances, it isn't likely we'll create one.

What we need to worry about is creating badly designed AI and relying on it without question, so we end up trusting "smart" computer systems we don't understand, and haven't built to be accountable or even to explain themselves.

Self-taught expert systems

Most of the smart systems you read about use machine learning. It's just one area of artificial intelligence - but it's what you hear about most, because it's where we're making a lot of progress. That's thanks to an Internet full of information with metadata; services like Mechanical Turk where you can cheaply employ people to add more metadata and check your results; hardware that's really good at dealing with lots of chunks of data at high speed (your graphics card); cloud computing and storage; and a lot of smart people who've noticed there is money to be made taking their research out of the university and into the marketplace.

Machine learning is ideal for finding patterns and using them to recognize, categorize or predict things. It's already powering shopping recommendations, financial fraud analysis, predictive analytics, voice recognition and machine translation, weather forecasting, and at least parts of dozens of other services you already use.
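To make that "learn a pattern, then predict" loop concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library. The transaction figures and the fraud scenario are made up for illustration; no real fraud system is this simple.

```python
# Minimal sketch: learn a pattern from labelled examples, then predict.
# Toy, made-up data - a stand-in for, say, flagging suspicious transactions.
from sklearn.linear_model import LogisticRegression

# Each row: [amount_in_dollars, hours_since_last_purchase]
transactions = [
    [12.0, 2.0], [25.0, 5.0], [8.0, 1.0],       # ordinary purchases
    [950.0, 0.1], [1200.0, 0.05], [800.0, 0.2]  # flagged as fraud
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fraudulent

model = LogisticRegression()
model.fit(transactions, labels)        # "learning" = fitting the pattern

# Predict for a new transaction the system has never seen.
print(model.predict([[1000.0, 0.1]]))  # -> [1], i.e. looks like fraud
```

The same shape - examples in, labels in, predictions out - sits behind recommendations, translation and the rest; only the data and the model architecture change.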

Outside the lab, machine learning systems don't teach themselves; there are human designers telling them what to learn. And despite the impressive results from research projects, machine learning is still just one piece of how computer systems are put together. But it's far more of a black box than most algorithms, even to developers - especially when you're using convolutional neural networks and other "deep learning" systems.

"Deep learning produces rich, multi-layered representations that their developers may not clearly understand," says Microsoft Distinguished Scientist Eric Horvitz, who is sponsoring a 100-year study at Stanford of how AI will influence people and society, looking at why we aren't already getting more benefits from AI, as well as concerns AI may be difficult to control.

The power of deep learning produces "inscrutable" systems that can't explain why they made a decision, either to the user working with the system or to someone auditing the decision later. It's also hard to know how to improve them. "Backing up from a poor result to 'what's causing the problem, where do I put my effort, where do I make my system better, what really failed, how do I do blame assignments,' is not a trivial problem," Horvitz explains; one of his many projects at Microsoft Research is looking at exactly this.
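One crude workaround practitioners reach for - not drawn from Horvitz's work, just a generic illustration - is to perturb a trained model's inputs and watch how its output shifts, which gives a rough after-the-fact hint of what drove a decision without actually opening the black box. A hypothetical Python sketch, reusing the made-up transaction data from above:

```python
# Crude sensitivity probe: nudge each input feature and see how much the
# model's predicted probability moves. This hints at what mattered, but it
# is not a real explanation of the model's internal reasoning.
from sklearn.neural_network import MLPClassifier

X = [[12.0, 2.0], [25.0, 5.0], [8.0, 1.0],
     [950.0, 0.1], [1200.0, 0.05], [800.0, 0.2]]
y = [0, 0, 0, 1, 1, 1]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)  # the learned weights are not human-readable

suspect = [1000.0, 0.1]
base = model.predict_proba([suspect])[0][1]  # probability of "fraud"

for i, name in enumerate(["amount", "hours_since_last_purchase"]):
    nudged = list(suspect)
    nudged[i] *= 1.10  # perturb one feature by 10%
    delta = model.predict_proba([nudged])[0][1] - base
    print(f"{name}: probability shift {delta:+.3f}")
```

Probes like this answer "which input mattered for this one prediction", not "where is my system failing and how do I fix it" - which is exactly the harder blame-assignment problem Horvitz describes.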

 

