
Artificial intelligence can go wrong – but how will we know?

Mary Branscombe | Oct. 26, 2015
You needn't worry about our robot overlords just yet, but AI can get you into a world of trouble unless we observe some best practices.

That might mean compromises between how transparent a model is and how powerful it is. "It's not always the case that the more powerful methods are less transparent, but we do see those trade-offs," says Horvitz. "If you push very hard to get transparency, you will typically weaken the system."

As well as the option of making systems more explainable, it's also possible to use one machine learning system to explain another. That's the basis of a system Horvitz worked on called Ask MSR. "When it generated an answer, it could say 'here's the probability it's correct,'" he says - and it's a trick Watson uses too. "At a metalevel, you're doing machine learning about a complex process you can't see directly to characterize how well it's going to do."
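The meta-level idea is simple to prototype: train a second, more transparent model to predict whether the first model's answers are correct. Here is a minimal sketch in Python with scikit-learn on synthetic data; the features and model choices are illustrative assumptions, not how Ask MSR or Watson are actually built.

```python
# Sketch: a transparent meta-model estimates how likely the base model's
# answer is to be correct. Data, features and model choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# 1. The base system: a stronger model we treat as a black box.
base = GradientBoostingClassifier(random_state=0).fit(X_base, y_base)

# 2. Meta-features describing each base prediction: the top probability
#    and the margin between the two most likely classes.
proba = base.predict_proba(X_meta)
top_two = np.sort(proba, axis=1)[:, -2:]
meta_features = np.column_stack([top_two[:, 1], top_two[:, 1] - top_two[:, 0]])

# 3. Label each prediction correct/incorrect and train a simple meta-model
#    to predict that label -- machine learning about the machine learner.
was_correct = (base.predict(X_meta) == y_meta).astype(int)
meta = LogisticRegression().fit(meta_features, was_correct)

def answer_with_confidence(x):
    """Return the base model's answer plus an estimated probability it's right."""
    p = np.sort(base.predict_proba(x.reshape(1, -1))[0])[-2:]
    confidence = meta.predict_proba([[p[1], p[1] - p[0]]])[0, 1]
    return base.predict(x.reshape(1, -1))[0], confidence
```

The meta-model here is deliberately simple: it doesn't make the base system any stronger, it just tells you when to trust it, which is the trade-off Horvitz describes.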

Ryan Caplan, CEO of ColdLight, which builds AI-based predictive analytics, suggests systems may ask how much they will need to explain before they give you an answer. "Put the human being in control by asking 'do you need to legally explain the model or do you need the best result?' Sometimes it's more important to have accuracy over explainability. If I'm setting the temperature in different areas of an airport, maybe I don't need to explain how I decide. But in many industries, like finance, where a human has to be able to explain a decision, that system may have to be curtailed to certain algorithms."
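Caplan's question can be as literal as a switch in the code: let the caller say up front whether explainability is a hard requirement, and pick the model class accordingly. A hedged sketch, with the specific models chosen purely for illustration:

```python
# Sketch: the caller decides up front whether explainability is a hard
# requirement; the specific model choices here are illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

def build_model(X, y, must_explain):
    if must_explain:
        # A shallow tree: every decision can be read back as a rule,
        # usually at some cost in raw accuracy.
        model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(model))  # human-readable decision rules
    else:
        # A larger ensemble: typically more accurate, much harder to explain.
        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    return model
```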

Accessibility not fragility

Hector Yee, who worked on AI projects at Google before moving to AirBnB, insists that "machine learning should involve humans in the loop somewhere." When he started work on AirBnB's predictive systems he asked colleagues if they wanted a simple model they could understand or a stronger model they wouldn't. "We made the trade-off early on to go with human-interpretable models," he says, because it makes dealing with bugs and outliers in the data far easier.

"Even the most perfect neural net doesn't know what it doesn't know. We have a feedback loop between humans and machine learning; we can look at what the machine has done and what we need to do to add features that improve the model. We know what data we have available. We can make an informed decision what to do next. When you do that, suddenly your weaker model becomes stronger."

Patrice Simard of Microsoft Research is convinced that applies beyond today's PhD-level machine learning experts. His goal is "to democratise machine learning and make it so easy to use my mother could build a classifier with no prior knowledge of machine learning."

Given the limited number of machine learning experts, he says the best way to improve machine learning systems is to make them easier to develop. "You can build a super smart system that understands everything, or you can break it down into a lot of smaller tasks, and if each of these tasks can be done in an hour by a person of normal expertise, we can talk about scaling the number of contributors instead of making one particular algorithm smarter."
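That decomposition can be sketched as a set of small, independently trained classifiers stitched together rather than one monolithic model; the sub-tasks and data below are invented purely for illustration.

```python
# Sketch: a larger problem split into small sub-tasks, each handled by a
# simple classifier that a non-expert could build and check quickly.
# The sub-tasks and data here are invented for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def build_subtask_model(seed):
    # Each contributor trains and validates one small, narrow model.
    X, y = make_classification(n_samples=500, n_features=10, random_state=seed)
    return LogisticRegression(max_iter=1000).fit(X, y)

subtasks = ["is_complaint", "mentions_billing", "needs_escalation"]
models = {name: build_subtask_model(i) for i, name in enumerate(subtasks)}

def classify(request_features):
    # The overall answer is assembled from the small models' outputs.
    x = request_features.reshape(1, -1)
    return {name: int(m.predict(x)[0]) for name, m in models.items()}

print(classify(np.random.RandomState(1).randn(10)))
```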

 
