Artificial intelligence can go wrong – but how will we know?

Mary Branscombe | Oct. 26, 2015
You needn't worry about our robot overlords just yet, but AI can get you into a world of trouble unless we observe some best practices.

"They've got to the point where it affects human lives, for example by denying someone credit. There are regulations that force credit card companies to explain to the credit applicant why they were denied. So, by law, machine learning has to be explainable to the everyman. The regulation only exists for the financial industry but our prediction is you will see that everywhere, as machine learning inevitably and quickly makes its way into every critical problem of human society."

Explanations are obviously critical in medicine. IBM Watson CTO Rob High points out: "It's very important we be transparent about the rationale of our reasoning. When we provide answers to a question, we provide supporting evidence for a treatment suggestion, and it's very important for the human who receives those answers to be able to challenge the system to reveal why it believed in the treatment choices it suggested."

But he believes it's important to show the original data the system learned from, rather than the specific model it used to make the decision. "The average human being is not well-equipped to understand the nuance of why different algorithms are more or less relevant," he says, "but they can test them quickly by what they produce. We have to explain in a form the person who is an expert in that field will recognise, not show that it's justified by the mathematics in the system."
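
One way to read High's point in practice is to surface precedent cases rather than model internals. The sketch below is an illustrative assumption only, not a description of how Watson works: it uses a scikit-learn nearest-neighbour index to pull the training records most similar to a new case, so an expert can judge a suggestion against data they recognise. Every name and value in it is hypothetical.

```python
# A minimal sketch of showing "the original data the system learned from":
# for a new case, retrieve the most similar training records as supporting
# evidence. Assumes scikit-learn; the records and feature names are
# hypothetical, not anything Watson actually does.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
feature_names = ["age", "blood_pressure", "marker_a", "marker_b"]
training_cases = rng.normal(size=(500, 4))        # stand-in for historical records
training_outcomes = rng.choice(["treatment_x", "treatment_y"], size=500)

index = NearestNeighbors(n_neighbors=3).fit(training_cases)

new_case = rng.normal(size=(1, 4))
_, neighbour_ids = index.kneighbors(new_case)

# Show the expert the precedent cases, not the model internals.
for i in neighbour_ids[0]:
    print(f"Similar prior case {i}: outcome {training_outcomes[i]}, "
          f"features {dict(zip(feature_names, np.round(training_cases[i], 2)))}")
```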

Medical experts often won't accept systems that don't make sense to them. Horvitz found this with a system that advised pathologists on which tests to run. The system could be more efficient if it wasn't constrained to the hierarchies used to categorise disease, but users disliked it until it was changed to work in a more explicable way. "It wouldn't be as powerful, it would ask more questions and do more tests, but the doctor would say 'I get it, I can understand this and it can really explain what it's doing.'"

Self-driving cars will also bring more regulation to AI, says Gray. "Today, a bunch of that [self-driving system] is neural networks and it's not explainable. Eventually, when a car hits somebody and there's an investigation, that issue will come up. The same will be true everywhere that's high value, where it affects people or their businesses; there's going to have to be that kind of explainability."

Explaining yourself

In future, Gray says, machine learning systems may need to show how their data was prepared and why a particular model was chosen. "You'll have to explain the performance of the model and its predictive accuracy in specific situations."
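
Gray's point about accuracy "in specific situations" can be made concrete with a per-segment evaluation. The sketch below is an illustrative assumption only: a scikit-learn classifier on synthetic data with made-up customer segments, reporting overall accuracy alongside accuracy for each segment, which is the kind of breakdown such an explanation would include.

```python
# A minimal sketch of reporting predictive accuracy "in specific situations":
# break the evaluation down by segment instead of quoting one global number.
# Assumes scikit-learn; the model, data and segment names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
segment = np.where(X[:, 4] > 0, "new_customers", "existing_customers")

X_train, X_test, y_train, y_test, _, seg_test = train_test_split(
    X, y, segment, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)

print(f"Overall accuracy: {accuracy_score(y_test, predictions):.2f}")
for name in np.unique(seg_test):
    mask = seg_test == name
    print(f"  {name}: accuracy {accuracy_score(y_test[mask], predictions[mask]):.2f} "
          f"on {mask.sum()} cases")
```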

 
