The hidden risk of blind trust in AI’s ‘black box’

Clint Boulton | July 7, 2017
Companies intent on weaving AI more tightly into the fabric of their businesses are seeking to better explain how algorithms make their decisions, especially where risk and regulations are involved.

Bruce Lee, senior vice president and head of operations and technology, Fannie Mae. Credit: Fannie Mae

It's a question that has mortgage companies such as Fannie Mae searching for AI whose decisions they can explain more precisely to regulators, says Bruce Lee, the company's head of operations and technology. It might be logical to infer that a homeowner who manages their electricity bills using products such as the Nest thermostat might have more free cash flow to repay their mortgage. But enabling an AI to incorporate such a qualification is problematic in the eyes of regulators, Lee says.

"Could you start offering people with Nest better mortgage rates before you start getting into fair lending issues about how you’re biasing the sample set?" Lee says. "AI in things like credit decisions, which might seem like an obvious area, is actually fraught with a lot of regulatory hurdles to clear. So a lot of what we do has to be thoroughly back-tested to make sure that we’re not introducing bias that’s inappropriate and that it is a net benefit to the housing infrastructure. The AI has to be particularly explainable."

"Explainable AI," as the phrase suggests, is AI whose decision-making, conclusions and predictions can be qualified in a reasonable way. Lee points to software from ThoughtSpot that details broad explanations, from the kind of charting and analysis used to represent data down to how specific words in queries may inform results. Such capabilities are a requirement for AI in financial services, Lee says. "People need to explain it in the same way that people need to explain how you train people to avoid racial bias in decision-making and how to avoid other bias in human systems," Lee says.


Much ado about explainable AI

Academics are torn over the need for explainable AI. MIT principal research scientist Andrew McAfee, who recently co-authored a book on how machine learning systems are driving the new wave of automation, isn't among those calling for it.

McAfee, speaking on an AI panel at the MIT Sloan CIO Symposium in May, answered a question about the inability to qualify AI's conclusions this way: "A lot of people are freaking out about that, but I push back on that because human beings will very quickly offer you an explanation for why they made a decision or prediction that they did, and that explanation is usually wrong," McAfee said. "We do not have access to our own knowledge, so I’m a little bit less worried about the black box of the computation than other people." He said that putting up regulatory roadblocks to explain AI could "retard progress" in the market economy.

