The hidden risk of blind trust in AI’s ‘black box’

Clint Boulton | July 7, 2017
Companies intent on weaving AI more tightly into the fabric of their businesses are seeking to better explain how algorithms make their decisions, especially where risk and regulations are involved.

Scott Blandford, Chief Digital Officer, TIAA. Credit: TIAA

However, MIT Sloan professor Erik Brynjolfsson, Andrew McAfee's co-author on the new book Machine, Platform, Crowd: Harnessing Our Digital Future, acknowledges that it is harder for humans and machines to work together when the machine can't explain how it arrived at its conclusions. "There’s still lots of areas where we want to have that kind of leverage," Brynjolfsson says. "But in a machine that's making billions of connections it’s very hard to say this particular weighted sum drove the decision."
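Brynjolfsson's point holds even at toy scale. The sketch below is a minimal, hypothetical illustration, not drawn from any system the article describes: the network's shape, weights, and input are invented for the example. It pushes one input through a small feed-forward network; because each layer's output is a weighted sum of the previous layer's weighted sums, the final decision score mixes every weight with every other one, and no single weighted sum can be isolated as the reason for the decision.

    import numpy as np

    # A toy three-layer feed-forward network. All weights here are
    # illustrative random values; production models have millions
    # to billions of them.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
    W2 = rng.normal(size=(8, 8))   # layer 2: 8 hidden -> 8 hidden
    W3 = rng.normal(size=(8, 1))   # layer 3: 8 hidden -> 1 decision score

    x = np.array([0.2, -1.3, 0.7, 0.05])  # one hypothetical input

    # Each layer is a weighted sum passed through a nonlinearity,
    # so the final score entangles all the weights together.
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    score = (h2 @ W3).item()

    print(f"decision score: {score:.3f}")
    # Even this tiny net has 4*8 + 8*8 + 8*1 = 104 interacting
    # weights; there is no single "weighted sum that drove the
    # decision" to point to.
    print("number of weights:", W1.size + W2.size + W3.size)

Scaled up from 104 weights to the billions in a production model, that entanglement is exactly what makes the black box hard to open.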

Other IT executives, speaking on the sidelines of the MIT event, expressed caution about implementing AI technologies, though they acknowledged AI's importance to their businesses.

Scott Blandford, chief digital officer of TIAA, says companies have to worry about AI's black box problem because if "you're making decisions that impact people's lives you'd better make sure that everything is 100 percent." He says that while TIAA could use AI to enhance an analytics system it has built to monitor how its digital business operates, he isn't ready to travel that road without further testing and validation. "We're sticking with things for now that are provable," Blandford says.

Jim Fowler, CIO, General Electric. Credit: General Electric

General Electric CIO Jim Fowler says explaining AI depends largely on the context in which the technology is being used. For example, self-learning software that helps process accounts receivable more efficiently may not require explanation, but GE would need to fully understand how a better algorithm for firing a jet engine works before implementing it.

"You have to have the context of the purpose of how AI is being used and that's going to set how much you care about how to explain it and understand it," Fowler says. "There's a lot of processes that have to be testable, that you've got to be able to show a regulator and to show yourself that you've tested it and proven it. If you've got an algorithm that is constantly changing on something that is related to life and limb it is going to be harder to just trust the black box."

