

The hidden risk of blind trust in AI’s ‘black box’

Clint Boulton | July 7, 2017
Companies intent on weaving AI more tightly into the fabric of their businesses are seeking to better explain how algorithms make their decisions, especially where risk and regulations are involved.

Artificial intelligence is gaining traction in enterprises, with many large organizations exploring algorithms to automate business processes or building bots to field customer inquiries. But while some CIOs see self-learning software as a boon for achieving greater efficiencies, others are leery about entrusting too much of their operations to AI because it remains difficult to ascertain how the algorithms arrive at their conclusions.

CIOs in regulated industries in particular, such as financial services and any sector exploring autonomous vehicles, are grappling with this so-called "black box" problem. If a self-driving rig suddenly swerves off the road during testing, the engineers had darn well better figure out how and why. Similarly, financial services firms looking to use AI to vet clients for credit risk need to proceed with caution to avoid introducing biases into their qualification scoring. Because of these and similar risks, companies are increasingly seeking ways to vet, or even explain, the predictions rendered by their AI tools.

Most software developed today that automates business processes is codified with programmable logic. If it works as intended, it does what its programmers told it to do. But in this second wave of automation, software capable of teaching itself is king. Without a clear understanding of how this software detects patterns and arrives at outcomes, companies with risk and regulations on the line are left to wonder how far they can trust the machines.
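The contrast can be sketched in a few lines of code. The example below is hypothetical (the loan-screening scenario, thresholds and toy data are invented for illustration): the first function is ordinary programmable logic whose behavior can be read straight from the source, while the second fits a scikit-learn model whose "rule" lives in learned coefficients rather than explicit if/else statements.

```python
# Hypothetical illustration: hand-coded rule vs. self-learning model.
from sklearn.linear_model import LogisticRegression

# 1) Programmable logic: the decision rule is explicit in the source code.
def approve_by_rule(income, debt):
    # A human wrote this threshold; auditing it means reading the code.
    return income > 50_000 and debt / income < 0.4

# 2) Self-learning software: the rule is inferred from historical examples.
#    (Toy data invented for illustration -- two features: income, debt.)
X = [[30_000, 20_000], [80_000, 10_000], [60_000, 40_000], [95_000, 15_000]]
y = [0, 1, 0, 1]  # 0 = declined, 1 = approved in the historical record

model = LogisticRegression().fit(X, y)

# The decision boundary now sits in learned numbers, not readable logic.
print(model.coef_, model.intercept_)
print(model.predict([[70_000, 25_000]]))
```

Auditing the first function is a code review; auditing the second requires explaining why those particular coefficients emerged from the training data, which is the heart of the black box problem.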


Big data fueling AI, and its challenges

AI spans a wide range of cognitive technologies that enable situational reasoning, planning and learning, aping the natural intelligence that humans and other animal species possess. It has long lived in labs as a tantalizing possibility, but the growth in computing power, the increasing sophistication of algorithms and AI models, and the billions of gigabytes of data spewing daily from connected devices have unleashed a Cambrian explosion in self-directing technologies. Self-driving cars can navigate tricky terrain, bots can mimic human speech, and businesses have stepped up their investments. Corporate adoption of cognitive systems and AI will drive worldwide revenues from nearly $8 billion in 2016 to more than $47 billion in 2020, according to IDC.

There's no question the technologies and their aptitude to learn are growing rapidly, but so is their complexity. At the heart of machine learning and deep learning, the two subsets of AI that most businesses employ, are neural networks: interconnected nodes modeled after the network of neurons in the human brain. As these technologies grow more powerful, the sheer volume of connections firing within the neural networks of self-learning systems becomes nearly impossible to track, let alone parse. That raises the question: Can we trust an algorithm to tell us whether a would-be homeowner can repay a 30-year mortgage without running afoul of fair lending rules?
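A rough, back-of-the-envelope sketch shows how quickly those connections pile up. The layer sizes below are arbitrary assumptions, not drawn from any particular production system; the point is that even a small fully connected network carries thousands of learned parameters, and every prediction passes through all of them at once.

```python
# Rough illustration: counting connections in a small fully connected
# neural network. Layer sizes are arbitrary assumptions for this sketch.
layer_sizes = [20, 64, 64, 1]  # inputs -> two hidden layers -> one output

weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
biases = sum(layer_sizes[1:])

print(f"learned parameters: {weights + biases}")
# -> learned parameters: 5569
# Each prediction flows through every one of these parameters, so
# inspecting any one weight says little about why a given applicant
# was approved or declined.
```

Production deep learning models run to millions or billions of such parameters, which is why explaining an individual credit decision in human-readable terms is so difficult.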


