
Artificial intelligence can go wrong – but how will we know?

Mary Branscombe | Oct. 26, 2015
You needn't worry about our robot overlords just yet, but AI can get you into a world of trouble unless we observe some best practices.

In some ways, this is nothing new. "Since the start of the industrial revolution, automated systems have been built where there is an embedded, hard-to-understand reason things are being done," Horvitz says. "There have always been embedded utility functions, embedded design decisions that have tradeoffs."

With AI, these can be more explicit. "We can have modules that represent utility functions, so there's a statement that someone has made a tradeoff about how fast a car should go or when it should slow down or when it should warn you with an alert. Here is my design decision: You can review it and question it." He envisages self-driving cars warning you about those trade-offs, or letting you change them - as long as you accept liability.
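To make the idea concrete, here is a minimal, purely illustrative sketch of a tradeoff expressed as an explicit, reviewable module; the parameter names, weights and liability check are assumptions for illustration, not any vendor's actual system.

```python
from dataclasses import dataclass

@dataclass
class SpeedTradeoff:
    """An explicit, human-readable design decision: how the car trades
    travel time against predicted collision risk. The weights are stated
    in one place so they can be reviewed, questioned, or overridden."""
    max_speed_kmh: float = 110.0   # designer's default ceiling
    risk_weight: float = 5.0       # how heavily predicted risk is penalised
    time_weight: float = 1.0       # how heavily slower travel is penalised

    def utility(self, speed_kmh: float, predicted_risk: float) -> float:
        # Higher is better: reward speed, penalise risk.
        return self.time_weight * speed_kmh - self.risk_weight * predicted_risk

    def explain(self) -> str:
        return (f"Speed is capped at {self.max_speed_kmh} km/h; each unit of "
                f"predicted risk outweighs "
                f"{self.risk_weight / self.time_weight:.1f} km/h of speed.")

def override(t: SpeedTradeoff, new_risk_weight: float,
             driver_accepts_liability: bool) -> SpeedTradeoff:
    """Let the driver change the tradeoff only after explicitly accepting liability."""
    if not driver_accepts_liability:
        raise PermissionError("Tradeoff can only be changed if the driver accepts liability.")
    return SpeedTradeoff(t.max_speed_kmh, new_risk_weight, t.time_weight)

print(SpeedTradeoff().explain())
```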

Building systems that are easier to understand, or that can explain themselves, is going to be key to reaping the benefits of AI.

Discrimination and regulation

It's naïve to expect machines to automatically make more equitable decisions. The decision-making algorithms are designed by humans, and bias can be built in. When a dating site's algorithm matches men only with women who are shorter than them, it perpetuates opinions and expectations about relationships. With machine learning and big data, you can end up automatically repeating the historical bias in the data you're learning from.
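A toy sketch of that failure mode, using made-up historical hiring records in which one group was systematically favoured: a model fitted to that history simply learns to repeat the pattern.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (gender, hired). The imbalance is
# deliberately baked in to stand for past human bias; the numbers are invented.
history = [("male", True)] * 70 + [("male", False)] * 30 \
        + [("female", True)] * 30 + [("female", False)] * 70

# "Training": estimate P(hired | gender) from the historical outcomes.
counts, hires = defaultdict(int), defaultdict(int)
for gender, hired in history:
    counts[gender] += 1
    hires[gender] += hired

def recommend(gender: str) -> bool:
    # The model recommends whoever the past favoured -- bias in, bias out.
    return hires[gender] / counts[gender] >= 0.5

print(recommend("male"), recommend("female"))  # True False
```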

When a CMU study found that ad-targeting algorithms showed ads for high-paying jobs to men more often than to women, the cause might have been economics rather than assumptions: if more ad buyers target women, car companies or beauty brands could outbid recruiters for their attention. But unless the system can explain why, it looks like discrimination.
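A simplified auction sketch of that economic explanation, with invented bid values rather than the CMU study's data: if retail advertisers bid more for women's attention, a recruiter's identically-priced job ad wins impressions mainly among men, and delivery skews male even though the recruiter never targeted by gender.

```python
# Simplified single-slot ad auction: the highest bid wins the impression.
# All bid values below are invented for illustration only.
bids_for_women = {"recruiter_job_ad": 0.50, "beauty_brand": 0.90, "car_brand": 0.70}
bids_for_men   = {"recruiter_job_ad": 0.50, "beauty_brand": 0.20, "car_brand": 0.45}

def winner(bids: dict) -> str:
    return max(bids, key=bids.get)

print("Ad shown to women:", winner(bids_for_women))  # beauty_brand
print("Ad shown to men:  ", winner(bids_for_men))    # recruiter_job_ad
# The job ad reaches men far more often, yet the recruiter bid the same
# amount for both audiences -- the skew comes from competing bids, not targeting.
```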

The ACLU has already raised questions about whether online ad tracking breaks the rules of the Equal Credit Opportunity Act and the Fair Housing Act. And Horvitz points out that machine learning could sidestep the privacy protections for medical information in the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act, which prevent it being used in decisions about employment, credit or housing, because it can make "category-jumping inferences about medical conditions from nonmedical data."

It's even more of an issue in Europe, he says. "One thread of EU law is that when it comes to automated decisions and automation regarding people, people need to be able to understand decisions and algorithms need to explain themselves. Algorithms need to be transparent." There are currently exemptions for purely automatic processing, but the forthcoming EU data privacy regulation might require businesses to disclose the logic used for that processing.

The finance industry has already had to start dealing with these issues, says Alex Gray, CTO of machine learning service SkyTree, because it's been using machine learning for years, especially for credit cards and insurance.

 
