Twenty years after Deep Blue, what can AI do for us?

Peter Sayer | May 12, 2017
IBM built Deep Blue to win at chess -- but it has since taken a collaborative, rather than competitive, approach to artificial intelligence.

IDGNS: Is one advantage of this augmented intelligence system, where it's ultimately the physician making the decision, that it makes it clear for legal purposes where the responsibility lies?

MC: Sometimes the problems are not life or death. If you decide to install a system that recommends a movie or a book to somebody, if you make a mistake it's not the end of the world, whereas some kinds of decisions are really important. We have to have, for many decades to come, I would suggest, humans having the final word on those decisions. But the more informed they are about reasonable alternatives and the advantages and disadvantages of those alternatives, I think the better off everybody will be.

IDGNS: You found the bug in Deep Blue, but the latest generation of AIs seems a lot more inscrutable and harder to audit than Deep Blue: You can't look back over the search tree of moves they have considered and figure out whether they're giving us the right answer, particularly to these real-world problems you were talking about.

MC: It's perhaps one of the most critical problems in AI today. We've seen some of the successes based on deep learning, large neural networks that are trained on problems, and they're incredibly useful, but they are large black boxes. They don't explain themselves in any useful way at the moment.

There are research projects trying to change that, but, for example, even Deep Blue, which was not based on a neural network but on a large search through billions of possibilities, had really no useful way of describing exactly why it made the moves that it did. If you were given a Deep Blue recommendation, you would have to do a lot of work to figure out why it had been recommended. And I think that's true for modern AI systems too. In the group that I work with, one of the key research problems is the interpretability of AI: allowing systems to explain themselves, so that the augmented intelligence systems I talked about can be more effective, because the system can explain its reasoning to the human decision-maker.

IDGNS: What approaches are you taking to that?

MC: There are approaches that use machine learning to help machine learning describe itself. You have one system that makes decisions or gives you a prediction and then, maybe at the cost of a lot of work, for each of those decisions you figure out what the reasoning is, a human-understandable reason why that's a good decision or why that decision was made. Then you can build a system that, given a bunch of examples of decisions and explanations, can learn to come up with explanations that are useful. That's one approach.
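
To make that approach concrete, here is a minimal, purely illustrative sketch in Python -- not IBM's system; the data, models, and explanation labels are all hypothetical stand-ins. A black-box classifier makes decisions on synthetic data, a costly annotation step (faked here with a simple rule) attaches a human-readable explanation to each decision, and a second model is trained on those decision-and-explanation pairs so it can propose explanations for new decisions.

    # Illustrative sketch of "learning to explain": a second model is trained on
    # (decision, explanation) pairs so it can generate explanations for new
    # decisions. Everything here is a simplified stand-in for the real setting.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Synthetic "cases": two features, with a binary decision made by a black box.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    black_box.fit(X, y)
    decisions = black_box.predict(X)

    # Costly step (here faked with a rule): attach a human-understandable
    # explanation label to each decision, e.g. which feature drove it.
    explanations = np.where(np.abs(X[:, 0]) > np.abs(0.5 * X[:, 1]),
                            "driven mainly by feature 0",
                            "driven mainly by feature 1")

    # The explainer learns to map (case, decision) -> explanation, so new
    # decisions can be explained without redoing the costly annotation step.
    explainer = DecisionTreeClassifier(max_depth=3, random_state=0)
    explainer.fit(np.column_stack([X, decisions]), explanations)

    new_case = np.array([[1.2, -0.3]])
    new_decision = black_box.predict(new_case)
    print(explainer.predict(np.column_stack([new_case, new_decision]))[0])

In practice the explanations would come from domain experts and the explainer would be a far richer model, but the division of labor is the same: one system decides, another learns to justify those decisions in human terms.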
