
Overcoming our fears and avoiding robot overlords

Sharon Gaudin | Sept. 21, 2015
Computer scientists say fears shouldn't stunt research, but that discussion should begin now on how to make advances in artificial intelligence safer

As useful as this sounds, some will wonder how humans will stay in control of such intelligent and potentially powerful machines. How will humans maintain authority and stay safe?

"The fear is that we will lose control of A.I. systems," said Tom Dietterich, a professor and director of Intelligent Systems at Oregon State University. "What if they have a bug and go around causing damage to the economy or people, and they have no off switch? We need to be able to maintain control over these systems. We need to build mathematical theories to ensure we can maintain control and stay on the safe side of the boundaries."

Can an A.I. system be so tightly controlled that its good behavior can be guaranteed? Probably not.

One thing being worked on now is how to verify, validate or offer some sort of safety guarantee for A.I. software, Dietterich said.

Researchers need to focus on how to fend off cyberattacks on A.I. systems, and how to set up alerts to warn the network - both human and digital - when an attack is being launched, he said.
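
A minimal sketch of the kind of alerting he describes, assuming a hypothetical service where a sudden traffic spike is the attack signal; the threshold and both notification channels are invented for illustration.

```python
import time
from collections import deque

class AttackAlarm:
    """Tracks request timestamps for an A.I. service and raises an
    alert on both a human channel and a digital one when traffic
    spikes past a threshold. All specifics here are hypothetical."""

    def __init__(self, max_per_minute=600):
        self.window = deque()                 # timestamps from the last 60s
        self.max_per_minute = max_per_minute

    def record_request(self):
        now = time.time()
        self.window.append(now)
        while self.window and now - self.window[0] > 60:
            self.window.popleft()             # drop requests older than 60s
        if len(self.window) > self.max_per_minute:
            self.alert(f"{len(self.window)} requests in the last minute")

    def alert(self, detail):
        print(f"ALERT -> on-call human: {detail}")   # warn the human network
        print(f"ALERT -> peer systems: {detail}")    # warn the digital network
```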

Dietterich also warned against building A.I. systems that are fully autonomous. Humans don't want to be in a position where machines are fully in control.

Darrell echoed that, saying researchers need to build redundant systems that ultimately leave humans in control.

"Systems of people and machines will still have to oversee what's happening," Darrell said. "Just as you want to protect from a rogue set of hackers being able to suddenly take over every car in the world and drive them into a ditch, you want to have barriers [for A.I. systems] in place. You don't want one single point of failure. You need checks and balances."

USC's Gil added that figuring out how to deal with increasingly intelligent systems will require more than the engineers and programmers who develop them. Lawyers will need to get involved as well.

"When you start to have machines that can make decisions and are using complex, intelligent capabilities, we have to think about accountability and a legal framework for that," she said. "We don't have anything like that right now... We are technologists. We are not legal scholars. Those are two sides that we need to work on and explore."

Since artificial intelligence is a technology that magnifies the good and the bad, there will be a lot to prepare for, Dietterich said, and it will take a lot of different minds to stay ahead of the technology's growth.

"Smart software is still software," he said. "It will contain bugs and it will have cyberattacks. When we build software using A.I. techniques, we have additional challenges. How can we make imperfect autonomous systems safe?"

 
