AI researchers say Elon Musk's fears 'not completely crazy'

Sharon Gaudin | Oct. 30, 2014
Artificial intelligence researchers have their own worries about intelligent systems.

Musk told a CNN.com reporter that he made the investment "to keep an eye" on AI researchers.

For Sonia Chernova, director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute, it's important to delineate between different levels of artificial intelligence.

"There is a concern with certain systems, but it's important to understand that the average person doesn't understand how prevalent AI is," Chernova said.

She noted that AI is used in email to filter out spam, that Google uses it in its Maps service, and that apps making movie and restaurant recommendations use it as well.

"There's really no risk there," Chernova said. "I think [Musk's] comments were very broad and I really don't agree there. His definition of AI is a little more than what we really have working. AI has been around since the 1950s. We're now getting to the point where we can do image processing pretty well, but we're so far away from making anything that can reason."

She said researchers might be as much as 100 years from building an intelligent system.

Other researchers disagree on how far they might be from creating a self-aware, intelligent machine. At the earliest, it might be 20 years away, or 50, or even 100 years away.

The one point they agree on is that it's not happening tomorrow.

However, that doesn't mean we shouldn't be thinking about how to handle the creation of sentient systems now, said Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology.

Scientists today, Abu-Mostafa said, need to focus on creating systems that humans will always be able to control.

"Having a machine that is evil and takes over... that cannot possibly happen without us allowing it," said Abu-Mostafa. "There are safeguards... If you go through the scenario of a machine that wants to take over or destroy the world, it's a nice science-fiction scenario, as long as we don't allow a system to control itself."

He added that some concern about AI is justified.

"Take nuclear research. Clearly it's very dangerous and can lead to great harm but the danger is in the use of the results not in the research itself," Abu-Mostafa said. "You can't say nuclear research is bad so you shouldn't do it. The idea is to do the research and understand the facts and then have controls in place so the research is not abused. If we don't do the research, others will do the research."

The nuclear research program offers another lesson, according to Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley.
