"I am not worried, not only because we are probably decades away from a superior level of machine intelligence, but also because I believe we can control it when we get there. Using the nuclear technology analogy, the fact that we now have the physical ability to destroy the entire world in minutes does not mean that it will just happen. Humans can and do put the safeguards in place to prevent that."
Some scientists do have concerns about artificial intelligence advancing beyond human control, but they estimate that such technology is 50 to 100 years away. That, they argue, leaves plenty of time to prepare for any threatening advances in AI.
"I actually do think this is a valid concern and it's really an interesting one," said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, in a previous interview. "It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."
Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley, said he sees some future danger from artificial intelligence. That is why he is making efforts now, organizing talks and workshops, to educate scientists about the issue.
Now is the time to start thinking about the issue, before scientists are capable of building the machines, he said.