Russell, whose research focuses on robotics and artificial intelligence, said that, like researchers in other fields, AI researchers have to take risk into account: there may not be much risk today, but there likely will be some day.
"The underlying point [Musk] is making is something that dozens of people have made since the 1960s," Russell said. "If you build machines that are more intelligent than people, you might not be able to control them. Sci-fi says they might develop some evil intent or they might develop a consciousness. I don't see that being an issue, but there are things we don't have a good handle on."
For instance, Russell noted that as machines become more intelligent and more capable, they also need to understand human values so that, when acting on humans' behalf, they don't harm people.
The Berkeley scientist wants to make sure that AI researchers consider this as they move forward. He's communicating with students about it, organizing workshops and giving talks.
"We have to start thinking about the problem now," Russell said. "When you think about nuclear fusion research, the first thing you think of is containment. You need to get energy out without creating a hydrogen bomb. The same would be true for AI. If we don't know how to control AI... it would be like making a hydrogen bomb. They would be much more dangerous than they are useful."
To create artificial intelligence safely, Russell said researchers need to begin having the necessary discussions now.
"If we can't do it safely, then we shouldn't do it," he said. "We can do it safely, yes. These are technical, mathematical problems and they can be solved but right now we don't have that solution."