
Future of Life: Skype co-founder Jaan Tallinn claims AI development is different this time

Sam Shead | April 28, 2015
Elon Musk and Stephen Hawking have warned that uncontrolled development of AI could have serious consequences for humanity.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls," reads the letter. "Our AI systems must do what we want them to do."

Musk, the cofounder of SpaceX and Tesla and a member of the FLI's scientific advisory board alongside actor Morgan Freeman and world-renowned Cambridge University professor Stephen Hawking, has previously said that uncontrolled development of AI could be "potentially more dangerous than nukes".

Experts at some of the world's biggest tech corporations - including IBM's Watson supercomputer team, Google, Microsoft Research and Amazon - also signed the letter.

Other signatories include the entrepreneurs behind artificial intelligence companies that Tallinn has backed, such as DeepMind and Vicarious.

Musk has pledged $10 million (£6.6 million) to help fund research into the development of AI through the FLI.

Tallinn said there has been a "good response" from the academic community, and the FLI is now "sifting through" the research proposals and deciding which ones to fund.

Many technology companies are racing ahead with their own AI research in a bid to cash in on the technology's potential, but Tallinn doesn't think any rules or regulations should be introduced just yet.

"I think it's too early to think about very concrete monitoring mechanisms," said Tallinn. "I think it's more important right now to build concensus in the industry and academia around what are the things that would have a chilling effect."

