"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls," reads the letter. "Our AI systems must do what we want them to do."
Musk, the cofounder of SpaceX and Tesla and a member of the FLI's scientific advisory board alongside actor Morgan Freeman and world-renowned Cambridge University professor Stephen Hawking, has previously said that uncontrolled development of AI could be "potentially more dangerous than nukes".
Experts at some of the world's biggest tech corporations - including IBM's Watson supercomputer team, Google, Microsoft Research and Amazon - also signed the letter.
Other signatories include the entrepreneurs behind artificial intelligence companies that Tallinn has backed, such as DeepMind and Vicarious.
Musk has pledged $10 million (£6.6 million) to help fund research into the development of AI through the FLI.
Tallinn said there has been a "good response" from the academic community, and the FLI is now "sifting through" the research proposals and deciding which ones to fund.
Many technology companies are racing ahead with their own AI research in a bid to cash in on the technology's potential, but Tallinn doesn't think any rules or regulations should be introduced just yet.
"I think it's too early to think about very concrete monitoring mechanisms," said Tallinn. "I think it's more important right now to build concensus in the industry and academia around what are the things that would have a chilling effect."