Can autonomous killer robots be stopped?

George Nott | Aug. 28, 2017
Advances in AI could make lethal weapon technology devastatingly effective.

Although many of the best minds currently working in AI and robotics have signed the open letter, far from all of them have. Unsurprisingly, the companies involved in military technology don’t often publish their breakthroughs in peer-reviewed journals, giving the scientific community little or no oversight.


But some kind of agreement is surely better than nothing?

The signatories certainly believe so.

“The key is to establish circumstances under which their use might be permitted and to develop practical legal frameworks that allocate responsibility for infringements,” says Finn.

If a ban is not possible, it is better to have a legal framework in place sooner rather than later.

"In the past, technology has often advanced much faster than legal and cultural frameworks, leading to technology-driven situations such as mutually assured destruction during the Cold War, and the proliferation of land mines,” says James Harland, Associate Professor in Computational Logic at RMIT.

“I have seen first-hand the appalling legacy of land mines in countries such as Vietnam [where RMIT has two campuses], where hundreds of people are killed or maimed each year from mines planted over 40 years ago. I think we have a chance here to establish this kind of legal framework in advance of the technology for a change, and thus allow society to control technology rather than the other way around.”
