Of course, even an autonomous robot gets its commands from somewhere. It can operate only because it has been given commands, known as programming. But if the robot's operating system goes all Windows on us ("Windows" being an easily understood euphemism for "FUBAR"), can the programmer who inadvertently (we hope) keyed in the error-ridden code be charged with murder or a war crime? Can that programmer be charged with disobeying an order, in that his or her code did not do what the programmer was told to make it do? Can that programmer's supervisor, who was supposed to check all the code, be so charged, too?
There is some comfort in the fact that the UN truly is approaching this entire idea as a debate. It has invited two robotics experts, one of whom is extremely skeptical about autonomous killer robots (in fact, he's co-founder of the Campaign Against Killer Robots and chairman of the International Committee for Robot Arms Control, and I am not making any of this up), while the other is more receptive to the idea but doesn't seem to be a backer with Dr. Strangelove intensity. My own position is more extreme than that held by either of these gentlemen. To someone who makes a living writing about IT efforts that deliver unexpected and unintended results, the idea of a truly autonomous killer robot is just plain terrifying.