
Are we safe from self-aware robots?

Evan Schuman | Aug. 14, 2015
A breakthrough in A.I. has been reported that suddenly makes all of those apocalyptic predictions about killer robots seem less crazy.

I suppose that was intended to be comforting to its human readers, suggesting that consciousness will always keep humans one big step beyond computers. But another way to look at it is that these systems will eventually have the ability to think any thoughts humans can, but without our moral compass. So the machines, confronted by a starving population and an agricultural system that is maxed out, might conclude that a sharp population reduction is the solution -- and that the nuclear power plants within their control offer a way to achieve it.

You can forget Isaac Asimov's Three Laws of Robotics. The United Nations has already attempted to set rules for battlefield robots that can decide on their own when it's a good idea to kill people.

There is a subtle line that shouldn't be crossed with artificial intelligence. Making Siri smarter so that she understands questions better and delivers more germane answers is welcome. But what about letting her decide to delete apps that are never used or add some that your history suggests you'd like? What if she sees from your calendar that you're on a critical deadline this afternoon and decides to prevent you from launching distracting games when you should be working?

Engineers are not the best at setting limits. They are much better suited -- both in temperament and in intellectual curiosity -- to seeing how far they can push limits. That's admirable, except when the results move from C-3PO to HAL 9000 to Star Trek: TNG's Lore. When superior engineering truly engineers something superior -- superior to the engineers -- can disasters imagined in science fiction become science fact?
