As technology advances, we poor humans are getting desperate for sources of self-esteem. Everyone knows computers can play chess and Jeopardy! better than we can. They sort thousands of documents for relevance in legal cases faster, cheaper and better than lawyers do. They assemble electronic products in factories faster, cheaper and better than people do. They can drive cars better than human drivers can.
So we grasp at evidence of our continued superiority over the machine. Recent articles, for example, show that computers are still pretty poor at humor, and they make some obvious blunders as psychotherapists. Yet any comfort we derive from these facts is unfounded, because it overlooks a crucial reality: The technology is getting roughly twice as powerful every two years, while we humans are not.
Ignoring that reality leads us astray as we confront one of the center-stage issues of our time: How will humans create value and earn a rising standard of living when technology keeps doing more work better than we do? Specifically, we seek an answer in the wrong way by asking the wrong next question: What is it that technology inherently cannot do? While it seems like common sense that the skills computers can't acquire will be valuable, the lesson of history is that it's dangerous to claim there are any skills that computers cannot eventually acquire.
The trail of embarrassing predictions goes way back. Early researchers in computer translation of languages were highly pessimistic that the field could ever progress beyond its nearly useless state as of the mid-1960s; now Google translates the written word for free, and Skype translates spoken language in real time, also for free. Hubert Dreyfus of MIT, in a 1972 book called What Computers Can't Do, saw little hope that computers could make significant further progress in playing chess beyond the mediocre level then achieved; but a computer beat the world champion, Garry Kasparov, in 1997. Two excellent economists explained in 2004 how driving a vehicle requires such complex split-second judgments based on such a wide range of inputs that it would be extremely difficult for a computer ever to handle the job; yet Google introduced its autonomous car six years later. A technology skeptic at Harvard observed in 2007 that "assessing the layout of the world and guiding a body through it are staggeringly complex engineering tasks, as we see by the absence of ... vacuum cleaners that can climb stairs." Yet iRobot soon thereafter was making vacuum cleaners and floor scrubbers that could find their way around the house without harming furniture, pets or children, and was also making other robots that could climb stairs. It could obviously make robots that did both if it judged that market demand was sufficient.