
It’s not technology, but humans that may not be ready for self-driving cars

Lucas Mearian | July 7, 2016
Tesla Autopilot 2.0 is expected later this year.

The news last week that the owner of a Tesla Model S was killed when the car crashed into a tractor-trailer while its Autopilot feature was engaged raises an obvious question: Is self-driving technology safe?

The accident, however, raises an equally important question: Are people prepared to use semi-autonomous driving technology responsibly?

The accident, which took place May 7 in Williston, Fla., is the first known fatal crash involving a vehicle using autonomous technology based on computer software, sensors, cameras and radar. The Florida Highway Patrol identified the driver who was killed as Joshua Brown, 40, of Canton, Ohio.

The National Highway Traffic Safety Administration (NHTSA) has opened an investigation into the accident.

Tesla Motors, which can retrieve driving log data from its cars, stated in a blog post that the Model S's Autopilot sensors failed to detect the white semi-truck as it turned in front of the sedan against a bright sky.

One driver posted a video on YouTube showing his use of Autopilot on rural roads without his hands on the steering wheel.

Separately, on Wednesday the NHTSA announced that it is investigating a second accident, which occurred on July 1 and involved a Tesla Model X that rolled onto its roof on the Pennsylvania Turnpike. The owner claimed he had just activated Autopilot when the vehicle steered into a guardrail and overturned.

In a statement, Tesla said: “Based on information we have now, we have no reason to believe that Autopilot had anything to do with this accident.”

If the problems that robots encounter in extreme environments are any indication, then fully autonomous, self-driving cars are a bad idea, according to David Mindell, author of Our Robots, Ourselves: Robotics and the Myths of Autonomy.

Mindell, a professor in MIT's Department of Aeronautics and Astronautics, pointed to the Apollo space program, which landed U.S. astronauts on the moon six times. The moon missions were originally planned to be fully autonomous, with astronauts as passive passengers. After pushback from the astronauts, the crews ended up handling many critical functions, including the lunar landings.

Pointing to a concept developed by MIT mechanical engineering professor Tom Sheridan, Mindell said in an interview with MIT's Technology Review that the level of automation in any given project can be rated on a scale from one to 10, but that a higher level of automation doesn't necessarily lead to greater success.

"The digital computer in Apollo allowed them to make a less automated spacecraft that was closer to the perfect 5," Mindell told the Review. "The sophistication of the computer and the software was used not to push people out, but to give them true control over the landing."
