Among the teams that built on their race experience were David and Bruce Hall. In 2004, the stereoscopic camera system they used for navigation allowed their converted Toyota pickup truck to travel 10 kilometers and take third place, though they later scrapped the system in favor of a prototype laser imaging system.
Using a bank of lasers on a rotating drum on the roof of the car, the system was able to bounce light off most objects in the vicinity. By measuring the strength and delay of the reflected beams, just as aeronautical radar does, a computer could build up an accurate 3D map of the surroundings.
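The core of that ranging principle is simple enough to sketch in a few lines. The following is an illustrative calculation only, not Velodyne's actual firmware: it assumes the sensor measures the round-trip delay of each laser pulse and converts that delay to a distance using the speed of light.

```python
# Illustrative time-of-flight ranging, as a LIDAR unit might perform it.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_delay(round_trip_seconds: float) -> float:
    """Distance to a reflecting surface, given the pulse's round-trip time."""
    # The pulse travels out to the object and back, so halve the path length.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return delay of about 667 nanoseconds corresponds to roughly 100 meters.
print(round(distance_from_delay(667e-9)))  # → 100
```

Radar works the same way; the difference is that light's much shorter wavelength lets LIDAR resolve far finer detail.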
The LIDAR (light detection and ranging) sensor took the car 40 kilometers before a steering control board failure ended its race. The team came in 11th place out of 23 finalists, and the sensor drew a lot of attention.
"By the third challenge, everyone wanted it," said David Hall in an interview at the headquarters of his company, Velodyne, in Morgan Hill, south of Silicon Valley.
A year later, when Velodyne offered a more compact version of the 64-laser LIDAR unit, it quickly started receiving orders from other DARPA Grand Challenge teams. In 2007, the next year the event was held, five of the six finishing teams were using Velodyne LIDAR, including the first- and second-place cars.
One of those early LIDAR prototypes is today in the Smithsonian's National Museum of American History, and Velodyne has gone on to produce hundreds of LIDAR units for commercial use.
Perhaps the most visible use is atop Google's driverless cars. At any time there are about a dozen of the vehicles on the roads of Northern California. They are mostly modified Lexus RX450H cars, with a few Toyota Prius vehicles, each with one of Velodyne's $80,000 LIDAR sensors.
Google says its main goal is to make driving safer, more enjoyable and more efficient.
"Over 1.2 million people are killed in traffic accidents worldwide every year, and we think self-driving technology can help significantly reduce that number," the company said via email.
But while driverless cars are slowly becoming more common on California roads, they're still at an early stage of development. Nothing demonstrates this better than the amount of preparation required before a self-driving car can hit the streets.
The LIDAR sensor on the roof pulls in thousands of points of data every second to produce an accurate 3D model of the car's surroundings, but that isn't enough for the car to reliably drive itself. Before that can happen, a Google car with a human being at the wheel must first drive the streets, mapping the surroundings.
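To see how thousands of individual laser returns become a 3D model, consider how one return is placed in space. This is a minimal sketch, assuming (hypothetically) that each measurement consists of a range plus the azimuth of the rotating drum and the fixed elevation of the laser that fired; the function name and data layout are illustrative, not Google's or Velodyne's.

```python
import math

def to_cartesian(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one spherical LIDAR return (range, azimuth, elevation) to x, y, z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Project the range onto the horizontal plane, then split into x and y.
    horizontal = range_m * math.cos(el)
    x = horizontal * math.cos(az)
    y = horizontal * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# One revolution of the drum yields a ring of points for each laser;
# here, four returns from a laser angled 5 degrees below horizontal.
ring = [to_cartesian(10.0, deg, -5.0) for deg in range(0, 360, 90)]
```

Accumulating these points over many revolutions, and over many lasers at different elevations, is what produces the dense point cloud the car's software reasons about.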
"By mapping things like lane markers and traffic signs, the software in the car becomes familiar with the environment and its characteristics in advance," Google said. "When we later drive a route without driver assistance, these same cameras, laser sensors and radars help determine where other cars are and how fast they are moving. The software controls acceleration and deceleration, and mounted cameras read and interpret traffic lights and other signs."