Changes afoot as DARPA challenge robots prep for upgrade

Sharon Gaudin | July 29, 2014
WPI robotics team works to make its robot more autonomous.

The WPI researchers may dedicate another onboard processor to the robot's vision because it is so computationally intensive.

Researchers at Worcester Polytechnic Institute describe how they're going to make their robot more autonomous for the DARPA robotics challenge.

Many of those decisions will be made this fall. However, the WPI team has already been making big strides in autonomy.

DeDonato explained that the robot's handlers previously had to give it explicit instructions if they needed the machine to walk forward and open a door or grasp a tool, for instance. Getting the robot to pick up a drill meant the handlers would have to tell it exactly how far to extend its arm, turn its wrist and then close its fingers.

Now they've reached an autonomy milestone where the handler simply tells the robot to pick up the drill.
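In rough pseudocode terms, the shift looks something like the sketch below. The class, method names, and motion values are invented for illustration; WPI has not published its actual operator interface.

```python
class Robot:
    """Hypothetical operator interface, not WPI's actual software."""

    def extend_arm(self, distance_m: float) -> None:
        print(f"extending arm {distance_m} m")

    def rotate_wrist(self, degrees: float) -> None:
        print(f"rotating wrist {degrees} degrees")

    def close_fingers(self) -> None:
        print("closing fingers")

    def pick_up(self, object_name: str) -> None:
        # The robot now locates the object with its own sensors and
        # derives these motions itself instead of being told each one.
        self.extend_arm(0.4)
        self.rotate_wrist(90)
        self.close_fingers()
        print(f"grasped {object_name}")


robot = Robot()

# Before: the handler spelled out every motion explicitly.
robot.extend_arm(0.4)
robot.rotate_wrist(90)
robot.close_fingers()

# Now: a single task-level command; the robot fills in the details.
robot.pick_up("drill")
```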

"We're starting to see it come together," said DeDonato. "We're trying to minimize user input. I think it's a big step forward. This is really along with DARPA's vision. We're monitoring, but we're not making those low-level decisions for the robot. We're trying to condense the commands that go to it."

WPI's robot, Warner, also has achieved a higher level of autonomy when walking.

The robot Warner grasps a wood board on its own as WPI team leader Matt DeDonato observes. In the past, the task would have required multiple keyboard commands and mouse clicks. (Photo: Andrew Baron/Worcester Polytechnic Institute)

Instead of telling the robot to take two steps to the right and then eight steps forward, for example, handlers can now tell it to walk across a room, and the machine will navigate around or over obstacles to make its own way.
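The same abstraction applies to walking. As a rough illustration only (the grid world and the breadth-first planner below are assumptions, not WPI's code), the handler supplies a goal and the robot computes its own route around obstacles:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid: 0 = free, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no route found

# Before: the handler dictated each step ("two right, eight forward").
# Now: the handler gives only a goal; the robot routes around obstacles.
room = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # 1s mark an obstacle the robot must skirt
    [0, 0, 0, 0],
]
print(plan_path(room, start=(0, 0), goal=(2, 3)))
```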

DeDonato said the robot itself is now using much more of the data from its own vision. Instead of simply sending what it "sees" to its handlers, the robot can decipher the information and calculate how to use it.

"We're moving the autonomy up one more level," he noted. "Eventually, the goal is to move to the point where the user doesn't have to be there."

That time, though, won't be anytime soon, according to DeDonato.

"We're probably nowhere near it," he said." We're probably 10 to 20 years at least from full autonomy. The problem is the environment is unknown. To go into a room and assess the damage and decide what actions to take is the Holy Grail. It's not out of the realm of possibility, but that's the movie robot. That's still a long ways off. But we're on the path."

 
