How did the DARPA robots become so much more advanced in one year?

Last year, all the entries broke down within 7.5 miles; this time, pretty much all of them went farther than that, and four made the entire 132 miles.

Did robotics undergo any major advances in the past year that account for the huge improvement, or was everyone just unprepared for the test last year while the technology stayed pretty much the same between 2004 and 2005?

Could you give us a bit more background, here? What test was this? What were the robots supposed to do, and is it the same task every year?

My outside observer’s guess is not that robotics made leaps (meaning not the mechanics) but that the software improved. Much of what went wrong last year, from what I saw, was vehicles failing to navigate obstacles well enough. I think the teams refined the processes by which the vehicles detect obstacles and decide how to approach them.
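
For what it’s worth, the software loop in question is roughly the classic sense-plan-act cycle. Here’s a minimal sketch in Python (a generic caricature, not any team’s actual code; the sensor values, thresholds, and commands are all made up):

```python
# Toy sense-plan-act loop -- the structure, not any real race software.
import random

def sense():
    """Pretend rangefinder: distance (m) to the nearest obstacle, or None."""
    return random.choice([None, None, 12.0, 4.0])

def plan(obstacle_distance):
    """Turn the latest reading into a driving decision."""
    if obstacle_distance is None:
        return "hold course"
    if obstacle_distance < 5.0:
        return "swerve"              # too close; steer around it
    return "slow down and re-scan"   # far enough to gather more data

def act(command):
    print(f"actuators: {command}")

for _ in range(5):   # a real vehicle runs this loop many times per second
    act(plan(sense()))
```

Refining “the processes” mostly means making sense() less noisy and plan() less brittle.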

Last year, DARPA (the Defense Department’s main research-funding agency) sponsored a contest to see if anyone could build a robot that could travel unguided over about 130 miles of desert terrain, with a prize of a million dollars. All of them failed pretty quickly, going 7.5 miles or less. This year, DARPA upped the prize to $2 million, and several robots made it through the whole course. A brief Reuters overview is here.

Hey, I had the same question! He’s talking about the DARPA Grand Challenge. Last year, the predictions were that everyone would fail, because off-roading is so much more difficult than just navigating. And they did fail. But this year a bunch succeeded for no obvious reason.

One theory is that some of the groups, like MIT, just ran the same vehicle, so I suppose they had a year for improvement, whereas last year they had a year only for the initial design and build.

I have heard several people claim that this year’s course was a lot easier than last year’s.

According to the New York Times:

I think a big part is knowing what to expect. Last year (2004) was the first year it was run. Even though the contestants were told what to expect, I don’t think they were fully prepared until they had actually tried and failed.

I also heard that, so I’m sure it’s part of the explanation.

In other words, they used the failures of the previous race to pinpoint the most likely problems and the areas needing refinement. From an engineer’s perspective, this is the whole point of physical testing: run it until it fails, figure out what you missed when you were first thinking about failure modes, and then engineer that problem away. Repeat ad infinitum, or at least until the probability of failure is sufficiently low that the technology is considered “mature”, i.e. the risk is small and readily predicted. Unfortunately, engineering practice is moving away from physical testing in favor of the sexy analytical tools we now have, which will (supposedly) reduce development time and expensive prototyping. The problem is that the analysis only gives you the results you asked for; if you don’t take an unexpected failure mode into consideration, it’s unlikely the simulation will identify it.
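
To make that last point concrete, here’s a toy Monte Carlo (my own illustration, with invented failure modes and probabilities, not anything from the actual teams): the “analysis” only samples the failure modes the engineer thought to model, so it reports better mileage than a field test exposed to all of them.

```python
import random

# Hypothetical per-mile failure probabilities. The engineer modeled the
# first two modes; "soft sand" never made it into the analysis.
all_modes     = {"gps dropout": 0.002, "sensor glare": 0.003, "soft sand": 0.010}
modeled_modes = {"gps dropout": 0.002, "sensor glare": 0.003}

def miles_survived(modes, course_miles=132):
    """Drive mile by mile; any active failure mode can end the run."""
    for mile in range(course_miles):
        if any(random.random() < p for p in modes.values()):
            return mile
    return course_miles

random.seed(1)
sim   = sum(miles_survived(modeled_modes) for _ in range(1000)) / 1000
field = sum(miles_survived(all_modes)     for _ in range(1000)) / 1000
print(f"analysis predicts ~{sim:.0f} miles; field testing gives ~{field:.0f}")
```

The analysis isn’t wrong about the modes it knows; it’s silent about the one it doesn’t, which is the kind of failure you only find out about in the desert.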

There are also what are called “low-hanging fruit” or “two sigma” problems: ones that can be addressed and resolved with only a moderate amount of effort, for a greatly increased payoff. It isn’t so much a matter of advancement as of refinement; think of sanding down a board as opposed to cutting away at it. We’d expect a second run, after diagnosing the problems and reinforcing against failure, to show a marked improvement, with progressively less improvement over successive iterations.
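
As a back-of-the-envelope illustration of that diminishing-returns curve (the per-mile risk numbers are made up): suppose each test cycle finds and “sands down” the biggest remaining failure risk.

```python
# Made-up per-mile failure risks, worst first; each test cycle
# engineers away the largest one still remaining.
risks = [0.010, 0.005, 0.002, 0.001, 0.0005]

for cycle in range(len(risks) + 1):
    remaining = risks[cycle:]               # modes not yet engineered away
    p_mile = sum(remaining)                 # crude per-mile failure probability
    p_finish = (1 - p_mile) ** 132          # chance of surviving all 132 miles
    print(f"after {cycle} fix cycles: {p_finish:5.1%} chance of finishing")
```

The first couple of fixes move the finish probability enormously; the later ones barely move it at all, which is exactly the sanding-versus-cutting picture.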

Stranger

I read (somewhere) that this year’s contest received a lot more attention and that entrants had access to more $$$$. Combined with what others have said about second attempts being much easier, it’s not too surprising that they pulled it off.

Last year, the entrants did not have sufficient time to prepare their entries. That’s basically all there is to it.

I’ve spoken a bit with the CMU crew, who went the furthest last year, and they basically put it to me like this:

  1. There really wasn’t much time last year to do anything but assemble a barebones system; much of the time was spent prepping the hardware, and the software put on it was rudimentary. They actually only started writing serious code about a month before the code freeze. By the time the race started, they already had significantly better software but didn’t put it on because it wasn’t tested. So, in a way, this year they had 13 times as long to write code as they did last year.

  2. There is a series of problems that have to be solved before a robot can drive the course successfully. No team at the race managed to solve all of them, which meant that the first time a problem popped up that a team couldn’t handle, its robot would die. So it’s not surprising that no robot made it very far; each one hit a different snag. What was really amazing was that every one of the problems had been solved by at least one team, and since all code is shared at the end of the race, every team then knew the solution to every problem (see the sketch after this list).

  3. Never underestimate the power of a torrent of government and industry sponsorship and a vast army of graduate and undergrad students willing to work on the “cool” projects. There have been lots of other problems that languished for over 10 years until they suddenly became the fad of the month and were solved almost overnight.
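
Point 2 is essentially a coverage argument, and a toy simulation makes it vivid. Everything below is invented for illustration (the problem count, team count, and 50% solve rate are made-up numbers, and the code-sharing claim is taken at face value from the post):

```python
import random

random.seed(0)
N_PROBLEMS, N_TEAMS, TRIALS = 10, 20, 1000

finishes_before = finishes_after = 0
for _ in range(TRIALS):
    # Each team independently solves each problem with 50% probability.
    teams = [{p for p in range(N_PROBLEMS) if random.random() < 0.5}
             for _ in range(N_TEAMS)]
    # Before code sharing: a team finishes only if it solved *every* problem.
    finishes_before += sum(len(t) == N_PROBLEMS for t in teams)
    # After sharing: everyone finishes if the union of all teams' solutions
    # covers every problem.
    union = set().union(*teams)
    finishes_after += N_TEAMS * (len(union) == N_PROBLEMS)

print(f"avg finishers per race before sharing: {finishes_before / TRIALS:.2f}")
print(f"avg finishers per race after sharing:  {finishes_after / TRIALS:.2f}")
```

With these numbers, almost nobody finishes before sharing, but the union of twenty partial solutions almost always covers everything, which is the CMU crew’s point.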

The CMU guys said they wouldn’t be surprised if at least half a dozen teams finished the race this year, and this was almost immediately after the post-mortem of last year’s race, so their prediction turned out to be pretty accurate.