In the current Scientific American is an article about one of the entries in the DARPA Grand Challenge race for autonomous vehicles.
Apparently, one of the problems is buggy programming. This automated Hummer flipped during a test because someone had made a mistake programming how to take curves (it went too fast).
I thought, well, it seems like my brain has many programs going on simultaneously, and sometimes I’m chugging along doing something and suddenly realize something is wrong, so I stop and check it out. Often it turns out I was in the midst of making a mistake.
So then I thought, hey, computers can multitask, too. Why couldn’t the robot car have one or more other programs running at the same time as the main one? The others would be like “back-seat drivers”, independently checking things like angular momentum, tilt, whatever, against safe ranges. Then, if something was going out of the safe range, the nagrivator [sic] program could yell “hey, slow down, you lunkhead!”
They could even be running on different CPUs if necessary.
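Something like this, maybe; I’m making up the sensor names and limits, but it gets the idea across:

    import threading
    import time

    # Hypothetical shared state that the main driving program updates each cycle.
    sensor_state = {"tilt_deg": 0.0, "yaw_rate_dps": 0.0}
    SAFE_TILT_DEG = 15.0        # made-up safe limits, purely for illustration
    SAFE_YAW_RATE_DPS = 45.0

    def backseat_driver(period_s=0.01):
        """Independent watcher: just checks a couple of values against safe ranges."""
        while True:
            if abs(sensor_state["tilt_deg"]) > SAFE_TILT_DEG:
                print("hey, slow down, you lunkhead! (tilt)")
            if abs(sensor_state["yaw_rate_dps"]) > SAFE_YAW_RATE_DPS:
                print("hey, slow down, you lunkhead! (yaw rate)")
            time.sleep(period_s)

    # Run it alongside the main driving loop; on real hardware it could just as
    # easily be a separate process on its own CPU.
    threading.Thread(target=backseat_driver, daemon=True).start()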
(Actually, don’t the Mars rovers do that sort of thing? Or does the driving program check itself?)
Anyway, is that sort of thing being done in the computer world? If so, any idea why it wouldn’t be done for an auto-auto?
Well, not a bad idea but you run into the classic problem – a man with one watch knows what time it is; a man with two watches is never sure. So what if your monitoring program is the one with the bug?
There is some redundancy used in the space program. The Space Shuttle’s flight control system consists of five computers. Four of them run the exact same software, and their outputs are continuously compared. If one gives a different output, the other three can vote the faulty computer out of the loop. This guards against hardware failure. A fifth computer runs different software, designed to do exactly the same task but written by a different group. If the pilot decides the primary system (i.e. the first 4 computers) has failed, he can push a big red button to switch control over to the fifth computer.
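The voting itself is conceptually simple. Here’s a toy sketch in Python (the commands and computer IDs are made up, and the real system compares outputs continuously and far more carefully than this):

    from collections import Counter

    def vote(outputs):
        """Compare outputs from redundant computers; flag any dissenter.

        outputs: dict mapping computer id -> its computed command.
        Returns (agreed_command, list_of_faulty_ids).  Toy illustration only.
        """
        tally = Counter(outputs.values())
        agreed = tally.most_common(1)[0][0]
        faulty = [cid for cid, out in outputs.items() if out != agreed]
        return agreed, faulty

    # Example: computer 3 disagrees, so the other three vote it out of the loop.
    agreed, faulty = vote({1: "pitch +2.0", 2: "pitch +2.0",
                           3: "pitch +9.9", 4: "pitch +2.0"})
    print(agreed, "-- voted out:", faulty)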
But this does not provide automatic redundancy against software bugs, and for good reason. If you give the backup computer the ability to override the primary computer system, it may do so by mistake. If you design it so that the two must always agree, then everything stops whenever either one makes a mistake. Either way, instead of making the system safer, you have created more ways for it to fail.
By the way, the Shuttle computers are a special case. For something like an unmanned scientific satellite, there is usually just one computer for each task. There are still some safeguards - for example, a “watchdog” circuit detects when the computer has frozen and reboots it. And the software can usually be updated from the ground to fix problems.
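A software version of that watchdog pattern might look roughly like this; the timeout, the “flight software cycle” and the “reboot” here are just stand-ins for illustration:

    import threading, time

    class Watchdog:
        """Toy software watchdog: the flight code must kick() it regularly;
        a separate supervisor resets the computer if it goes quiet."""

        def __init__(self, timeout_s=0.5):
            self.timeout_s = timeout_s
            self.last_kick = time.monotonic()

        def kick(self):
            self.last_kick = time.monotonic()

        def expired(self):
            return time.monotonic() - self.last_kick > self.timeout_s

    wd = Watchdog()

    def supervisor():
        # Stand-in for the independent watchdog circuit.
        while True:
            if wd.expired():
                print("watchdog expired -- rebooting the computer")
                wd.kick()   # pretend the reboot happened
            time.sleep(0.1)

    threading.Thread(target=supervisor, daemon=True).start()

    for cycle in range(5):
        time.sleep(0.1)     # stand-in for one cycle of the flight software
        wd.kick()           # proof of life
    time.sleep(1.0)         # "freeze": stop kicking, and the supervisor notices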
Actually, the cars I teach on do something like that. One of the nodes on the car’s network monitors the car’s yaw rate. The steering wheel input is also monitored. In the event of my entering a corner too fast, and the car not responding to my steering inputs, the computer will, as needed, retard the throttle and apply the brakes at individual wheels to make the car go where I have pointed it with the steering wheel. These decision/response cycles are about 7 ms long, much faster than you or I can think.
In addition, another node monitors suspension height and suspension response, recalibrating the shock absorbers 500 times per second.
If we move this into our SUV, a roll rate sensor is added as well; it prevents a rollover by applying and releasing the outside front brake for just a few milliseconds to keep the SUV from getting tippy.
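I can’t share real calibrations, but one of those ~7 ms decision/response cycles boils down to something like this rough Python sketch; every threshold and gain here is invented for illustration:

    def stability_cycle(yaw_rate_meas, yaw_rate_intended, roll_rate):
        """One ~7 ms decision/response cycle, roughly as described above.
        All thresholds here are invented, not real calibrations."""
        actions = []
        yaw_error = yaw_rate_intended - yaw_rate_meas   # car not going where it's pointed

        if abs(yaw_error) > 0.05:                       # rad/s, made-up threshold
            actions.append("retard throttle")
            # Brake an individual wheel to pull the car back toward the driver's
            # intended path; which corner depends on the sign of the error.
            wheel = "inside rear" if yaw_error > 0 else "outside front"
            actions.append("apply brake: " + wheel)

        if abs(roll_rate) > 0.3:                        # rad/s, made-up threshold
            # SUV case: pulse the outside front brake for a few ms
            # to keep the truck from getting tippy.
            actions.append("pulse outside front brake")

        return actions

    # Example cycle: the driver asks for more yaw than the car is delivering.
    print(stability_cycle(yaw_rate_meas=0.10, yaw_rate_intended=0.25, roll_rate=0.05))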
I think the problem with the DARPA cars is not one of technology, but one of time and money. The systems I described above cost a bunch to design and have years of testing behind them. The guys building the DARPA cars are spending all of their time and effort trying to make the car independent; they don’t have the time or money for the bells and whizzy stuff.
This amazes me. I thought this issue was resolved ages ago. You want to go right, you pull the right rein; you want to go left, you pull the left one; you want to stop, pull them both; and when you want to go, you use both to give a smart crack to your steed’s rear. Curves? Feh! Keep him at a stately trot and you’ll be fine. Doesn’t take 4GL AI to figure that out. …What?
A lot of it is inadequate modeling. It’s normal to try to reduce a vehicle going around a curve to a simple acceleration problem when it’s really much more complex. So the car knows my yaw rate compared to the angle of my wheels. When humans drive they use much more input. Driving by the seat of your pants is more than an expression. A person can sense the texture of the road and how the tires are gripping it from touch and sound. That’s why people can often outperform antilock brakes. We can certainly make sensors to detect all that, but I don’t think anyone yet has a way to integrate all that non-linear information and figure out where the steering wheel and throttle should be. Easier to teach a monkey to drive on a skidpad than to teach a computer.
Anyway, to address the issue of “the back-seat driver could be buggy, too” – in my mind, the back-seat driver programs are tres simple. Much less chance of their being buggy. And when they say “stop” it might just introduce a timeout, so that after a few seconds the main program starts up again. Or maybe it could force a “slow down” command or something appropriate to the condition it detected, and then the main program takes over again.
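Something like this sketch, say (the hold time and the command strings are made up):

    import time

    class BackseatDriver:
        """When a check trips, force a conservative command for a few seconds,
        then let the main program take over again.  Timing and command strings
        are made up for illustration."""

        def __init__(self, hold_s=3.0):
            self.hold_s = hold_s
            self.override_until = 0.0

        def trip(self, reason):
            print("back-seat driver:", reason)
            self.override_until = time.monotonic() + self.hold_s

        def command(self, main_program_command):
            # During the timeout, substitute a conservative command; after it
            # expires, the main program simply resumes.
            if time.monotonic() < self.override_until:
                return "slow down"
            return main_program_command

    nag = BackseatDriver()
    nag.trip("tilt out of safe range")
    print(nag.command("take curve at 45 mph"))   # -> "slow down" for a few seconds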
(This reminds me of the story about the guy who allegedly put his new motor home on cruise control and went in the back to make lunch.)
The Skytrain system here in Vancouver has been running under computer control since 1986, and has never had an accident with the trains (although it sometimes panics and needs to be manually restarted). In fairness, I should point out that it’s fairly simple: trains don’t go fast enough to tip over, all they have to worry about is the train ahead and where the stations are, and any obstacle on the track just stops the approaching train until the staff tell it to proceed. It’s still some very successful programming.
According to a member of the control room staff I talked to long ago, each train has its own process on their bank of servers that talks to the processes running the trains ahead and behind. There are also independent processes watching for obstacles and panic situations, ready to shut down one or more trains if necessary. A few years ago they lost communication with a section of track, and each train stopped when it lost sight of the preceding one. It had to wait until the lost one got out of the ‘blind’ section before the control process would let it go.
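That “lose sight of the train ahead, so stop” rule is about as simple as safety logic gets. A toy sketch in Python, with the gap and positions invented:

    def train_command(my_position, ahead_position, min_gap, comms_ok):
        """Per-train logic, roughly as described: keep a safe gap behind the
        train ahead, and stop if you can't see it at all.  Numbers invented."""
        if not comms_ok or ahead_position is None:
            return "stop"                          # lost sight of the train ahead
        if ahead_position - my_position < min_gap:
            return "stop"                          # too close, wait for it to clear
        return "proceed"

    # Example: communication lost with the section ahead -> every train stops.
    print(train_command(my_position=1200.0, ahead_position=None,
                        min_gap=300.0, comms_ok=False))
    print(train_command(my_position=1200.0, ahead_position=1800.0,
                        min_gap=300.0, comms_ok=True))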
The car problem is a lot harder, unless you’re ready to retrofit all the highways with guide lines for the auto-autos, and the manufacturers are willing to set the software up with performance specs for the vehicle. Then it’s even worse, because the software can’t tell about loading, tire wear, the pavement surface (did it just rain?), and all sorts of other stuff. So you’d have to handle Padeye’s ‘seat of the pants’ driving, which is very complex and subtle. It would be easy to slip up somewhere, and hard to prove either way. Liability suits, anyone?