I was watching this YouTube video about driverless cars. Obviously, this was a while ago and the technology was just emerging at the time. That said, the Google cars have racked up eleventy bazillion miles without an accident.
What specific devices are they using to avoid, say, deer? I’m thinking of situations where I successfully avoided disaster but wish I’d had a GoPro running at the time.
Mods, the link is to a Nova program. If that is verboten, go ahead and kill the link, but please don’t close the thread.
They use both state-of-the-art image recognition from video cameras and a laser ranging system to constantly update a picture of the car’s environment. In some senses the scenario you suggest is simpler than many of the tasks it has to accomplish, because the car only needs to know there is some object crossing its path to be able to take avoiding action. It does not need to interpret what that object might be.
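To illustrate that "it only needs to know something is in its path" point, here is a toy sketch in Python. Everything in it (the names, the lane width, the detection format) is invented for illustration, not how Google’s actual stack works:

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    bearing_deg: float  # angle of the object relative to the car's heading
    range_m: float      # distance reported by the laser rangefinder

def object_in_path(detections, lane_half_width_m=1.8, horizon_m=30.0):
    """True if any detection lies inside the car's projected path.
    No attempt is made to figure out *what* the object is."""
    for d in detections:
        ahead = d.range_m * math.cos(math.radians(d.bearing_deg))
        lateral = d.range_m * math.sin(math.radians(d.bearing_deg))
        if 0.0 < ahead < horizon_m and abs(lateral) < lane_half_width_m:
            return True
    return False

# A deer-sized blob 20 m ahead, 3 degrees off center: brake, whatever it is.
print(object_in_path([Detection(bearing_deg=3.0, range_m=20.0)]))  # True
```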
If I saw one of those driverless Google cars and decided that I wanted to crash into it, would that be possible? If so, what strategy would be required to execute the maneuver? Or are they so efficient at crash avoidance that it would be like trying to pick up a drop of mercury: no matter what I did with my car, they could escape?
Followup: If such a crash were possible, what would the engineers have to do in order to make it impossible?
I’m sure you could. At a minimum, make the car choose between crashing into you or crashing into something else. You drive alongside it, let’s say on a four-lane major road, and sideswipe it or run it off the road. Or you come up behind it in traffic and just ram into it.
Well, sort of. The problem isn’t the guidance and control, but recognizing that there is an obstacle and what kind of obstacle it is, e.g. is it a fixed obstacle to be navigated around, or a moving obstacle that will remove itself from the path of the vehicle? The Google cars are limited to relatively slow speeds (no more than 35 mph, although having been stuck behind one I’ve noted they typically drive slightly below the local speed limit), and with a few exceptions there is an occupant in the car who can take over the vehicle in case it encounters a problem it cannot cope with. Only four states so far have approved truly driverless vehicles. When the vehicle does encounter a problem that it cannot characterize or deal with, it goes into a fail-safe mode where it pulls to the curb and either lets the occupant take over or calls into a control station to receive instructions or service.
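That fail-safe handoff is essentially a small state machine. A rough sketch, with all the mode names invented for illustration:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    FAILSAFE_PULL_OVER = auto()
    HUMAN_CONTROL = auto()
    AWAITING_REMOTE_HELP = auto()

def next_mode(mode, obstacle_characterized, occupant_present):
    """If the car can't characterize what it sees, pull to the curb,
    then hand off to the occupant or phone home for instructions."""
    if mode is Mode.AUTONOMOUS and not obstacle_characterized:
        return Mode.FAILSAFE_PULL_OVER
    if mode is Mode.FAILSAFE_PULL_OVER:
        return Mode.HUMAN_CONTROL if occupant_present else Mode.AWAITING_REMOTE_HELP
    return mode

m = next_mode(Mode.AUTONOMOUS, obstacle_characterized=False, occupant_present=True)
print(m)                         # Mode.FAILSAFE_PULL_OVER
print(next_mode(m, True, True))  # Mode.HUMAN_CONTROL
```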
All that being said, provided the software can discriminate between real and apparent hazards, it can certainly respond faster than the best reflexes of a human driver, and, within the parameters given to it, with more precise and controlled action, just as a traction control system can respond to a potential loss of control faster than the best professional driver.
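To put rough numbers on that reflex advantage, here is a back-of-the-envelope calculation. The 1.5 s human reaction time, 0.1 s computer latency, and 7 m/s² braking deceleration are all illustrative assumptions, not Google’s figures:

```python
def stopping_distance(speed_ms, reaction_s, decel_ms2=7.0):
    """Distance covered during the reaction delay, plus braking
    distance under constant deceleration: v*t + v**2 / (2*a)."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

v = 15.6  # roughly 35 mph, in m/s
print(f"human    (1.5 s reaction): {stopping_distance(v, 1.5):.1f} m")  # ~40.8 m
print(f"computer (0.1 s reaction): {stopping_distance(v, 0.1):.1f} m")  # ~18.9 m
```

At the car’s own top speed, shaving the reaction delay roughly halves the total stopping distance.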
Yeah, that is the trick. You don’t want it to slam on the brakes and have the car behind you rear-end you because the wind floats a plastic bag into your lane.
But you do want the car to slam on its brakes if a plastic ball bounces into your lane (even if the sensors can’t see the child chasing it).
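That bag-versus-ball distinction amounts to a hazard policy layered on top of object classification. A toy version, with made-up categories and thresholds:

```python
def should_brake(obj_kind, est_mass_kg):
    """Toy hazard policy: ignore obviously harmless debris, but treat
    a ball as a proxy for a child who may be chasing it."""
    harmless = {"plastic bag", "leaves", "paper"}
    child_proxies = {"ball", "toy", "bicycle"}
    if obj_kind in child_proxies:
        return True            # soft target, but what follows it isn't
    if obj_kind in harmless and est_mass_kg < 0.5:
        return False           # drifting debris: don't panic-stop
    return True                # anything unrecognized: err toward braking

print(should_brake("plastic bag", 0.05))  # False
print(should_brake("ball", 0.4))          # True
```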
I am sure that there will still be accidents with robo-cars. Fewer accidents than with today’s cars, but there are still massive legal issues to sort out.
Yes, I don’t see the Google car discriminating between solid and soft targets, and it can’t tell that someone is trying to jump into the lane and then jump back out again. Common sense is lacking. But software can outperform a human in evasive action. Presumably it monitors 360 degrees, so it knows whether swerving is safe, and it maintains enough separation to stop safely if a load falls off the vehicle ahead. Presumably too it has a 360-degree dashcam, so it will be easy to affix blame. However, as the old saying goes: “It’s hard to make anything idiot-proof because idiots are so clever.” If you are determined to crash into it, you might succeed.
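The “is swerving safe” decision could be as simple as checking what that 360-degree view says about the adjacent lanes. A minimal sketch, assuming the perception layer already reports lane occupancy (all names invented):

```python
def evasive_action(front_blocked, left_clear, right_clear):
    """Swerve only when the 360-degree view says an adjacent lane is
    actually empty; otherwise just brake as hard as possible in lane."""
    if not front_blocked:
        return "continue"
    if left_clear:
        return "swerve left"
    if right_clear:
        return "swerve right"
    return "brake hard in lane"

print(evasive_action(True, left_clear=False, right_clear=True))  # swerve right
```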
One thing I wonder: has anyone tried putting one of these cars through the same driver’s test a human needs to pass to get a license? How did it fare? If it’s good enough, it’s good enough -- if a robot that passes the test isn’t allowed on the road, then a human shouldn’t be either, and the test isn’t hard enough.
It seems unlikely to me that they could cope with every unusual situation that comes up. It would seem to require true artificial intelligence. Could it deal with these situations?
-there’s an obstacle in the road. Is it worth swerving around it and possibly causing an accident? Can it tell a cardboard box from a giant hunk of steel? A dog from a baby?
-there’s water in the road. Can it figure out how deep it is, whether it is safe to cross? Can it recognize black ice?
-can it accept hand signals from a police officer?
-can it deal with construction zones, conflicting lane painting, unmarked pavement etc? Unmapped dirt roads?
-there’s an imminent danger in the road. Can it decide whether it’s worth running off the road to avoid it -- say, running into a shallow ditch vs. driving off a cliff? (A toy version of that trade-off is sketched below.)
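That last one is, at bottom, picking the least bad outcome from whatever escape routes exist. A toy cost-ranking sketch, where the scores are pure invention:

```python
# Invented severity scores: lower is better. A real system would derive
# something like this from maps, sensor data, and occupant-risk models.
OUTCOME_COST = {
    "stay in lane, hit obstacle": 80,
    "run into shallow ditch": 20,
    "drive off cliff": 1000,
}

def least_bad(available_options):
    """Pick whichever escape route has the lowest estimated cost."""
    return min(available_options, key=lambda o: OUTCOME_COST[o])

print(least_bad(["stay in lane, hit obstacle", "run into shallow ditch"]))
# -> run into shallow ditch
```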
It’s like the joke about outrunning a bear: it doesn’t have to be the best in every situation; on balance it just has to be better than people - and more cost-effective.