Will unrealistic expectations of driver less cars make them infeasible?

Safety is always raised anytime I see an article about driver less cars. OMG, there’s a computer relying on sensors. What if there’s an error?

That of course doesn’t apply to human drivers. Drivers who talk on phones, eat hamburgers, apply makeup, watch DVDs, and gosh knows what else while driving. I don’t know the statistics, but I suspect most wrecks are caused by human error.

Will the demand for perfection doom any driver less car project? Let’s say the error rate resulting in wrecks was 1 car for every 50,000 cars on the road. That has to be much better than human drivers. But would the public accept a few hundred wrecks (nationwide) a year from driver less cars? How will the car insurance industry react?

Also, does the public realize speeding will be a thing of the past? :slight_smile: My GPS already flashes a warning when my speed is in excess of posted limits. The GPS maps know the speed limits on any road. It would be simple to use that information to regulate the speed of driver less cars.
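The speed-regulation idea above amounts to clamping the car’s target speed to the limit the map reports. Here’s a minimal sketch; the function and parameter names are hypothetical, and a real system would also have to validate that the map data is current and fall back safely when no limit is known:

```python
def governed_speed(requested_mph, posted_limit_mph):
    """Clamp the car's target speed to the posted limit from GPS map data.

    Hypothetical sketch only: a production governor would also handle
    stale map data, unknown limits, and disagreement between sensors.
    """
    return min(requested_mph, posted_limit_mph)

# The car wants 72 mph, but the map says this road is posted at 65 mph.
print(governed_speed(72, 65))  # prints 65
```

Trivial as it looks, this is exactly the information the GPS warning feature already uses; the only new step is feeding it into the speed controller instead of a dashboard alert.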

Driver less cars will be here sooner than we realize. What happens after the first wreck? Will that result in public demands to require human drivers?

NY Times poses that question.

I doubt it, if the statistics show that driverless cars are safer than human-controlled ones.

I also would say that with wide acceptance of driverless cars, speeding will be less of an issue enforcement-wise, but our travel speeds will increase and roads will be safer and handle more volume. Since the cars will have some general centralized guidance, which I suspect will be controlled by some state authority, that guidance would supersede the speed limit: any above-the-limit travel would by default be ‘with the state’s permission’.

I think an alert and situationally aware driver could outperform any driverless car. It’s not man vs. computer; there are just so many variables to program for that there will be a few wrecks, especially when human-driven cars share the road with driverless ones.

The public’s driving habits have degraded so much that I’m beginning to realize driverless cars would be safer. A friend of mine once made a very shrewd observation. We were at the Mall noticing some really odd people. People I wouldn’t even approach to ask the time. My friend commented, “You do realize most of these people drove here and will be driving back home.” :eek: The school screw-up who couldn’t walk and chew gum at the same time probably has a driver’s license. :stuck_out_tongue:

I recall last year there was a big outcry because some robotic arms used in surgery had made mistakes. There again, imagine a surgeon who has worked a 55-hour week and is still hung over from the drinks he had last night. I’d trust that robotic arm over the surgeon for very precise and detailed cuts. Nothing can replace a surgeon’s experience and training in decision making, but routine scalpel work can be done more precisely by machines.

It won’t be long before one of these cars is involved in an accident where someone is injured and it will be an additional decade or more before we see them on the public roads again.

That’s what I’m afraid will happen. One serious accident and the program will be set back a decade.

We hold machines to this ridiculous standard of perfection. No one wants to look at it objectively: human drivers cause this many accidents annually; machine drivers cause this many. Right now there’s no machine data to compare, so all anyone can do is project how reliable driverless cars would be.

I guess my background in computer programming gives me a different viewpoint. I’ve always been amazed at how good computers are for repetitive tasks.

When has the public at large ever been rational about risk assessment?

You know how when a plane goes down, there’s a temporary hysteria followed by a lot of people being too afraid to fly for a while? It doesn’t matter that the 1500 mile trip you want to take is a whole lot safer by plane than by car. You view the auto risk as mundane and dismiss it, whereas you view the airplane risk as exotic and overvalue it.

Similarly, there’s going to be a lot of media attention focused on the new cars, and a general attitude of public distrust. So even if the automated cars are 10 times safer, every time one gets in a dramatic accident, it’s going to become part of the 24-hour news cycle. People won’t realize that a few crashes a day from millions of cars isn’t a lot; rather, they’ll see a report about each one, or at least some significant fraction of them, and, unable to put that into context, they’ll assume that it’s very risky and unsafe.
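The base-rate point can be made concrete with entirely made-up numbers (both figures below are hypothetical, chosen only to show the arithmetic, not drawn from any real fleet):

```python
# Hypothetical figures: 2 million automated cars, 5 newsworthy crashes a day.
fleet_size = 2_000_000
crashes_per_day = 5

# Per-car daily crash odds: tiny, even though every crash makes the news.
print(f"1 crash per {fleet_size // crashes_per_day:,} cars per day")
# prints: 1 crash per 400,000 cars per day
```

Five reports a day sounds alarming on a news crawl; one crash per 400,000 cars per day does not, yet they describe the same fleet.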

Computer-driven cars will still make errors, and those errors will require insurance. They will become widespread at the point where their error rate is significantly lower than that of human drivers. Race-track driving has already shown that computers can be programmed to drive faster than the most daring human without losing control. However, the future of driving in traffic isn’t in driving fast; it is in driving safely, and probably slower in most circumstances, while the driver/passengers do something restful. If I know I can read a book instead of drive, all I have to do is leave at the time my computer suggests, get comfortable, and read or nap, and I arrive right on time. Or leave extra time for traffic slowdowns and have the car drive around the block a few times when I get there.

One issue I anticipate is expecting the human driver to intervene if the driverless car begins to malfunction. Airlines already know that pilots relax when autopilot is turned on. I think they are required to stay in the cockpit? But realistically you know they are daydreaming, napping, or something. What else would they do during a long transatlantic flight on autopilot?

I’d guess local traffic laws will require a driver behind the wheel for many years to come. I’m not sure the driver could react in time if the driverless car’s navigation failed. The driver would probably be distracted watching movies, reading, or sleeping.

The biggest problem facing autonomous vehicles is that the other vehicles are not autonomous. There seems little question to me that the wildcard of anticipating and responding to irrational, random, and illegal actions by human-driven vehicles on a shared road makes accidents much more likely than a circumstance in which all vehicles were computer-controlled and able to engage in vehicle-to-vehicle communication.

That said, data will be saved, and fault, which will very likely lie with the human-driven vehicle, can be determined.

What will be the result when the first ten serious accidents involving autonomous vehicles are all determined to have been clearly caused by the driver of the other vehicle, and when injuries to the occupant(s) of the autonomous vehicle (and possibly the human-driven one) were reduced by the autonomous vehicle’s rapid response: braking more quickly than a human could have and deploying the airbag microseconds in anticipation of contact?

Mind you, I am not saying that the error and failure rate of autonomous vehicles will be zero. But on roads shared with humans, there is little doubt that humans will be at (clearly documented) fault in human-autonomous vehicle collisions at many times the rate.

I see the likelihood of those accidents as the public relations story that will be the lede. The demand, as a higher percentage of vehicles becomes autonomous, will be for lanes that are free of human drivers, and for autonomous vehicles capable of vehicle-to-vehicle (V2V) communication to form ad hoc convoys with less inter-vehicle spacing - improving mpg, traffic flow, and safety simultaneously.

There was an interesting article in The New Yorker about driverless cars recently. Turns out that the current Google driverless car prototype has driven over 500,000 miles without an accident. Although it’s had a driver at the wheel ready to take over when necessary, it can currently go about 50,000 miles on a freeway without making a mistake: this is expected to double to about 100,000 miles in the next year or so, and will almost certainly eclipse human drivers within a few years.

By the way … people seem to have little problem OMG depending on machines relying on sensors, trusting their lives to them in many ways, from the simple train crossing to implantable defibrillators shocking their hearts based on sensed abnormal rhythms. What limits insulin pumps that respond in real time to detected blood sugar changes is not diabetics distrusting a machine (which will have a defined error rate) but sensor technology that is not yet up to snuff. Once it is, a defined error rate will be considered a known risk offset by the known benefits.

Interesting aside … autolanding of commercial jets in conditions that humans could not manage safely has been around a long time without much controversy other than cost, despite the fact that sensor error played a role in at least one crash.

Which is a good argument against emergency manual overrides.

There are still two pilots in the cockpit of those planes. To drivers, those systems are the equivalent of cruise control and cars that park themselves*. It isn’t really an issue of actual safety; it’s a matter of perception, and the Frankenstein Syndrome. Those evil scientists at Google want to unleash another monster on the world. It’s time for pitchforks and torches!

“driverless” not “driver less”. It’s right there in the article you quoted!

If a driverless car crashes, who is responsible? If a human driver makes a mistake, they will tend to be found at fault in an accident and have to pay up. Will car manufacturers be held responsible, or will there be a designated ‘driver’ whose insurance is invoked even though they had no input into what happened? Maybe people will only carry first-party insurance because there is no third party to claim against any more.

I could see some roads, particularly highways, being autonomous-only at certain times such as rush hour: want to drive manually, take another route. Perhaps starting with special lanes comparable to HOV lanes, but once this foothold has been established I expect it to spread, to the point that if traffic conditions warrant, you must switch to auto-drive mode, pull over and wait, or get off that road via a provided escape route.

The automated systems are generally used in particular when a skilled human pilot could not manage the landing (due to, for example, poor visibility) and would otherwise not be attempting one.

Not quite the same circumstances as that used for cruise control and parking.

And the one documented accident blamed on an autolanding system error demonstrated the lack of back-up safety gained by having the pilots present.

Note the lack of horror from the flying public at the fact that planes autoland in circumstances too dangerous for skilled pilots to attempt, even in the face of an accident documented to have been caused by system error.

The public doesn’t realize these things. The public aren’t airline pilots; the public drives cars.

I think there’s a touch of overconcern about this issue. Insurers will offer coverage against systemic failure, doing the math just as they do for potential damages caused by human drivers. I really don’t know if insurers will in turn pursue manufacturers for compensation but I don’t think this uncertainty is a dealbreaker for the advance of the technology. If accident rates – and deaths and injuries – are dramatically lowered, then all three legs of the stool (humans, manufacturers, insurers) are the better for it.