Why the Popular Resistance to Driverless Cars?

I think the more interesting question is the one asked by a previous thread: should a driverless car be programmed to save its driver and passengers regardless of outside casualties?

People fear things that are new, and minimize the risks of things that are common. This happens with a lot of new technology. Lots of people were afraid of cars*, electric lights, trains, radios, microwave ovens, indoor toilets, online shopping, etc.

*The original fears about cars are interesting because cars did, in fact, kill lots of people…but people ultimately didn’t care because of all the value cars added to their lives.

If/when the technology becomes commonplace, there will come a day when only poor people and eccentrics drive cars themselves, and it will be big news every time one of them is involved in a crash that kills or maims a good old-fashioned normal person in a self-driving car. Perky White Female Honor Student Dead In Crash With Manual Car will be on the news on a weekly basis until driving is heavily restricted or banned.

Yes this. Once self-driving cars pass the point of thousands of people using them safely, public opinion will flip very quickly.

Actually, I think this is pretty much the only question that ever gets asked WRT self-driving cars, both on the Dope and in the wider media. This thread was a refreshing break from that.

It’s not really how safety-critical software works.

There are already plenty of control systems out there in the world that make decisions that could potentially injure or kill humans. There is no programmed-in algorithm for deciding whether to kill Ann or Bob; the system just does all it can to keep casualties at zero. If it turns out there is some situation we hadn't thought of where that proves not to be possible, then the software is recalled and the hole patched.

No-one is going to program in a death panel.
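To make that concrete, here's a toy sketch (not any vendor's real code; every name and threshold below is invented) of how such a safety layer tends to be structured. Note that there is no branch anywhere asking who or what the obstacle is:

```python
# Toy emergency-braking monitor. The identity of the obstacle never
# enters the logic; the system just reacts to anything it is closing
# on, doing everything it can to keep casualties at zero.
BRAKE_TTC_SECONDS = 2.0  # assumed trigger threshold (time-to-collision)

def control_step(obstacle_distance_m: float, closing_speed_mps: float) -> float:
    """Return a brake command in [0, 1]."""
    if closing_speed_mps <= 0:
        return 0.0  # not closing on anything; no action needed
    time_to_collision = obstacle_distance_m / closing_speed_mps
    if time_to_collision < BRAKE_TTC_SECONDS:
        return 1.0  # full braking, for anyone and anything
    return 0.0

print(control_step(obstacle_distance_m=15.0, closing_speed_mps=10.0))  # -> 1.0
```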

If you’ve ever had a computer crash or freeze, or found bugs in a program, then you’ll know why people are reluctant to trust a computer with their lives.

How do you isolate things that are dependent on each other?
The “key systems” themselves will need updating…and the only way to do that is to connect to the network.
There are millions of situations that the car can handle, based on programmers using simulators in an office. But there are millions of situations that nobody has yet thought of.
And after many thousands of driverless cars are on the road, those situations will get discovered. There will be a need to update every car to learn the new procedures.

And that means that the car will have to be connected to the internet. The Internet of Things is going to include cars.

For a specific example, let's take the famous Tesla crash last year. The car confused the square shape of a crossing truck with the square shape of an overhead sign. So the programmers learned that they have to update the software to take into account the angle of the sensors or whatever. That update has to be delivered through the communications network, yet it directly changes the "key system" of the car which determines how not to crash into a truck.
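For what it's worth, the standard mitigation here is that updates do come over the network, but the car refuses to install any image whose cryptographic signature doesn't check out. A minimal sketch, using stdlib HMAC as a stand-in for the public-key signatures real automotive OTA systems use (all names here are invented):

```python
import hashlib
import hmac

# Illustrative only: real systems bake a public key into the car and
# the vendor signs images with the matching private key.
VENDOR_KEY = b"demo-key-baked-into-the-car"

def is_authentic(firmware_image: bytes, signature: bytes) -> bool:
    expected = hmac.new(VENDOR_KEY, firmware_image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)  # constant-time compare

def apply_update(firmware_image: bytes, signature: bytes) -> None:
    if not is_authentic(firmware_image, signature):
        raise ValueError("rejecting unsigned/tampered update")
    # ...flash to the inactive partition, verify, reboot into it, and
    # keep the old image as a fallback (the usual A/B update scheme).

image = b"\x00" * 16  # stand-in for a firmware blob
apply_update(image, hmac.new(VENDOR_KEY, image, hashlib.sha256).digest())
```

That doesn't make hacking impossible, but it does mean "connected to the network" and "anyone can rewrite the key systems" are not the same thing.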

If my home computer gets hacked, it’s a hassle, but not serious. I don’t want a car that can get hacked, or locked down by ransomware.

I understand most airline flights are pretty much flown by computers. There are pilots monitoring the whole thing, of course, but they probably could do away with them.

People can hack the majority of cars on the road right now. Cars have had computers, and have thus been hackable, for quite a while. Your ability to control your car can be taken away from you already.

Driverless cars would arguably be more hackable (I'm not sure myself), but that would only be a matter of degree rather than an entirely new problem.

For those concerned about bugs/glitches, there are principles in software engineering for producing software with a mean time to failure (MTTF) beyond the heat death of the universe. Why don't all developers build software this way? Because it is time-consuming and expensive. However, some software is built this way for particular safety systems.
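One such principle, sketched below with deliberately toy functions (all invented for illustration): run independently developed implementations of the same calculation and take a majority vote, so a bug in any single version can't reach the output. It's the software analogue of triple modular redundancy:

```python
from collections import Counter

# Three independently written versions of the same calculation;
# v3 carries a deliberate bug (it doubles the result).
def braking_distance_v1(v: float) -> float: return v * v / (2 * 7.5)
def braking_distance_v2(v: float) -> float: return v ** 2 / 15.0
def braking_distance_v3(v: float) -> float: return (v ** 2 / 15.0) * 2

def vote(*results: float, tolerance: float = 0.01) -> float:
    # Bucket results so tiny floating-point differences still agree,
    # then require a majority before trusting any value.
    buckets = [round(r / tolerance) for r in results]
    value, count = Counter(buckets).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: enter fail-safe state")
    return value * tolerance

v = 20.0  # m/s
print(vote(braking_distance_v1(v), braking_distance_v2(v), braking_distance_v3(v)))
# -> ~26.67: the buggy version is outvoted
```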

Also, it is possible to design software to do what's called "fail safe". In other words, when something goes wrong, it will attempt to enter the safest possible state. The classic example is a control system for a valve. If the safest state is for the valve to be closed, then when a failure is detected (again, entirely possible) the failure-state code will close the valve. Obviously for some systems, and depending on the specific failure, a true safe state may not be possible. The classic example is software controlling an airplane in flight. There is no safe state in such a case. That's why airplane flight control systems are designed with incredibly high MTTF.
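Here's that valve example as a minimal sketch (everything named here is invented): each operation is wrapped so that any detected fault drives the system to its predefined safe state before anything else happens:

```python
class Valve:
    def __init__(self) -> None:
        self.position = 0.0  # 0.0 = closed (the safe state), 1.0 = fully open

    def set_position(self, p: float) -> None:
        if not 0.0 <= p <= 1.0:
            raise ValueError("command out of range")
        self.position = p

    def close(self) -> None:
        self.position = 0.0

class ValveController:
    def __init__(self, valve: Valve) -> None:
        self.valve = valve

    def run_step(self, command: float) -> None:
        try:
            self.valve.set_position(command)
        except Exception:
            # Bad input, sensor fault, actuator timeout -- whatever went
            # wrong, reach the safe state first and diagnose afterwards.
            self.valve.close()
            raise

controller = ValveController(Valve())
try:
    controller.run_step(1.5)  # out-of-range command triggers the fault path
except ValueError:
    pass
print(controller.valve.position)  # -> 0.0: the failure left the valve closed
```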

Similarly, it is possible to build software which is virtually impossible to hack, by proving that it responds properly to all inputs… Again, building such software is time-consuming and expensive.
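For a small enough input space you can literally check every case against the specification; the industrial-strength version of the same idea uses model checkers and proof assistants (the formally verified seL4 kernel is the famous example). A toy illustration:

```python
# An 8-bit adder that clamps instead of wrapping around.
def saturating_add(a: int, b: int) -> int:
    return min(a + b, 255)

# Exhaustively verify the spec over all 256 * 256 possible inputs.
# (In real verification the spec is stated independently of the code.)
for a in range(256):
    for b in range(256):
        out = saturating_add(a, b)
        assert 0 <= out <= 255                           # output stays in range
        assert out == (a + b if a + b <= 255 else 255)   # matches the spec
print("all 65,536 cases verified")
```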

That’s a myth. The pilots actually do a lot on every flight, and there’s no possibility of dispensing with them.

See
We are told that planes basically fly themselves. How true is this?

I suspect (with no evidence whatsoever) that people find deaths caused by humans more acceptable because there is an easily identifiable party at fault, at least theoretically. With driverless cars, deaths, even if there are fewer of them, could be perceived to have "just happened," and that can be scary to some people. Again, just my armchair-psychologist idea.

Computers crash or freeze or have bugs in their programming less frequently than people do, including while driving, yet I am as OK as I can be trusting someone else to drive me, or vice versa.

I think this is a big part of it as well. Most people aren't familiar with the highly reliable computer systems found only in certain industries. People have consumer-quality technology, which often has problems. I would guess that every month, consumers have to deal with a few technology issues that require rebooting/resetting a device like a TV, router, cable box, or phone. That does not give people confidence that a highly complex device like a car will work reliably enough not to kill them.

The real inflection point is when the superior safety record of self-driving cars tips the insurance market to the point where insurance rates for human drivers become seriously painful.

The discussion seems to have focused almost entirely on safety and liability issues with autonomous vehicles. And those are real issues. But the other big one is the elimination of jobs for drivers: something like 5 million people in the US drive for a living, in some form.

There’s a long historical track record that advances in technology have not in the long term reduced employment, or else the difference between the 90% or so of the population that used to do agriculture and today’s tiny % would be sitting around. But it’s also a clear historical fact that such transitions can be very hard on some people.

So whether or not 'this time it's different', as in predictions of AI causing widespread permanent unemployment, it would certainly have losers. And one of the general features of the populism sweeping many Western countries (left- and right-wing versions have this in common, even if they often hate each other too much to admit it) is dissatisfaction with the answer, from so-called 'elites', that 'the general welfare will be advanced even if there are some losers and some winners'.

This isn’t as much of an issue for personal vehicles. In that case if people really want the car to drive them (I like others do not, at all) so be it and the effect on others of safety/liability is the issue. But again for commercial driving there will IMO be a serious political issue wrt employment effects even shorter term, though maybe safety/liability will be used as excuse instead of being ‘against progress’.

If the car is on a winding mountain road and comes around a blind curve where some people are standing and blocking the road, there would in fact be a dilemma much like the one everyone talks about. Or if an accident up ahead causes several of the vehicles involved to skid across the roadway, blocking all routes momentarily (while the autonomous vehicle is just a few car lengths back, traveling at highway speed).

Now, as I mentioned, a realistic computer system can't be expected to always correctly identify what is in front of it. It doesn't need that capability; it just needs to be able to use LIDAR and cameras (and maybe radar and ultrasonics) to determine which part of the scene ahead is safely drivable, to plan control inputs predicted to steer the car into the safest drivable region of the area ahead, and to send that plan to the actuators to implement it. If multiple drivable regions are available, it would pick the one that obeys traffic laws and keeps the car in its own lane, but avoiding collisions trumps obeying any laws.

So in the case of people across the road, the car may not even identify them as people (it might; it depends on how they are dressed, the lighting conditions, and so forth). It might simply stay in its lane and apply the brakes at maximum if there is no direction to swerve that avoids a collision, or it might swerve into the other lane if, say, the people standing there are a few feet back, which means less energy at impact.
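A sketch of the planning logic described above, with every name and number invented for illustration: candidate paths get filtered by a hard no-collision constraint first, and only among the survivors do the soft penalties (leaving your lane, breaking a traffic rule) matter:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str
    collision_free: bool     # from the LIDAR/camera drivable-space estimate
    stays_in_lane: bool
    legal: bool
    impact_speed_mps: float  # predicted impact speed if not collision-free

def plan(candidates: list[Candidate]) -> Candidate:
    safe = [c for c in candidates if c.collision_free]
    if safe:
        # Among safe paths, prefer staying in lane and obeying the law.
        return min(safe, key=lambda c: (not c.stays_in_lane) + (not c.legal))
    # No collision-free path exists: minimize impact energy (~ v^2);
    # traffic laws no longer factor in at all.
    return min(candidates, key=lambda c: c.impact_speed_mps ** 2)

options = [
    Candidate("brake hard in lane", False, True, True, 10.0),
    Candidate("swerve onto shoulder", True, False, False, 0.0),
]
print(plan(options).label)  # -> "swerve onto shoulder"
```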

Well, funnily enough, this is just the example I have used many times on the Dope when I want to illustrate how these contrived situations are not thought out properly. :smiley:

The first issue is the concept of blind bends. Yes, they exist, but they’re atypical. For most bends the road is wide enough to see much of the road ahead as you navigate the turn, and you position your car to maximize your view.
If, hypothetically, I’m on a mountain road with a sheer drop on one side, and I encounter an actual-factual blind bend, I slow right the hell down.

Then we ask ourselves the question: how quickly would I want a self-driving car to go around such a treacherous road? And the answer clearly is: slow enough that it could do something sensible in time if there were a brick wall around the corner (because there could be a pile of rocks, or a parked car, essentially acting as a brick wall). If that makes it a tad slow for such routes, so be it.
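That "slow enough to stop for a brick wall" rule is just arithmetic: reaction distance plus braking distance v²/(2a) has to fit inside your sight distance. A quick sketch with rough, assumed numbers:

```python
from math import sqrt

def max_safe_speed(sight_distance_m: float,
                   deceleration_mps2: float = 7.0,  # assumed hard braking, dry road
                   reaction_time_s: float = 0.5) -> float:
    # Solve sight = v * t_react + v^2 / (2a) for v (quadratic formula).
    a = 1.0 / (2.0 * deceleration_mps2)
    b = reaction_time_s
    c = -sight_distance_m
    return (-b + sqrt(b * b - 4 * a * c)) / (2 * a)

# A blind mountain bend with roughly 20 m of visible road:
v = max_safe_speed(20.0)
print(f"{v:.1f} m/s ~ {v * 3.6:.0f} km/h")  # -> 13.6 m/s ~ 49 km/h
```

So "a tad slow" means something concrete: under these assumptions, about 49 km/h on that bend.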

It’s still hard to visualize a scenario where there is no safe direction at all for the car to swerve, yet it still has time to perform some maneuver. But regardless, I don’t see how the dilemma applies here.

Agreed.

Much software is garbage, & there is a strong tendency to release it before the bugs are worked out…

By the way, to answer the title question: I know about the statistics - that if I make it to a flight, the drive there was the most dangerous part. Still, when I get on an airliner and they seal the door, at a certain level I am quite aware that I'm trapped. Either I exit the plane in handcuffs (for creating a disturbance if I were to get up and try to leave), or it takes off and we either make it to the destination or someone who is not me screws up and we plummet to our deaths. You read about past aircraft total losses where the passengers would have been aware of their fate for minutes, or in some cases hours.

One theory (that fits all the facts) as to what happened on MH370 is that an electrical fire under the cockpit incapacitated the pilots (this would explain why the plane changed course and the transponder went off - the emergency response to such a fire involves pulling those breakers). Since the cockpit has a separate air supply, the pilots probably became unconscious from inhaling combustion products, and since the cockpit doors are reinforced, nobody else could get in. It's quite possible the passengers and flight attendants were still alive and well for many hours afterwards, clawing at the cockpit door in desperation, until the plane ran out of fuel…

Anyways, if you get into an autonomous car, it’s the same idea. Sooner or later they are going to remove the steering wheel. And even if there is a manual override, the switch to enable it could fail or the software could fail in such a way that it ignores that switch. Sooner or later, you would add servo-driven locking mechanisms to the doors, like on an elevator, that lock the doors when the car is moving so the occupants can’t bail and hurt themselves…

Except there’s no silver-haired, steely-eyed captain at the helm. There’s a computer program, and, well.

My home computer rarely hangs or has a problem requiring a reboot - probably no more than once every month or so. So I can estimate that my nice self-driving car will have a wreck caused by a computer problem only at about that frequency.

Where do I sign up?

And the automobile industry sure ain’t one of them!

We trust airplane engineers to make an autopilot that is robust and safe.
But we have 100 years of experience with the automobile industry–and they have never added a safety feature to a car voluntarily.

For them, marketing is everything. And we are seeing that in practice now, as the new autonomous cars begin to appear—the various companies all fight for publicity with flashy shows at exhibitions, all making unrealistic promises that their model will be the first on the road within the next couple of years.
The marketing departments will put cars on sale long before the engineers have adequately tested the software.