Why the Popular Resistance to Driverless Cars?

Forget about the economics of the matter for a minute. Obviously, people who make their living driving aren’t going to be happy about the prospect of driverless cars. I’m talking about regular people who say they’d never use a driverless car. Which people? Well, basically all of my friends and everyone on my team at work. It was a slow Friday yesterday and, somehow, the subject came up and I was the only person on my team who thinks driverless cars are a good idea. To me, the case for driverless cars is so obvious that you’d have to be an idiot to fail to at least grasp its theoretical merits. Consider: [ul]

[li]Driverless cars can’t text and drive, drink and drive, or fall asleep at the wheel.[/li]
[li]Driverless cars would have reflexes almost infinitely quicker than even the best human driver.[/li]
[li]Driverless cars would be simply incapable of breaking traffic laws.[/li]
[li]Driverless cars would know, to the millimetre, the relative position of every other car in the vicinity. None of that ‘objects in the rear view mirror’ crap.[/li]
[li]With fleets of driverless cars cruising the roads, just waiting for prospective passengers to flag them down, auto theft would essentially become a thing of the past.[/li]
[li]Taking fallible humans out of the equation would dramatically reduce the number of car accidents, which, in turn, would dramatically reduce the number of traffic snarl-ups, which, in turn, would result in faster journeys and increased fuel efficiency.[/li]
[li]Passengers would have more free time. Instead of focussing on the road, you could read a book, catch up on e-mails, even meditate.[/li]
[li]It would be easier for disabled people to get around.[/li]
[li]You’d never, ever have to go to the fucking DMV ever again.[/li]
[/ul]

And that’s just off the top of my head.

I can think of some downsides, too. Initially, driverless cars would be very expensive. However, like all technology, they’d get cheaper over time as manufacturers figure out less costly ways to make them. Once they reach a certain level of popularity, manufacturers could utilise economies of scale to bring the price down even further, so this would likely only be a temporary problem, albeit one that might persist for quite a while. It also might be more difficult to determine fault if there’s an accident. Also, there’s the very worrying possibility that the cars could be hacked. I don’t know enough about software encryption to speculate about the means by which manufacturers could prevent this, but even in the worst case scenario we’d at least get some decent episodes of Black Mirror out of it :slight_smile:

The thing is, pretty much everyone I’ve spoken to has a different objection. They just don’t trust driverless cars. Everyone on my team gave a variation of the same argument: driverless cars aren’t perfect. There’d still be accidents.

Well, duh! Of course there’d still be accidents. But driverless cars don’t need to be perfect. They just need to be better than us. And we, generally speaking, suck at driving. There are about 30,000 fatalities on US roads every year, nearly all of which are caused by unforced human errors. You may be a great driver, but that doesn’t matter if you’re surrounded by bad drivers, and nobody is ever far away from a bad driver. Given our lousy track record, driverless cars can only make transportation safer.

So, is there a reasonable case to be made against driverless cars which outweighs the obvious benefits? Or is this a case of people simply being afraid of the unknown?

Giving up control.

For many people, their car is an expression of who they are or how they want others to see them.

People overestimate their driving ability, their reaction time, their attention span, and their ability to identify and appropriately respond to hazards.

People also like to be in control, even if, statistically, they are less safe when they are the ones in control. For instance, people drive to the airport and are then afraid of getting on the plane, even though they were several times more likely to get themselves killed on the drive over.

Relinquishing control to another human is hard enough for most people. I know I don’t like to be in cars that other people are driving. Relinquishing control to something that is less understandable is even harder.

As long as people are out there who think it is fun to hack into systems and plant viruses, I will not ever trust a driverless car. If you want me to trust the computer in the car, then you had better do something about hackers first.

Because humans are really, really bad at risk assessment. We spend billions and billions on preventing terror attacks while most Americans are much more at risk from heart disease and, well… death by automobile.

I’ve seen analyses that focus on the flaws of driverless technology. And to be sure, there are flaws - no technology is perfect. But (cite needed) the last time I checked, something like 30,000 people die on US roads each year. Say the flawed technology could cut that in half. People would still focus on those deaths and, I suspect, ignore the overall point that far fewer fatalities took place.

I’m completely onboard with mostly getting rid of human drivers, but I think the sticking point will be human objection, not the technology itself.

The whole concept is fascinating. But it’s only just beginning to be developed. Like the old phrase concerning a little knowledge: I know just enough to be dangerous.
(ETA: No need to turn it loose on the public just yet.)

Much has been done about hackers, if you are talking about preventing them from getting into and messing with the systems of self-driving cars. It is pretty much impossible to get control of a self-driving vehicle without physical access. If you are talking about “doing something” about hackers, you’ll need to define exactly what that “something” is.

Are you concerned about someone getting under your hood and installing servos and actuators on your steering column and throttle? That is about the same amount of work that would be required to “hack” a self-driving car.

I’ll happily let the robocar drive me around safely and efficiently.

BUT

I demand a manual override. A REAL manual override. Hand pulls lever/pushes button, car becomes dumb tool that does what the ape in a suit wants, or at the least pulls over. Heck, I already have an issue with this about door locks: in any car of mine pulling the inside front door handle had better allow me out no matter what any system thinks.
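
In software terms, that demand amounts to a hard interlock that sits below the autonomy stack, so that no software state can refuse the lever. Here’s a minimal sketch in Python, with all names invented purely for illustration (a real system would do this in hardware):

```python
# Toy sketch of a hard manual-override interlock. All names are invented
# for illustration; this is not any manufacturer's actual design.

class ControlArbiter:
    """Chooses which command actually reaches the steering actuators."""

    def __init__(self):
        self.override_engaged = False  # set only by the physical lever/button

    def pull_override_lever(self):
        # Wired to the physical control path; not a polite software request.
        self.override_engaged = True

    def steering_command(self, autonomy_cmd, human_cmd):
        if self.override_engaged:
            return human_cmd           # the ape in a suit wins, always
        return autonomy_cmd

arbiter = ControlArbiter()
print(arbiter.steering_command("lane-keep", "hard-left"))  # -> lane-keep
arbiter.pull_override_lever()
print(arbiter.steering_command("lane-keep", "hard-left"))  # -> hard-left
```

The design point is that the override flag lives outside anything the autonomy software can veto, exactly like a door handle that opens no matter what the locks think.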

I think the popularity will increase.

Public opinion is more nuanced than the polls I have seen are reporting: some show many opposing driverless cars, some show most approving of them, but most likely it depends on exactly what is being asked:

On this issue, I think whatever opposition exists among the public is not bound to remain high, thanks to a familiar pattern in how new ideas and technologies get adopted.

When the population notices that powerful and popular people are not only adopting the new technology but benefiting from it, they begin not just to approve of the technology but to see it as a necessity. (Or just plain old “did you see what the Joneses got? We should keep up!”)

One big item that I think will accelerate the change: when insurance companies start giving big breaks to the teen kid drivers among the early adopters.

  1. If a driverless car does cause an accident, who’s to blame: the manufacturer of the car? The software writer? The person sitting in the car?

  2. Humans can make decisions when a crash is about to occur. Software needs to be programmed for every detail in advance. So you write code for the following situation (a toy sketch of such a rule follows after this list):
    One or more pedestrians are on the road in front of the car. If the car swerves, it hits a wall and the crash will kill (or severely injure) the passenger. If the car drives straight, it will kill (or severely injure) the pedestrian(s).
    What decision do you program? Is it numbers: one pedestrian over two passengers, two passengers over one pedestrian? Age: a child pedestrian over a 60+ passenger?

  3. Tests have shown that following the rules exactly, down to the letter, leads to accidents as long as human drivers are also on the road, because human drivers expect certain reactions while the robot reacts differently.
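
To make point 2 concrete, here is a minimal, purely hypothetical Python sketch of what explicitly encoding such a choice would look like; every rule in it is an assumption that some programmer (or regulator) would have to commit to in advance:

```python
# Hypothetical sketch of an explicit "who gets harmed" rule. The naive
# head-counting below is an assumption for illustration, not anyone's
# actual policy - which is exactly the point: someone has to write it down.

def choose_maneuver(pedestrians_ahead: int, passengers_aboard: int) -> str:
    """Return 'swerve' (hit the wall, endangering the passengers) or
    'stay' (brake in lane, endangering the pedestrians)."""
    # Naive utilitarian rule: minimise the number of people at risk.
    if passengers_aboard < pedestrians_ahead:
        return "swerve"
    return "stay"

# Two pedestrians vs. one passenger: this rule sacrifices the passenger.
print(choose_maneuver(pedestrians_ahead=2, passengers_aboard=1))  # -> swerve
# One pedestrian vs. two passengers: this rule protects the passengers.
print(choose_maneuver(pedestrians_ahead=1, passengers_aboard=2))  # -> stay
```

Adding the age question would mean putting numeric weights on individual lives, which is precisely the decision people find distasteful to make explicit.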

  1. How is this different than malfunctions that already occur?

  2. It’s false that cars are programmed for every detail. I think you have an outdated idea of control systems. Also, how is automatic decision making different than human? Different people will have different reactions in a situation like that. So?

  3. I’d like to see a link to those studies. Even if they do get in accidents, so what? Do they get into accidents more than humans do? No.

Malicious actors can hack into your car systems and plant viruses or shut down critical functions right now, and that will continue to get worse as cars become increasingly dependent upon computer-controlled systems for safety, reliability, and comfort. A driverless car can actually be made safer than one requiring a human driver by virtue of having an isolated diagnostic system, independent of the drive controller, which looks for signs of malfunction and puts the car into a fail-safe mode if it detects something going wrong, whereas current systems merely alert the driver (if that) should a problem occur, such as loss of throttle control.
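
As a rough illustration of that architecture (every name and threshold below is invented for the sketch, not any manufacturer’s actual design): a diagnostic process, isolated from the drive controller, that trips a fail-safe if the controller stops checking in.

```python
# Rough sketch of an isolated diagnostic watchdog. Names and thresholds
# are invented for illustration only.
import time

HEARTBEAT_TIMEOUT_S = 0.2  # assumed: controller must check in every 200 ms

class DiagnosticWatchdog:
    def __init__(self, failsafe):
        self.failsafe = failsafe              # callback that pulls the car over
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called by the drive controller on every healthy control cycle."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Runs on an isolated processor; trips the fail-safe on silence."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.failsafe("drive controller stopped responding")

def pull_over_safely(reason):
    print(f"FAIL-SAFE: decelerating and pulling over ({reason})")

watchdog = DiagnosticWatchdog(pull_over_safely)
watchdog.heartbeat()   # controller alive
time.sleep(0.3)        # simulate the controller hanging
watchdog.check()       # -> FAIL-SAFE engaged
```

The key property is isolation: the watchdog shares no code path with the drive controller, so a compromised or crashed controller cannot silence it.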

Right now, the resistance to autonomous passenger vehicles comes from a loss of control. People are not cognizant of the many advantages of such systems: not just being able to travel without worrying about your state of intoxication, or being able to read or watch videos while driving (which people do now anyway), but being able to send the car out on errands or to pick up passengers autonomously, being able to engage in long-distance travel without concerns about driver fatigue, having faster reaction times and better judgment in inclement weather, et cetera. In fact, once autonomous vehicles become commonplace it is likely that they will be viewed as more of a service than a necessary capital expenditure, and in urban and suburban areas you’ll just order a ride from a car service which, not having the cost of a human driver and high liability insurance, will probably be cost-competitive with owning a private vehicle.

There are still applications where a human driver (augmented by driver-aid systems) makes sense, and there will always be a subculture of people who like to drive for recreation, but once the technology of autonomous vehicles becomes mature (which is still a fair way out) and the benefits become apparent to the public, I expect there will be a fairly rapid transition to the use of autonomous vehicles in urban and suburban areas, where cost and convenience will drive adoption. In rural areas, where autonomous vehicles may not be as available, and among performance-driving and off-road enthusiasts, there will be resistance where autonomous vehicles fail to meet their needs, but that represents only a small fraction of drivers.

Stranger

The issue of liability is one that will have to be addressed by law and business culture, but there is every reason to expect that autonomous vehicles will have to carry liability insurance just as drivers do on their cars now. However, with the expected reduction in actual accident rates for autonomous vehicles with much faster reaction times and no distractions, the liability limits can be far higher with much lower premiums, all of which can be built into a pricing or operating cost model. The issue of legal liability for a defect in a product is well established (if often contentious), as is a manufacturer’s duty of care to test for and address defects or weaknesses in a product.
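
Back-of-the-envelope arithmetic for how that pricing could shake out; every figure below is an assumption for illustration, not real actuarial data:

```python
# Toy premium model; all inputs are assumed figures for illustration.
human_accident_rate = 0.05          # assumed at-fault claims per vehicle-year
autonomous_accident_rate = 0.01     # assumed: 5x fewer at-fault claims
average_claim = 20_000              # assumed average payout per claim ($)
loading = 1.3                       # assumed insurer overhead/profit factor

def fair_premium(rate, claim, loading):
    # Actuarially fair premium: expected loss times the loading factor.
    return rate * claim * loading

print(f"human-driven: ${fair_premium(human_accident_rate, average_claim, loading):,.0f}/yr")
print(f"autonomous:   ${fair_premium(autonomous_accident_rate, average_claim, loading):,.0f}/yr")
# -> $1,300/yr vs. $260/yr under these assumptions: the premium falls in
#    direct proportion to the accident rate, even at higher liability limits.
```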

The notion that humans make better moral or ethical decisions about what to do in an impending crash is not borne out by any evidence; in fact, in the split second before an unavoidable accident occurs, people often freeze and make no decision whatsoever, or panic and take action that compounds the problem - to wit, someone faced with hitting a deer in the road hopelessly trying to steer out of the way, resulting in an oblique impact or rollover instead of a direct hit that would likely pitch the deer over the vehicle. People are not good decision makers in panic situations (hence the use of the word “panic”) unless they have been extensively conditioned to react in the appropriate way, whereas an autonomous driving system will follow the protocols for a situation regardless. In any case, an autonomous system with an array of sensors maintaining fully integrated situational awareness at all times, without distraction or fatigue, will likely be able to discriminate far more potentially hazardous situations and take action to avoid or prepare for them, vastly reducing accidents such as those involving pedestrians entering the roadway.

I don’t know what “tests” you are referring to, but since any practical testing would have to be based on the current, still-immature autonomous driving systems, it doesn’t really address what more mature systems can or will do. Complaining that quasi-autonomous systems such as the Tesla Autopilot or the recently pulled Uber driverless system can’t anticipate and address less-than-optimal human reactions is like complaining that Leeuwenhoek’s microscope isn’t able to image individual molecules.

Stranger

It might not be rational, but I’ve seen too many brilliant minds find ways to do the “virtually impossible” to trust a computer driving my car.

And unfortunately, since I wasn’t argued into that position, there is no way to argue me out of it. And that, I believe, is the reason for the popular resistance to self-driving cars.

Yes, but even that is being countered as we speak:

The key to the efforts at preventing the dangers you are talking about is called high-assurance software:

[quote=“Snarky_Kong, post:11, topic:783521”]

  1. How is this different than malfunctions that already occur?

[/quote]

Currently, after an accident, there’s a trial with experts on whether it was the driver’s fault (most often the case) or whether some part of the car malfunctioned, making it the manufacturer’s fault. The second case often leads to a recall, a change in legislation, or similar.

With driverless cars, one scenario the experts worry about is that the manufacturer of the hardware and the writer of the software can shift the blame endlessly between themselves - and that’s before even considering hackers (which indeed are already a problem). Furthermore, legislation needs to be changed, since current laws all assume a human driver who can be held culpable. So how would the law be written: is the owner of the car automatically guilty, because it’s assumed that the car was manufactured 100% safely? How do we investigate whether a hacker attack occurred? If it’s not the owner’s fault, how do we determine whether the hardware or the software (or both) is to blame, given how difficult it already is to troubleshoot a PC?

I don’t think I have an outdated idea, because my ideas come from the experts in robotics and ethics discussing these very scenarios.
Yes, at the moment, humans make these decisions unconsciously. But for a system, you have to sit down and decide in advance. And a not-insignificant number of humans don’t like to openly and deliberately put a value on one human life compared to others. Most of the time we don’t think about it, hoping never to get into such a situation.
Those who do have to assign values, like doctors or first responders doing triage, often find it stressful despite guidelines.

It will take me some time to search for them. And the problem wasn’t whether they have more or fewer accidents than humans. The problem was the promise at the beginning that there would be zero accidents, which turned out not to be true. Also, the rigidity with which they followed the rules, compared to human drivers who take a lot more factors into account, showed how complicated the programming still is.
Which is why using neural networks you can’t fully control or predict (also regarding point 2) is also problematic: if the system is fluid and unpredictable, how is it different from a human driver? For faster reactions we already have plenty of driver-assistance technology.

The only safe use I could see would be fixed, pre-programmed routes with no human drivers and no interactions with pedestrians and the like … which is basically a train: a fixed route with a fence around it.

I have a problem taking your argument in good faith if you dismiss, in one broad stroke, serious discussion by experts in robotics, AI, software, ethics, and law as “loss of control”.

There are also many people quite aware of what a boon it would be for elderly people, especially in rural areas without infrastructure: paying for public transport with human drivers is too expensive there, so a driverless shuttle service would be extremely helpful.

Also, among the young generation of city dwellers, about 40-50% don’t have a car, and many don’t even have a license, because a car is expensive, city driving (and parking) is a hassle, and public transport is good (at least in Europe); yet there are occasions - moving stuff, a trip on the weekend - where using a car would be helpful, but it isn’t possible without a driver.

That doesn’t mean that people don’t have justified serious concerns. You dismissing that out of hand gives me the impression you are not interested in a full, honest discussion of facts.

People fear the exotic but ignore the mundane. 3,200 people die every day around the world in car accidents, but if one plane out of millions of flights goes down every few years, killing 100 people, it has a million times the impact of the far, far more numerous car deaths. People often swear off flying after a notable plane crash and then drive instead, putting themselves in more danger.

We don’t notice coal plants dumping poisons into the air we breathe every minute of every day as a matter of their normal function, causing millions of early deaths every year; but once a decade there’s a nuclear accident that kills a handful of people, and people freak out and want nuclear power shut down in favor of far more dangerous methods of power generation.

A widespread adoption of automated cars might drive down the fatality rate 99% - a phenomenal accomplishment. But every single crash will be the subject of extensive news coverage, discussion, and fear-mongering. People won’t notice the 99% of fatalities we prevented, because those non-events are routine. The remaining 1% of deaths will be caused by something new and exotic. The backlash will prevent the adoption of fully automated networks (everything is much safer when all cars are automated) for at least a decade, possibly more, leading to a hundred times more fatalities, and everyone will feel safer for it.
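
Putting rough numbers on that last claim (both inputs are assumptions for illustration):

```python
# Toy arithmetic for the adoption-delay argument; all inputs are assumptions.
annual_deaths_today = 30_000   # approximate current US road fatalities
reduction = 0.99               # hypothetical: automation cuts deaths by 99%
delay_years = 10               # hypothetical backlash-driven delay

deaths_during_delay = annual_deaths_today * delay_years
deaths_with_automation = annual_deaths_today * (1 - reduction) * delay_years

print(f"deaths over a {delay_years}-year delay: {deaths_during_delay:,}")
print(f"same decade with automation:           {deaths_with_automation:,.0f}")
# -> 300,000 vs. 3,000 under these assumptions: the delay costs roughly
#    a hundred times more lives than the residual 1% ever would.
```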

If driverless cars are superior, it’s only a matter of time before manual driving is outlawed. And a lot of people like to drive for pleasure. And, no, they don’t want to be restricted to some go-cart style park.

There’s also the moral-priorities thing. You’re trusting an AI to make decisions that a human previously would make. While the AI may have better reaction times, that doesn’t mean it would prioritize human life properly. Getting that handled by AI is a hard problem (in the computer-algorithm sense). The AI will have to make this decision before the human has a chance to provide input - if the human is even watching in the first place, and why would they be, when they have an AI chauffeur? People already don’t watch when other people drive them.

And, yes, hacking is an issue. Because there’s no way these cars won’t be getting automatic updates to improve the technology, as it clearly isn’t going to roll out 100% perfect. And that sort of thing can be hacked. Heck, updates are a problem without hacking, if you consider how updates are handled these days. Things go wrong. As much care as is being used now, people will get complacent over time, assuming nothing can go wrong. It’s a human constant.

Sure, hacking is currently a problem with OnStar, but that’s the reason I would never get a car with that installed. Just like I avoid any “Internet of Things” object. But, again, the fear with driverless cars is that, for them to have maximum benefit, you have to get the less predictable humans off the road fairly quickly. You have to outlaw human drivers.

Plus there’s just skepticism that the technology will really be ready when it rolls out. Is it really going to be able to drive on back country roads? It definitely isn’t now, but people keep acting like the autos will come quite soon.

The only way I see it working out is if we stop focusing on “driverless cars” and switch to “computer assisted driving cars.” Let the humans drive, but have the computer assist the driver once it realizes what the driver wants to do. And let the driver easily and instinctually override that decision–as in, they can put a little pressure on the steering wheel to tell it “No, don’t do that.”

And put out laws specifically to prevent the whole “outlawing human drivers” aspect, to put friction in the process.

It is annoying when you have people who put blind faith in a technology based on the numbers and no other factors. That’s exactly the type of mistake people are worried about. We as humans have to consider that sometimes the strictly better numbers are not the correct moral option.

Morality isn’t that simple.