Why the Popular Resistance to Driverless Cars?

The problem is not just what legislation would need to look like: it’s that development and marketing hype are advancing while legislatures are doing nothing to address them. Where are the expert panels tasked with drafting a new law within two years? What will stop a car manufacturer a few years from now from selling a car with promises (and buried sub-clauses absolving itself of all culpability), since the current law doesn’t address this?

I didn’t say it was. But the fact is that people get uncomfortable when confronted with deciding which life is more valuable. When buying a normal car, people don’t have to think about it. When buying a driverless car, they have to wonder what algorithm is programmed into it (most likely, “protect the life of the driver first”). This will make the other traffic participants uneasy.

The problem isn’t whether accidents are reduced by 99% compared to today. It’s that the public is promised zero accidents because computers are so much better - when people know from daily experience that computers can and do fail, and when experts in accident theory will tell you that accidents can never be avoided entirely, even with automated systems. To believe otherwise shows a lack of knowledge of both accident theory and computers.

No, it’s discussing the current state of driverless cars in order to draft legislation about the software that goes into the next generation. We can’t very well draft laws about the next generation because we don’t know what it will be capable of. We know from similar problems that, for example, AI has been promised to be just around the corner since the 1950s, yet robots still have trouble walking on rough terrain. The OP and others are envisioning perfect driverless cars, which are nowhere near possible, given the problems of the current generation and the inherent problems of any supposedly perfect technology.

And at some point, driverless cars will get out on public roads - as they are already doing - but it won’t be the promised perfect generation. And the laws won’t be ready, because the cars aren’t perfect yet; but they can’t learn without real life. So what then?

You haven’t introduced any “discussion by experts in robotics, AI, software, ethics, law” with your prior post. You speculated on the distribution of liability (a consideration already well addressed by liability law), speculated about some hypothetical conundrum without applying a similar circumstance to the credible response of a human driver, and alluded to unspecified “tests” which, even if they exist, apply only to the currently primitive state of the art of autonomous driving systems and not to more robust future systems developed from experience in testing and simulation of real-world situations - completely ignoring the fact that autonomous systems will have vastly more situational awareness and constant attention than even the best human driver is able to apply. Don’t piss at me about not supporting “a full, honest discussion of the facts” when you want to couch the discussion in vague and unsupported terms.

Stranger

Can we at least wait until a real “Driverless Car” exists before saying how it will act, or the level of effort needed to “hack” (old word: compromise) it?

Maybe the control “computer” is in a Faraday cage impenetrable by any radio frequency. What about the Lidar unit on the roof? Could a kid with a mirror blind it? Who wants to be first to find out how that works in the real world? How about the braking system? Is it all “drive by wire”? How about inducing a current in those wires with a large electromagnet?
Until there are at least 10,000 of these things running around the US (how many Teslas have been built? How many have you seen on the road?), nobody but the very first owners will know enough about how all those wonderful “it will solve all our problems” ideas of current proponents actually get implemented - if they get implemented at all.

Or, what Stranger said…

I like driving. I enjoy being behind the wheel. Travel time goes by swiftly.

I hate being a passenger in a vehicle. Travel time drags.

YMMV.

Yes, I didn’t cite the different articles I read at very different times in the past, because it would take me hours to find them again. But then, you didn’t give cites either.

And where’s your support that at some unspecified point in time autonomous systems will be vastly superior, given that they currently have big problems coping with just driving on straight roads in good weather with clear markings?

Yes, human drivers are currently not held to the same standard. But human drivers are currently the only alternative to problematic driverless cars. The perfect driverless cars you promise are not here.

I didn’t piss at you; I pointed out that your blanket dismissal of all concerns as “loss of control” points toward a problematic attitude. If you believe that one day perfect driverless cars will be better than humans, then we can resume the debate at that point. (I also fail to see how your promises of perfect systems some day are less vague or better supported than my collation of different articles that I didn’t list individually.)

If, however, you want to know why people worry now, the problems of the current cars and the ethical problems raised by the less-than-perfect current generation are relevant, so dismissing them with promises is not helpful for an honest discussion. (And it lets me know whether it’s worth trying to dig up the articles or not. After all, the experts are discussing the current generation.)

And how do we deal with the laws in the meantime? Currently, the law requires a human driver who can take control (and still more than one car has crashed). In order to learn, and in order to prove there will be no accidents, proponents will want a truly driverless car - without any interference. (Human student drivers want a special car with an instructor’s brake pedal for the first few hours, but they need a normal car later, because if they depend on the instructor interfering at the last moment, they will not pay enough attention. Presumably, learning software has a similar problem.)

So unapproved cars would be driving thousands of hours and miles on normal roads to prove they are safe. That might be a problem.

No, for me to be happy in an autonomous car, it doesn’t need to be better than us, it needs to be better than me.

Car crashes aren’t distributed randomly. A substantial percentage of fatal crashes come from reckless behavior: drunk driving, extreme speeding, racing, highly distracted driving, driving while senile. I don’t pretend that I’m the world’s best driver, but I don’t do those things. If you remove a small percent of the most irresponsible drivers, the fatality rate would go way down. The risk level for a median driver (in terms of skill and prudence) is probably a lot lower than the risk for a mean driver.
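The median-versus-mean point can be illustrated with a toy calculation (the risk numbers below are made up purely for illustration): when a small fraction of drivers is far riskier than the rest, the mean risk is pulled well above the median.

```python
# Hypothetical per-driver annual fatality risk, in arbitrary units:
# 95 typical drivers plus 5 highly reckless ones.
risks = [1.0] * 95 + [50.0] * 5

mean_risk = sum(risks) / len(risks)            # pulled up by the outliers
median_risk = sorted(risks)[len(risks) // 2]   # the "typical" driver

print(f"mean: {mean_risk:.2f}")    # 3.45
print(f"median: {median_risk:.2f}")  # 1.00
```

A driverless fleet that merely beats the mean (3.45 here) would still be more than three times riskier than the median driver in this sketch.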

If a driverless car comes along that has a slightly lower fatality rate than the total population of humans, it would be a step down in safety for me. I don’t want to be mandated to increase my risk in order to decrease the risk of other people.

I also am really skeptical that a truly autonomous car will exist any time soon. I’m really skeptical that they will be intelligent enough to deal with all sorts of unusual situations that come up. Can they deal with damaged roads, flooding, bad road markings, hand signals from police, etc.? Can a car decide that it’s worth swerving to avoid a huge chunk of rock in the road, but drive on through if it’s a cardboard box? Can it tell the difference between a child in the road and an animal and act accordingly? (Hopefully taking small risks to avoid the child but not taking risks to avoid an animal?) Will cars occasionally have glitches in their sensors and image-recognition algorithms and randomly drive off a cliff or into a wall?

The hacking issue concerns me a lot. Given how extremely common hacking is today, with lots of big companies being hacked, I’m skeptical that cars would be very secure, and even if they’re fairly secure and the risk is low, the consequences could be catastrophic. What if someone (foreign government, disgruntled Toyota employee, etc) were to hack every example of a popular car (or car-OS), and command several million cars to crash at high speed at the same time? The casualties could exceed those of one nuclear bomb.

Okay, quick search:
This Autonomes Fahren: Moral Machine - Gewissensfragen zu Leben und Tod - DER SPIEGEL is a discussion of different scenarios with the “Moral Machine”.

This Autonomes Fahren: Horrorszenarien sind unwahrscheinlich - DER SPIEGEL is a talk with a programming expert. They emphasize, however, that autonomous cars are best on the highway - straight lines, predictable traffic, no crossings or pedestrians - but that the vast majority of accidents happen in cities.

This Das Auto ohne Lenkrad | Abendzeitung München is a discussion of the legal problems of autonomous cars.

This Ethische Prinzipien für autonome Autos is by a philosophy professor on the ethics, and how the German principle of “All lives are of equal value, there is never to be an accounting between lives” might give way to the Anglo-Saxon principle of choosing the lesser evil by weighing human lives against each other (the trolley problem).

You’ve lambasted me for not providing cites (for what? That technology will continue to improve in the future?), accused me of “dismiss[ing] serious discussion by experts in robotics, AI, software, ethics, law” even though you didn’t present any such discussion, claimed that I’m comparing “perfect driverless cars” to human drivers even though that is a term I have not used either explicitly or by implication, and then cited some unspecified “tests” as evidence that autonomous vehicles cannot (and presumably will not) ever be able to account for the vagaries of human drivers. In fact, you seem to be framing the entire discussion in vague, unsubstantiated, and confrontational terms for reasons I can only guess at and have little interest in addressing further.

Stranger

Rick Sanchez :

OP: you have posted a rigged and biased question here. You have purposely allowed only TWO possible answers. Either you are right, or everyone else is defective.

If you are actually going to ask an honest question and have a truly useful debate about anything, you really need to put more effort in to framing the discussion equitably and fairly.

Worst of all in my thinking, is that you want everyone to choose between refusing the new thing altogether, or to accept a lot of very real shortcomings of the thing.

I am a historian and a repair technician. From that dual point of view, I choose to discard both of the choices you are offering, and instead declare that we need to continue working on the possibility of such cars, and make sure that all worries are addressed going forward.

Finally, I have already witnessed technological marvels of their time, which many thought would be THE solution to various problems in our world, eventually be superseded by something else entirely, before the anticipated benefits even began to show up. Airplanes. Robots. Segways. Moving sidewalks. High-speed monorails. Etc.

My computer/internet glitches up all the time. I don’t want to be in the oncoming lane when a driver-less car glitches.

Hackers always go after the weakest link in a system first. Do you know how most computer hacks work? The hacker hacks a human, because humans are far easier to hack than computers are.

The fact that people even ask this question is proof that computer-driven cars are superior to human-driven ones. The right answer is obviously none of the above: The right answer is to hit none of the humans. You only think the question is an important one because, to a human, hitting nobody at all is really hard. But to a computer, it’s easy.

I agree with this. And not because I’m worried the car will crash. If mainstream media companies can convince Google to block their competitors in Youtube’s restricted mode while their own channels go unmolested, I don’t want Google having the final say on where I am and am not allowed to drive.

“Open the pod bay doors, HAL.” Or did these people not see “Terminator”?

Well, if you had seen the very long ASU Origins talk about Artificial Intelligence (where I got pointed to the work being done with high-assurance software) you would know that yes they did, and yes, they know about the hacking and that there are ways to deal with it.

One note : I think most of the ethical dilemmas are made up by philosophers wanting attention.

Here’s a practical operating law for an autonomous car: protect the occupant from collisions. When a collision is unavoidable, choose the collision with the least energy at impact with the outside world. The actual software models the possible future states the car could end up in, based on permutations of the available control inputs.

The car will not know or care what it is choosing to hit (just, based on LIDAR and approximate matching with previously seen objects through the cameras, whether the object is solid and approximately how solid it is). If there’s a school bus next to an 18-wheeler, and the school bus is slightly farther away (less energy at impact), it’s going to choose the school bus. This, to me, is a solid operating law, and in most situations it’s going to result in the fewest deaths. It certainly is going to protect the occupant who paid for the car above all else, and in most cases, since collisions are two-sided, a lower-energy collision is better for other people as well.

Of course, these unavoidable collisions would be uncommon, but when it gets in that situation, that’s the “machine ethics” that matters.
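The "least energy at impact" rule above can be sketched in a few lines. This is only an illustration of the selection step, not a real planner; the class name, fields, and masses/speeds are all made up, and a real system would estimate them from sensor data.

```python
from dataclasses import dataclass

@dataclass
class CollisionOption:
    label: str
    closing_speed: float   # predicted relative speed at impact, m/s
    effective_mass: float  # mass involved in the impact, kg

def impact_energy(opt: CollisionOption) -> float:
    # Kinetic energy dissipated at impact: E = 1/2 * m * v^2
    return 0.5 * opt.effective_mass * opt.closing_speed ** 2

def choose_collision(options: list[CollisionOption]) -> CollisionOption:
    # When no collision-free trajectory exists, pick the lowest-energy impact.
    return min(options, key=impact_energy)

options = [
    CollisionOption("18-wheeler", closing_speed=15.0, effective_mass=1500.0),
    CollisionOption("school bus", closing_speed=10.0, effective_mass=1500.0),
]
print(choose_collision(options).label)  # "school bus": lower closing speed
```

Note that because energy grows with the square of speed, the rule strongly prefers the slower impact even when the obstacles are otherwise similar - which is exactly why the slightly farther-away object wins in the example above.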

Ha! :smiley:

I had to laugh in agreement because I also remember that Lawrence Krauss (who hosted the AI talk) also thinks like that about philosophers - especially the ones who ignore what science and researchers are currently finding.

If it’s easy, why do they keep having problems? Uber’s self-driving cars couldn’t make safe right turns with pedestrians or cyclists present.

Uber’s self-driving cars also average less than one mile per human intervention.

Tesla’s autopilot had problems with salt lines prior to a snow in the mid-Atlantic.

It may someday be easy for a computer, but not yet.

The problems you mention are not caused by hackers. Hacking isn’t a magical art. It is only possible if security flaws exist in the target system, you have a means of communicating with that system, and your messages can change the behavior of that system. So you *could* make autonomous cars essentially hacker-proof through air-gap isolation. That would stop anyone without physical access - and you can sabotage a car today (or just attach a bomb to it) quite easily if you have access to it, but few people worry about that.

Realistically, instead of air-gap isolation, you would isolate the key systems that plan the car’s movement and avoid obstacles, and then use formally proven software - like that mentioned above - in communication processors that filter any messages sent to the car’s computer system from the side that is connected to the network. Obviously, the car would need mapping data and data transmitted by other, possibly untrustworthy, vehicles. So you would need to filter these messages to ensure they are valid and do not contain any rule-violating components, such as excessive length that could cause a buffer overflow.
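The filtering idea can be sketched as a simple validity gate: the network-facing side forwards a message to the driving computer only if it passes strict checks. The message format, field names, allowed types, and the size limit here are all hypothetical.

```python
# Hypothetical message schema: {"type": str, "payload": bytes}
MAX_PAYLOAD = 256  # bytes; reject anything longer to rule out overflows
ALLOWED_TYPES = {"position", "speed", "map_tile"}

def filter_message(msg: dict) -> bool:
    """Return True only if the message is safe to forward inward."""
    # Reject unknown message types outright.
    if msg.get("type") not in ALLOWED_TYPES:
        return False
    # Enforce a hard length limit so a malformed payload can never
    # overrun a fixed-size buffer on the receiving side.
    payload = msg.get("payload", b"")
    if not isinstance(payload, bytes) or len(payload) > MAX_PAYLOAD:
        return False
    return True

print(filter_message({"type": "speed", "payload": b"\x00\x10"}))  # True
print(filter_message({"type": "speed", "payload": b"x" * 1000}))  # False
print(filter_message({"type": "exploit", "payload": b""}))        # False
```

The point of putting this gate in a separate, formally verified communication processor is that even if the check has to reject legitimate traffic occasionally, nothing that violates the rules ever reaches the planning computer.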

Another trick is to use processors that isolate data from code. This prevents a buffer overflow from even being possible. Commonly used embedded processors, found in everything from motor controllers to missiles, have this feature.