Why the Popular Resistance to Driverless Cars?

And the hundreds of drivers and their passengers who drown when it turns out that what they thought was a 4-inch puddle was really a two-foot-deep torrent that sweeps their vehicle away? Are they still feeling pretty good about the experience? Most flood deaths are people who drown in their vehicles.

Thing is - we’re still really, really early in the cycle of testing driverless cars.

Have a look at the reporting back to the State of California DMV from the companies that are testing.

Google is doing a lot of miles, with a reduction to “only” 130 or so driver takeovers.

Tesla - I’m amazed they did so few miles (500 or so, in 4 cars) yet racked up a massive list of incidents in those miles (200?).

Bosch reported thousands of disengagements.

My point is - sure, we’ll see driver aids that let you take your hands off the wheel on freeways and long straight roads, but cars without steering wheels? That’s still a long way off.

more complex interactions between automobiles and pedestrians. Stranger, it’s not often that a technical description makes me laugh. Notwithstanding the typo. :wink:

I do think that while technically sound, what you suggest is going to be practically difficult to implement. Even if you start in Europe and N America, the fact is that places like the subcontinent and the Far East have large absolute numbers of middle- and upper-middle-class people who can also watch YouTube and learn about the latest tech. And these guys are going to want the new technology yesterday, and have the money to get it. Which executive is going to be able to resist servicing three-fifths of the planet’s population? It’s not the 1960s, when they were dirt poor and could be ignored. No matter if his technical people tell him that it’s not ready to be rolled out in those countries.

It’ll just take the first few accidents to convince everyone that the technology is flawed, and that would have repercussions worldwide, possibly stalling it.

And FME, pedestrians are not that much of an issue, you just remain aware of them. Motorcyclists on the other hand…

On the other hand, this might spur investments in infrastructure in these countries, like they did in the telecommunications sector.

:smack:
I forgot, us darkies are dumb dumbs. :rolleyes:

I think you’re expressing my point: 95% of humans think that they are above average in intellect, skill, moral superiority, etc., and will often make objectively poor decisions based on their own unrealistic assumptions about their capabilities. Personally, I think that driverless cars will have a far, far better objective understanding of their capabilities and limitations than the vast majority of drivers.

But I’m also reasonably confident that we should not have too much confidence in driverless cars for at least 5, maybe 10, possibly more years. I think Elon Musk and his ilk are full of shit when they get people hyped up about this technology being literally around the corner.

Huh?

WTF?

To make it clearer, you seem to think that only human intelligences can deal with crowded traffic and pedestrians.
A comment in your last post though does illustrate how the resistance to autonomous vehicles will play out, virtually no matter how good they become.

No matter how good the technology becomes, there will be accidents. Even if they occur at some tiny fraction of the rate caused by non-impaired humans, there will be a first few accidents that are determined to be the “fault” of the autonomous vehicle.

A new technology will not be held by those resistant to it to the standard of being merely better than what currently exists, or even many times better; it will be condemned by them if it is not perfect. Reporting rates of accidents and traffic fatalities per total vehicle miles driven that demonstrate superiority won’t be enough: “someone died and the vehicle was at fault” will be a strong emotional argument.

(ETA: I see someone playing the race card above. Let me be clear, the driving here in China is terrible because the test is fairly easy and the culture is still too accepting of dangerous behaviour (as until recently we still were in the West). It’s not genetics. :rolleyes:)

Agreed, although there are a number of difficulties with sharing the road with such terrible drivers:

  1. The self-drive cars will be involved in more accidents no matter how safely they drive. People will plough into the back of them at traffic lights for example. This may affect uptake in safer countries if nothing else.

  2. There are a lot of “de facto” rules / “cheating” (can’t remember the proper expression for this) in both these countries. If the software includes these rules, it opens up the manufacturer to lawsuits. If it does not, it may confuse other drivers.

Not saying this won’t be the first place, just that there are pros and cons.

Also realize that some features inherent in autonomous vehicles, such as collision avoidance and advanced navigation, will make their way into manually operated vehicles as driver assistance, further reducing accidents even for people who continue to drive manually. Accidents will still occur, of course, but the rate of accidents will drop dramatically, just as a comparison of accident rates between highly automated and assisted commercial aircraft and small private prop and jet planes with minimal instrumentation shows dramatic differences.

Stranger

With aircraft electronics, you can charge millions for an airliner cockpit’s worth of avionics. But a bit of googling and a napkin estimate says there are ~5,000 (I’m rounding up a little) large airliners total. That’s all, including plenty of old models with 30-year-old electronics that have never been updated.

260 million passenger cars in the USA, averaging 11.6 years old. If there are just as many in all of Europe, that’s 520 million: your market. It’s 104,000 times larger.

If $1 million in revenue per airliner sold goes to the firm that actually does the engineering work, so that it can continue developing reliable aircraft autopilots and cockpit glass, then a license fee of just $10 per car would generate the equivalent revenue in the long run. I would expect the license fees to be more on the order of $3,000 to $12,000 paid over the vehicle’s lifetime. (I think a subscription model makes sense, and you will have to pay the manufacturer for the liability coverage as well.)
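
As a sanity check on that napkin math (all the figures here are the rough estimates from the post, not real market data):

```python
# Napkin check: aviation-equivalent revenue spread across the car fleet.
AIRLINERS = 5_000             # rough count of large airliners in service
REVENUE_PER_AIRLINER = 1e6    # hypothetical $1M engineering revenue per airframe
CARS = 520_000_000            # ~260M US cars, doubled to include Europe

aviation_revenue = AIRLINERS * REVENUE_PER_AIRLINER
fee_per_car = aviation_revenue / CARS

print(f"Equivalent per-car license fee: ${fee_per_car:.2f}")  # ≈ $9.62
```

So "$10 per car" is in the right ballpark, and the $3,000-$12,000 figure would be several hundred times the revenue the whole airliner avionics business sees.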

That’s the reason this problem is being solved. There is a colossal amount of money there. This also means there is the money to develop sophisticated levels of redundancy.

You can’t afford the quality of electronics that go into an aircraft, but you can have more, cheaper boards: redundant computer systems, similar to what they use in rockets. I’m thinking you’re going to need dual electrical buses, dual servos, and redundant sensor coverage, with half the sensors wired to the second electrical bus.
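
The rocket-style redundancy idea boils down to majority voting across cheap duplicated sensors. A minimal sketch (sensor values and the tolerance are illustrative, not from any real system):

```python
def vote(readings, tolerance=0.5):
    """2-out-of-3 voting: return the median of three redundant readings,
    plus the indices of any sensor that disagrees with the median by
    more than `tolerance` (a candidate for being marked failed)."""
    a, b, c = sorted(readings)
    median = b
    faulty = [i for i, r in enumerate(readings) if abs(r - median) > tolerance]
    return median, faulty

# Three redundant speed sensors; sensor 2 has drifted badly.
speed, bad = vote([30.1, 30.2, 45.0])
print(speed, bad)  # 30.2 [2]
```

A single bad board or sensor gets outvoted instead of steering the car, which is the whole point of paying for three cheap units instead of one expensive one.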

Hmm, a car company (hi, VW) was recently found to have put software into their cars that fudged their emissions results. Why? Because it was easier and cheaper than actually building cars that met the required standards. What would possibly lead me to believe that the same industry wouldn’t do something similar when it comes to a self-driving car? “Hey, we found a way to save billions of dollars, and it will only slightly increase the accident rate. No one will ever know.”

That’s fine, to a point. If the car has 1/10 the total accidents but isn’t at 1/15 because they didn’t pump billions more into improving it, that’s still an immense improvement. Tesla is openly making a compromise like that; they are not making it a secret. Their autonomy systems lack lidar sensors (because they are too expensive), and stereo depth perception, ultrasonic sensors, and radar are inherently not as good in certain situations. There is no way to make them as good as lidar, and thus there will be additional accidents in autonomous Teslas that would not occur in a model equipped with lidar.

That doesn’t mean we shouldn’t adopt them - again, it doesn’t need to be perfect, just lots and lots better than humans, who don’t have lidar either.

I hope there are published accident rates per million miles, so that the safer models are known; it should really be disclosed on the window sticker next to the mileage.
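
The normalization a window sticker could carry is trivial; the counts below are made up purely for illustration:

```python
def accidents_per_million_miles(accidents, total_miles):
    """Accident rate normalized to a per-million-miles basis, the same
    denominator regulators use for fatality statistics."""
    return accidents / (total_miles / 1_000_000)

# Hypothetical fleet: 42 reportable accidents over 180M fleet miles.
rate = accidents_per_million_miles(42, 180_000_000)
print(f"{rate:.3f} accidents per million miles")  # 0.233
```

The hard part isn’t the arithmetic, it’s getting manufacturers to report comparable numerators (what counts as a "reportable accident") in the first place.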

Um, how?? AFAIK, the only incoming signals my car (2016 Civic, if it matters) recognizes are AM and FM radio, and my car’s sound system is separate from its mechanical systems.

But that would become an issue if V2V communication is part of the autonomous vehicle safety package. Autonomous vehicles, yes; V2V, no.

The difference is, VW was using software to do an end run around having to put in a lot more emissions-control hardware. And software’s a lot cheaper.

But with autonomous vehicles, it’ll just be software. If the existing software produces a given level of safety and reliability, they won’t save a penny by switching to software that fails to do that.

Once self-driving software is superior to human brainware, the incentives are all on the side of continuous improvement. The hard part is engineering the software to be able to recognize and respond to all the stuff that we do without a second thought; that’s the part that makes me wonder whether self-driving cars will get here before I have to turn in my keys.

Perhaps we can return this discussion to what I read as the intent of the OP, which I will attempt to reframe:

Assume as a hypothetical that autonomous vehicles are, on solid evidence, significantly safer than human-driven vehicles for, say, 99% of all drivers. Would there still be resistance to them, and some who would deny the evidence, just as there are those who prefer “alternative facts” for other items that go against what they want to believe? And if so, what would be the root causes of such resistance?

That’s not exactly correct. Software development can be expensive and fraught with delays too. And it’s not all software; it’s sensors and computing power. Surely you’ve encountered software that was buggy or wasn’t compatible with all devices. You seem to imply that eventually there’ll be some standardized self-driving car software that manufacturers will be able to download into every new model. I’m not sure that is a safe bet.

Trust

I pointed out in another “car of the future” thread - there will be places where the car’s abilities are useless - the flooded culvert - the car can’t know that it is perfectly safe to cross - it floods like this every spring - and the human will have to drive it.

It is equally likely that a ‘normal’ situation turns bad - and the car (which would normally handle this time/place) cannot be trusted.
The Florida crash established that a straight and dry road can still kill you if neither the operator nor the car notices a white trailer against a clear, bright sky.

That car “thought” it was doing a fine job. An alert human would have known to disengage the “Autopilot” and brake.

That “it might not always work” thought will prevent instant acceptance of the technology.

It is a fair question. One of the open questions on driverless cars is who is legally responsible for the mistakes made by the car. If it is the company that could be civilly or criminally responsible, that would be a good incentive. I would like to believe that if a company knowingly cuts corners to save money and it kills people, a high-ranking corporate officer and maybe some subordinates would go to jail (Ha! I know I’m dreaming).

What keeps software companies in the safety industry from cutting corners right now? There have been some well-documented safety-critical system failures, and I can’t think of any that occurred due to purposeful negligence on the part of the developer.

That is not an open question, it has been conclusively answered. The manufacturer is always liable. Why You Shouldn’t Worry About Liability for Self-Driving Car Accidents - IEEE Spectrum

There is a serious issue this creates, and I do not know if it will be addressed. Self driving cars occupy a similar niche to vaccines. Even if the technology caps out at 1/10 the accident rate of a human driver (suppose you can’t make it any better without genuine machine sentience) - it would still clearly be worthwhile.

Vaccines are not 100% safe. They are mostly safe for most people, but there is a small, real risk of serious harm or death, especially for young children with faulty immune systems who may not survive their first vaccination. It’s not a very large chance, but the risk is there, and so to protect the manufacturers from otherwise limitless liability (a jury could theoretically award a verdict of a trillion dollars if a child dies), there’s a panel that awards fixed dollar amounts to someone injured by a vaccine.

That’s what autonomous cars need. There needs to be a fixed payout (indexed to inflation) if you get hurt or killed by one, assuming the manufacturer isn’t deliberately negligent. The reason for this is society benefits - even if the autonomous vehicles did kill 1/10 the people, that’s a 90% reduction.

This has other consequences, though. One thing you can do is reduce the risk. You could make the consumer-model autonomous cars refuse to depart unless the occupants are all in 4-point restraints, similar to those of racecar drivers. They could be programmed to be especially cautious around intersections, where T-bone crashes are frequently fatal, and to always obey the speed limit and stay away from cars driving erratically. On 4-lane highways with no divider, they could prefer the right-hand lane to reduce the chance of a head-on collision with cars from the other direction. They could simply refuse to even consider routes that take the autonomous vehicle onto winding mountain roads and other unsafe roadways - you would be required to take over. And so on.
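
Those restrictions amount to hard policy checks before the vehicle moves. A minimal sketch of the idea (the road classifications and function names here are invented for illustration, not from any real vehicle stack):

```python
# Road classes the hypothetical consumer model refuses to drive itself on.
UNSAFE_ROAD_TYPES = {"winding_mountain", "undivided_4_lane"}

def may_depart(occupants_restrained, route_road_types):
    """Refuse to depart unless every occupant is buckled in and the
    planned route avoids road classes the vehicle is not rated for."""
    if not all(occupants_restrained):
        return False, "occupant not in restraint"
    bad = UNSAFE_ROAD_TYPES & set(route_road_types)
    if bad:
        return False, f"route includes unsupported roads: {sorted(bad)}"
    return True, "ok"

print(may_depart([True, True], ["freeway", "surface_street"]))  # (True, 'ok')
print(may_depart([True, False], ["freeway"]))  # (False, 'occupant not in restraint')
```

The design point is that these are vetoes, not preferences: the planner never gets to trade them off against trip time, which is exactly what makes a fixed-payout liability scheme defensible.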

I never mentioned hacking, so I don’t know why you’re so eager to argue with me that what I mentioned was not caused by hacking. I was responding to the following claim by Chronos:

I suppose, but your claim is incorrect. The issue isn’t with the computer planning the movement; that’s easy. The problems are:

a. The computer doesn’t correctly integrate the reams of noisy data into a coherent, trustworthy world model. There will be noise: lidar sensor blips that indicate something right in front of the car that it is about to hit, leaves and rain and fog and odd lighting changing how a traffic sign looks, missing road dividers.

b. Software bugs. The computer’s model is correct, and the general calculations it uses on that model to guarantee it never hits anything are mostly correct, but there are certain combinations of data where a programming error causes it to make the wrong choice.
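
One common way to handle the phantom-blip problem in (a) is a persistence filter: the planner is only allowed to react to an obstacle that has been seen for several consecutive frames. This is a simplified sketch (grid cells and the frame threshold are illustrative assumptions):

```python
from collections import defaultdict

class PersistenceFilter:
    """Suppress one-frame sensor blips by requiring an obstacle to be
    detected in `min_hits` consecutive frames before it is reported."""

    def __init__(self, min_hits=3):
        self.min_hits = min_hits
        self.hits = defaultdict(int)  # grid cell -> consecutive detections

    def update(self, detected_cells):
        """Feed one frame of detections; return confirmed obstacles."""
        for cell in list(self.hits):
            if cell not in detected_cells:
                del self.hits[cell]  # detection vanished: reset its count
        for cell in detected_cells:
            self.hits[cell] += 1
        return {c for c, n in self.hits.items() if n >= self.min_hits}

f = PersistenceFilter(min_hits=3)
print(f.update({(5, 2)}))  # set()     -- single blip, ignored
print(f.update({(5, 2)}))  # set()
print(f.update({(5, 2)}))  # {(5, 2)}  -- persistent, now treated as real
```

The trade-off is exactly the one the post describes: the filter buys robustness to noise at the cost of a few frames of reaction latency to real obstacles, and choosing `min_hits` wrong in either direction is one of those subtle bugs in (b).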