Self-driving cars are still decades away

That’s one requirement. However, in San Francisco, one self-driving vehicle stopped in the street, preventing an ambulance from reaching a patient. I think there was another incident (perhaps more than one?) in which several self-driving cars clustered on a street, blocking traffic. So no one was killed, but the self-driving cars still caused problems of a kind that most humans would not.

As if hundreds of idiot drivers don’t do things like this every single day…

At least in those cases, you can yell at the idiot driver to get out of the way of the ambulance. Who do you yell at when a self-driving car stops in traffic?

True.

Although when two idiot drivers collide, that often creates a mess that can’t be moved and so blocks traffic even though the crashed drivers would love to be able to move on. Odds are overwhelming that if either or both of those vehicles had been computer-driven, the crash would not have occurred. And so the blockage would never have occurred either.


As has been said umpteen times in the thread by many, many posters using different words, but always the same ideas:

  • We as individuals and society as a whole will pay undue attention to the obvious ways in which self-driving cars make different mistakes than humans would. Despite the computers making fewer mistakes overall.

  • We as individuals and society as a whole will pay almost no attention to the not-obvious ways in which self-driving cars avoid many of the highly consequential mistakes humans make every day in all their teeming millions.

You tell me! Here I am living in a normal midwestern city and I can’t seem to figure out how to hail a robo taxi. Guess you’ve gotta lump me in with the Himalayan hermits.

You’ve got the technology and infrastructure you need. You may lack local entrepreneurs, local political will, or commercial viability in your specific community. But the technology and infrastructure are probably adequate in your normal midwestern city.

I don’t understand - are you people actually FOR the development of truly “full self-driving cars” on public roads? Because that last part is what makes it an awful idea. It’s the ultimate in “testing in production”.

All the people saying “sure the robot ran a red light / hit a pedestrian or motorcyclist / stopped for no obvious reason and caused a pile-up behind it”, etc., “but on average by mile it’s far safer than your average human driver!” are entirely missing the point.

The tech companies pushing this angle are framing it as an engineering problem: as long as we reduce accidents and increase road safety (by various metrics), it’s a net gain for society, and these are necessary road bumps along the way.

What is being lost is responsibility for the problems. It’s not about “how many and how badly” but “how, and who’s responsible”.

If a pedestrian or biker is hit by a driver, for example, the driver is at fault.

If a car in “self-driving” mode hits the same person in the same way, … well, right now, the “nominal human” driver is also at fault, left high and dry for letting the car drive itself and not paying attention with hands ready on the wheel.

Which is kind of why these people put it in that mode in the first place, right?

So, that’s nice for Tesla or whatever car/tech company: you can buy our car, test our beta software with tons of liability on the line (you’re on the hook for that, BTW), but give us the accident data so we can make the product better next time around.

And from the victim’s POV, I really don’t care that this “glitch” or “bug” or “untested corner case scenario” will be far less likely to happen in the next iteration, to someone else. It happened to ME. And if it was a case of something that a human would really just about never do, well I also don’t really care about “all those other scenarios where this same tech was way safer than humans over a large data sample!”, because MY data sample is the one I’m living.

This is a good point, but it goes both ways. There is somewhere a victim who was killed because the driver didn’t engage the self-driving system and the driver did something that self-driving cars almost never do. Never mind all the other cases where humans drive more safely than automated cars, in that victim’s case the human driver killed the victim in a way automation never would. Being killed by a human is not better than being killed by automation.

Well, a human made the automation. So they’re being killed by a human either way, just less directly.

Not for the car company. They are not liable for the human. They ARE liable for the automation, potentially.

This goes to my point about a lack of rational minds in the populace. When does the software exit beta and become production? I think it’s pretty obvious that when robots are statistically safer than human drivers, we are in production.
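To put a shape on what “statistically safer” would even mean, here’s a rough back-of-envelope sketch. The ~1.3 deaths per 100 million vehicle-miles is about the right order of magnitude for US human drivers; the AV fleet figures are purely hypothetical placeholders:

```python
# Rough sketch of the "statistically safer than humans" threshold.
# All AV numbers here are hypothetical placeholders, not real fleet data.
HUMAN_RATE = 1.3             # approx. US human-driver deaths per 100M vehicle-miles
MILES_PER_UNIT = 100_000_000

av_miles = 50_000_000        # hypothetical AV fleet mileage
av_deaths = 0                # hypothetical outcome over those miles

av_rate = av_deaths / (av_miles / MILES_PER_UNIT)

print(f"Human baseline:   {HUMAN_RATE:.2f} deaths per 100M miles")
print(f"Hypothetical AVs: {av_rate:.2f} deaths per 100M miles")

# "Production," in the sense above, would mean av_rate staying below HUMAN_RATE
# once the fleet has driven enough miles that the gap isn't just statistical noise.
```

In other words, it’s a comparison of rates over enough miles to be meaningful, not a tally of headlines.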

Now, maybe you’re arguing that manufacturers don’t have enough skin in the game to keep making improvements beyond the bare minimum to maintain that “better than humans” status. Fair enough, that’s certainly an area for policymaking, litigation and insurance regulation. But I’m pretty sure insurance companies have a strong motivation to keep downward pressure on error rates.

But if your concern is really “who is to blame on a case-by-case basis,” and that’s what causes us to live with more mayhem and death than necessary, I believe you. Society is often stupid in this way.

The Australians have an interesting approach to motor vehicle accidents and liability. I may not have it exactly right, but as I understand it, it’s simple:

The government pays all medical expenses and car repair expenses for any accident. They tax the public and the vehicle manufacturers to pay for this. “Fault” simply does not exist. The scenario where you have an accident and the broke, uninsured counterparty never pays simply doesn’t exist either.

Imagine we started that system in the USA for self-driving, and only self-driving, cars. Manufacturers pay into a fund knowing the fund will pay out 100% on every accident with no legal BS trying to evade fault. The car crashes, the fund pays. It aligns the harm-reduction incentives with those causing the harm.

This is my complaint. They are doing their best to wash their hands of liability, particularly Tesla.

The question isn’t really if automation will be (or is now) safer - we all know the answer to that will be yes, and might even be yes today depending on how you evaluate the claim. But who has responsibility, liability, and consent during the testing phase? If AVs are truly already safer, then shouldn’t Tesla be prepared to accept liability when it doesn’t work?

When a new product is being developed, typically the company is responsible for the testing. If something goes wrong, they are liable. And people give consent to participate in the testing. That isn’t happening with most of the autonomous vehicle testing. With Tesla FSD, users have responsibility for any failures of the automation because they are supposed to be in control at all times. Pedestrians and other drivers are participating in the testing program without consent.

Testing this like other new products would be more expensive and take longer, and it might cut into Tesla’s profit. And as something that is in the public interest, some part of the testing should be funded by the government. But right now, the risk/reward is too heavily weighted toward the manufacturers.

Ehh, I’d only be behind that if the Government open-sourced it. But I’m a zealot that way.

How much longer? A thousand deaths? 30,000? 100,000?

We have a… thing… let’s call it Thing 1 that’s been killing about 100 Americans a day since 1960. Companies are testing another… thing… Thing 2, that has the potential to cut those deaths by a significant percentage, however, while testing it and improving it, Thing 2 may add a fraction of a percentage to the deaths already caused by Thing 1.

I’m supposed to be concerned about that fraction of a percentage?
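To put rough numbers on that (back-of-envelope, using the same round figures; the 0.5% is just an assumed stand-in for “a fraction of a percentage,” not a measured rate):

```python
# Back-of-envelope scale check using the round numbers above.
thing1_per_day = 100                      # ~100 US road deaths per day
thing1_per_year = thing1_per_day * 365    # ~36,500 per year

added_fraction = 0.005                    # assume "a fraction of a percentage" = 0.5%
thing2_added_per_year = thing1_per_year * added_fraction

print(thing1_per_year)         # 36500
print(thing2_added_per_year)   # 182.5
```

Tens of thousands of deaths a year from Thing 1 versus, under that assumption, a couple hundred from testing Thing 2.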

Could that percentage of Thing 2 be even lower with the right process and incentives? Then yes, we should be concerned about it.

Most cars already have automation. The driver is supposed to be in control all of the time, even when using those automations. Cruise control has existed since the 40s. Drivers have always been required to pay attention, and have always been the ones in control.

I fail to see why when my car with a dumb cruise control plows into something it is my fault, but when my car with a spicy cruise control plows into something the fault needs to be shared between me and the manufacturer.

Yes, there are lots of human factor issues around having a new kind of cruise control that slows down for 99.9% of things, and plows into 0.1% (instead of plowing into 100%). Something being hard does not absolve the driver of responsibility. There has always been highway hypnosis, distractions, and other human factors related to driving. Getting automation that avoids those human limitations is the ultimate goal, but it is still at a level where a human needs to remain in charge.

Anyone not able to be responsible for everything their car does has no business being behind the wheel.

And the Thing 2 deaths aren’t added on top of the Thing 1 deaths, because Thing 2 is displacing Thing 1. So now we have 90 Thing 1 deaths per day, and 1 Thing 2 death per day.

Or at least, that is how it is supposed to work. If it is not working that way, then Thing 2 does need to be reevaluated.

You could cut Thing 2 deaths in half for a full year and save fewer lives than are lost in a single morning rush hour.

This is identical to pumping the brakes on a vaccine rollout because of the completely valid risk that some small number of people will be sickened by the vaccine. The issue is that the status quo is causing multiple orders of magnitude more deaths than the testing.

Back when this thread started, automobile deaths were on the decline, but post Covid, drivers have become raging impatient assholes. We need to get people out from behind the wheel, the faster the better.

There is no evidence that self-driving cars are safer than humans. Self-driving cars may be safer than humans in the controlled environments in which they are allowed to drive: on dry roads, at low speeds, sometimes on pre-mapped roads.

We won’t know whether self-driving cars are safer than humans until they have to drive in all weather and road conditions, handle all sorts of road closures, unmarked hazards, poor visibility, etc.

In any event, even if they are statistically safer that doesn’t mean that there won’t be negligence or other errors that cause harm. And people will seek retribution or compensation. To whom do they turn? If people cause 30,000 accidents, that’s 30,000 people and their insurance companies who have to share the costs. But if no one is driving that will fall to the manufacturer, who would quickly be buried under lawsuits.

I am not a fan of no-fault insurance laws. I think if someone is reckless or otherwise at fault and hits someone else, they should pay and their insurance should go up accordingly. Removing the incentives to be a safe driver doesn’t make sense to me.

For many people, the risk of insurance increases can be a bigger disincentive to careless driving than traffic laws.

Just following the trend of people becoming raging impatient assholes.
:cry: