Dagnabbit. I searched this thread for that news, but forgot to look outside to see if it had been covered. Ninja’d – again.

Have you seen other drivers? They’re constantly trying to kill themselves and everyone else on the road.
Motorists sometimes fail in their responsibilities of care, but they are overwhelmingly not trying to commit murder and suicide on the road. Perhaps try an argument strong enough to stand on its own without needing to lean on that kind of wild hyperbole.

Big corporations can be sued, often for a lot of money (which they have). They actually do have a reputation and shareholders to answer to. Unlike that drunk driver with lapsed insurance.
Which big corporations are currently in jail for killing people? What offense were they convicted of, what was their sentence, and where were they incarcerated?

At best, we’re going to have long haul interstate truck routes that are completely autonomous
Lord, I hope not. Things happen in an instant at highway speeds, I don’t want some form of electronic technology driving a huge vehicle that weighs tons and tons.
The whole self driving thing is silly and isn’t ever going to work. Humans can’t be replaced w/ something like that. There is this fantasy that people have that robotic things are going to make us safer and improve our lives. That is an idea that is not reality.
It’s more like a nutty ’50s sci-fi/fantasy storyline. In real life, a robotic vehicle is going to end up killing scores of people when it malfunctions (and all mechanical/electronic things eventually malfunction) and mows down a lot of people on a street or drives you into a semi.

they are overwhelmingly not trying to commit murder and suicide on the road
They are not going out with the plan to rear-end somebody today, but they are going out with the plan to scroll some TikTok while they are stuck in traffic. Every day I look around and see drivers in the cars near me looking down at their phones.
Those are just the people not paying attention, and not even considering the other people who are driving too aggressively, not aggressively enough, at an improper speed for the conditions, intoxicated, dangerously sleepy, or medically unfit to drive.
Humans are terrible drivers. I still think that true self driving won’t happen without massive structural changes to transportation infrastructure, but there are lots of steps along the way that can greatly improve safety, and in many cases removing control from humans will improve safety.

Which big corporations are currently in jail for killing people? What offense were they convicted of, what was their sentence, and where were they incarcerated?
Nonsense like this suggests that you aren’t interested in serious discussion.
You can’t send a corporation to prison, but you can fine them, as well as imprison the individual people involved. Volkswagen was fined more than $30 billion over the Dieselgate scandal, whose excess emissions are estimated to have killed hundreds, and a few people went to prison. The punishment was enough to effect a change in behavior.

Nonsense like this suggests that you aren’t interested in serious discussion.
This type of bluster suggests you’re aware that your position is fundamentally weak, and the fact that you’ve now leaned on it twice suggests you’re keenly aware of its weakness.

You can’t send a corporation to prison, but you can fine them, as well as imprison the individual people involved. Volkswagen was fined more than $30 billion over the Dieselgate scandal, whose excess emissions are estimated to have killed hundreds, and a few people went to prison. The punishment was enough to effect a change in behavior.
So if my daughter is struck and killed by a faulty self-driving car operated by an inattentive driver, then my recourse is to wait a decade or so for regulation to catch up. Of course in the meantime I do have the option to liquidate my 401k in an attempt to extract damages out of a very large company, possessing an army of lawyers, backed by the richest man in the world (who also has vast messaging power by virtue of controlling one of the largest social media corporations).
Yeah, that sounds super-accountable, all right. Think I’d prefer the boring old criminal justice system rather than hoping legislators do their jobs (which they’ve heretofore shown little sign of doing).
There’s not a great deal of difference between your own gigantic faceless insurance company suing the other human driver’s gigantic faceless insurance company versus your own gigantic faceless insurance company suing the AI car’s manufacturer.
In either case you’re not personally doing the heavy lifting, they are. And in either case the results after costs will be pennies on the dollar, nothing compared to the value of a life damaged or destroyed, and delivered a decade later at best.
I personally would prefer a sort of anti-no-fault insurance system for AI cars. By law, if the AI car gets in an accident the AI car is presumed to have caused it and the manufacturer is automatically liable, period.
You’d still need your insurer to negotiate a settlement and enforce collection, but half the battle would be won before it began.

There’s not a great deal of difference between your own gigantic faceless insurance company suing the other human driver’s gigantic faceless insurance company versus your own gigantic faceless insurance company suing the AI car’s manufacturer.
People generally have liability caps on their insurance. That’s all you can get out of the insurance company; after that you have to go after the driver, who likely doesn’t have much.
Suing a company for negligence can get you huge settlements. Especially if punitive awards are made.

So if my daughter is struck and killed by a faulty self-driving car operated by an inattentive driver
Let’s look at three options:
- Your daughter is killed by a drunk driver. That (or an equivalent scenario) is the status quo for 38,000 people per year in the US alone.
- Your daughter is killed by a drunk driver ostensibly operating an L2 autonomous vehicle. You have the same options with the driver. But in addition, there is the possibility of suing the manufacturer, if it can be shown that they were somehow negligent. So it is no worse than option 1, and possibly much better.
- Your daughter is killed by an L5 autonomous vehicle with a drunk person as a passenger. The passenger is irrelevant since it is an L5 car expected to operate on its own. If it can be shown that there is a serious defect with the car, the manufacturer can be sued. But in any case, these accidents are far less likely since cars don’t get drunk.
Corporations have a lot more money than even individuals with insurance (and of course there’s no guarantee that people will have insurance).

Corporations have a lot more money than even individuals with insurance (and of course there’s no guarantee that people will have insurance).
The difference here is the function of the fee as justice and as a punitive deterrent. If my daughter is killed by a negligent driver, her life is irreplaceable. No amount of money can bring it back. Thus, anyone faced with a similar choice should face a similar irreplaceable loss (i.e. years of freedom from their own life).
Theoretically this deterrent ought to exist similarly for autonomous vehicles, but the deterrence function is weakened by some important factors:
- The driver is less careful because they believe the vehicle is taking care of safety. They may take risks that they ordinarily wouldn’t.
- The software is far from perfect; we know in its current state it’s highly flawed.
- The manufacturer’s motivation to perfect the car only extends to how much they think the deaths are going to cut into profits.
If we’re talking about some sort of fantasy where these cars are more reliable than human drivers, then this calculus changes. Theoretically they could surpass human drivers, one presumes, but that’s not demonstrably the case today, and again, manufacturers will only build the minimal quality needed to balance the profit equation, including penalties from lawsuits over deaths and injuries. This is just how most companies operate nowadays, but I particularly do not trust Musk’s Tesla to prioritize the well-being of the general public.
I guess this really comes down to which one believes is more likely to deter accidents - the sense of responsibility of an amoral corporation to whom human lives represent only a quantifiable lawsuit risk, or of an actual human who may lose years of their life from negligence. I do not sense the corporation will be adequately deterred.
I think it’s more likely that this will ultimately be determined by insurers. At some point they will make a determination whether autonomous vehicles will pay a lower, higher, or same premium for liability insurance, and then market forces should take care of it. But I hope that neither my family nor yours are the ones whose lives get expended while carmakers and insurers are figuring out exactly what’s needed to keep us safe.

If we’re talking about some sort of fantasy where these cars are more reliable than human drivers, then this calculus changes.
Of course that’s the assumption here. Again, more than 38 thousand people die per year in crashes in the US (actually it’s probably more than 45k now, due to a jump in the past couple of years). Almost all of those were preventable. They were caused by drunk driving, falling asleep, messing with phones, running red lights, getting distracted by kids, reckless driving, driving faster than the conditions warrant, and so on and so forth. An autonomous system can be worse than a typical human on basic driving tasks, but just not do all the stupid preventable crap, and end up causing far fewer fatalities and injuries.

The difference here is the function of the fee as justice and as a punitive deterrent.
Drunk drivers aren’t doing that kind of calculus. We impose immense penalties on them on top of the ones that naturally arise from causing an accident and yet they still do it.
Companies do perform that financial calculus. The morality I leave you to decide, but the fact is that we already live in a world where both companies and the government put a price tag on human lives, and then act in a way that minimizes cost.
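To make that concrete, here is a toy version of that calculus. Every number below is invented purely for illustration; real actuarial and regulatory models are vastly more involved:

```python
# Toy sketch of the cost-minimizing calculus described above.
# Every figure here is invented purely for illustration.

FIX_COST_PER_CAR = 150           # hypothetical cost to remedy a defect, per vehicle
FLEET_SIZE = 2_000_000           # hypothetical number of affected vehicles
EXPECTED_DEATHS_IF_UNFIXED = 40  # hypothetical projection
PAYOUT_PER_DEATH = 5_000_000     # hypothetical average settlement/verdict

cost_of_fix = FIX_COST_PER_CAR * FLEET_SIZE
cost_of_lawsuits = EXPECTED_DEATHS_IF_UNFIXED * PAYOUT_PER_DEATH

# A purely amoral cost-minimizer simply picks the smaller number.
if cost_of_fix < cost_of_lawsuits:
    print(f"Issue the fix: ${cost_of_fix:,} < ${cost_of_lawsuits:,}")
else:
    print(f"Eat the lawsuits: ${cost_of_lawsuits:,} <= ${cost_of_fix:,}")
```

With those made-up numbers, eating the lawsuits is cheaper, which is exactly why penalty size matters: a Dieselgate-scale fine flips the inequality.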

I think it’s more likely that this will ultimately be determined by insurers.
On that we agree. So far, cars with L2 features do not have exorbitant insurance rates.

But I hope that neither my family nor yours are the ones whose lives get expended while carmakers and insurers are figuring out exactly what’s needed to keep us safe.
There is no “keep”. The status quo is that we are not safe. It’s a dozen 9/11s every single year. I hope none of your loved ones are “expended” by a human driver because we decided to move too slowly on autonomous driving.

manufacturers will only build the minimal quality needed to balance the profit equation, including penalties from lawsuits over deaths and injuries.
How about safety as a marketing selling point? Having the safest driving AI should command additional sales and a price premium. I don’t know about you, but I’d definitely be willing to pay more for that, and it would weigh heavily on which car I’d buy.
As an example of corporate behavior that goes beyond the minimal quality needed to balance the profit equation, Volvo is prized as a particularly safe brand. It held the patent on the three-point safety belt and made it available to all car makers.

An autonomous system can be worse than a typical human on basic driving tasks, but just not do all the stupid preventable crap, and end up causing far fewer fatalities and injuries.
An autonomous system could, perhaps, be made safer with the right investments. I assume. But I don’t see any clear reason to expect those investments to happen. You can repeat “38,000 killed by drunk drivers” all you want, but what assurance can you offer that there won’t be 39,000 or 100,000 people killed by overreliance on flawed autonomous systems?
I don’t see any reason to say confidently one way or the other. I know that systems currently have a lot of flaws, and that new technology is improved by detecting mistakes and fixing them. How many people are you willing to kill to discover how to build a car that might or might not kill slightly fewer people than drunk drivers?

As an example of corporate behavior that goes beyond the minimal quality needed to balance the profit equation, Volvo is prized as a particularly safe brand. It held the patent on the three-point safety belt and made it available to all car makers.
It is, and you’ll notice that Volvos do not noticeably outnumber other cars on the road. The niche for purely self-protective safety is there but it’s obviously not overwhelming.
Do you notice what actually does predominate, though? Offensive safety. As in, I’m going to buy a truck so heavy that everyone dies in a collision except me, and tall enough that I can see everything on the road except a 3-foot person in the 6-foot space in front of my car.
Taken together, it doesn’t exactly suggest to me that there’s any appreciable consumer market for keeping other people safe.

Do you notice what actually does predominate, though? Offensive safety. As in, I’m going to buy a truck so heavy that everyone dies in a collision except me, and tall enough that I can see everything on the road except a 3-foot person in the 6-foot space in front of my car.
Excellent analysis. Thank you.
I’d add this cherry on top of your excellent sundae:
The fact that nobody around me can see over / past / through my vehicle, so I’m actively vandalizing the safety of all other traffic, makes me warm and fuzzy all over.

An autonomous system could, perhaps, be made safer with the right investments. I assume. But I don’t see any clear reason to expect those investments to happen. You can repeat “38,000 killed by drunk drivers” all you want, but what assurance can you offer that there won’t be 39,000 or 100,000 people killed by overreliance on flawed autonomous systems?
First, because obviously these systems aren’t going to be deployed in large numbers if their safety is worse than that of human drivers. It’s abundantly clear that people overestimate novel risks and underestimate known ones, which means that people simply won’t accept a system that isn’t substantially better than a baseline human. Whether that means 2:1, 10:1, or 100:1, I couldn’t say, though 10:1 sounds about right to me.
Second, billions will continue to be poured into these systems because there is just so much at stake. People want to get in their car, press a button, and then take a nap or screw with their phone or whatever while they are driven to their destination. Even if somehow safety were not a factor, there would be this demand.
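For a sense of scale, here is a back-of-the-envelope sketch of what those multiples would mean. The baseline rate (roughly 1.3 deaths per 100 million vehicle-miles) and total mileage (roughly 3.2 trillion miles per year) are approximate US figures; the multiples themselves are, as I said, guesses:

```python
# Back-of-the-envelope: annual US road deaths under an "N times safer" system.
# Baseline rate (~1.3 deaths per 100M vehicle-miles) and total mileage
# (~3.2 trillion miles/year) are rough US figures; the multiples are guesses.

HUMAN_DEATHS_PER_100M_MILES = 1.3
US_VEHICLE_MILES_PER_YEAR = 3.2e12

baseline = HUMAN_DEATHS_PER_100M_MILES * US_VEHICLE_MILES_PER_YEAR / 1e8
print(f"human baseline: ~{baseline:,.0f} deaths/year")  # ~41,600

for multiple in (2, 10, 100):
    print(f"{multiple:>3}:1 safer -> ~{baseline / multiple:,.0f} deaths/year "
          "if every mile were autonomous")
```

Even the modest 2:1 case would be on the order of 20,000 lives per year.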

Do you notice what actually does predominate, though? Offensive safety.
While this is a good observation, it’s not analogous. There is a direct adversarial relationship between cars in relation to mass: the bigger car wins in a collision. And so there is a race to create ever more massive vehicles. Some of this is fake: giant SUVs tend to do worse in rollovers, and a fair number of accidents are single-vehicle, so the advantage isn’t as much as one might think, but it’s still there.
No such game-theory problem exists with self-driving cars. A car that can successfully bring its passengers to the destination is a safe car. This isn’t Mad Max where you can just attach a cowcatcher and plow through a row of cars. Short of breaking the law–which can be dealt with in courts if required–autonomous cars don’t have any means of externalizing their costs, at least in terms of safety (road use is a different story).
There is one thing I vaguely worry about. I suspect that a significant majority of people actually are above-average drivers, and a minority trigger most of the situations that lead to accidents. Most of the time, defensive driving by the majority prevents these situations from developing into actual accidents. Sometimes this means not gunning it into the intersection because someone is running a red light. Sometimes it’s getting out of the way of someone that’s changing lanes into you. Sometimes it’s backing up because the person in front of you decided to do so and isn’t paying attention. And so on.
Autonomous vehicles may not have the kind of “intuition” that human drivers have. Even if they don’t make mistakes, there may be accidents that would not have otherwise happened because the systems aren’t defensive enough.
Put another way, a small subset of drivers are constantly injecting a large amount of chaos into the road system, but that chaos is mostly absorbed by the rest of the drivers. The result is still terrible, but it could be much worse.
Then again, a bunch of these situations can be caught by an autonomous car, like someone suddenly braking on a highway. The car will react instantly. The person may not.
Also, this problem goes away once the fraction of autonomous cars gets high enough. That will take a while, though, even in the optimistic scenarios.

…
Also, this problem goes away once the fraction of autonomous cars gets high enough
…
Excellent post overall. I’d add one thing to that tiny snip:
Also, this problem goes away once the fraction of autonomous cars gets high enough multiplied by how much the worst drivers are the early/earlier adopters of autonomous cars.
If most of the shitty, heedless, dangerous drivers are people with substance abuse problems, minor criminals, and/or people from the bottom of the SES scale, they’ll be the last adopters, not the first.
So we may see lots of AI car adoption by the folks who don’t crash anyhow, while the hard core of frequent crashers remains stubbornly human-driving until very late in the decades-long transition. In that case we will not see the aggregate reduction in accidents over time that we might expect.
I do recall from my youth when I was a frequent flyer at a particular traffic school that there were some spectacularly bad drivers out there who majorly crashed at least annually and sometimes more like every couple of months. I was never able to really determine what was missing from them, but it was a persistent pattern.
Me? I just drove fast but didn’t crash. The intervening 40 years haven’t changed much.
Agreed with your addendum. And the problem may be even worse than that. Imagine you are the last person driving manually, that every other car has near-perfect accident-avoidance systems, and that you are an asshole. What are you going to do? Plow through every red light as if other cars don’t exist, of course! And drive on the wrong side of the road and a bunch of other things.
While perhaps we shouldn’t expect things to be taken to quite that extreme, there may well be a dynamic effect at work where the worst drivers behave even worse than before, essentially because they can get away with it.
If driverless cars only work with other driverless cars, they won’t work. In general, the strategy of trying to control the environment rather than have AI adapt to the environment will fail. There is just too much complexity. Even if all the cars are driverless, there’s nothing stopping a dog or a deer from running into the road, or a large piece of trash being blown into the road, or an obstacle like a blown semi tire winding up in your path, or whatever.
You can’t make every road in America a perfectly controlled environment, or even a perfectly predictable one. So any AI that navigates the roads must be able to solve its own problems.
The other day I was driving on the freeway and there was something on the road ahead of me. It was a cardboard box. Normally, no big deal to run one over if you can’t swerve, but I noticed a pickup truck with a bunch of moving stuff piled in the bed pulled over maybe 1/8 mile ahead. So I slowed down and swerved around the box - and sure enough, it was a large tower computer and some other stuff. Hitting it at freeway speed would have been bad.
On the other hand, there is often trash in the road that you simply have to drive over, because it’s not safe to swerve. Ever had a plastic bag fly in front of your car close enough that you decide you have to just hit it? How would an AI handle that? How does it know the plastic bag is safe to hit? I can tell by noticing how it flies in the wind that it doesn’t have bricks or something else in it. An AI? I doubt it.
Assholes will presumably care about their insurance rates. If AI cars have fewer/lesser accidents, insuring them will be cheaper. Insurance companies may also be willing to offer lower deductibles if the driver had left the driving to the AI when an accident happened as a way to incentivize drivers to actually use AI driving.
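The premium logic is simple in outline, even if real actuarial work is not. Here’s a sketch where the premium tracks expected loss (claim frequency times average claim cost) plus the insurer’s overhead; all the numbers are invented:

```python
# Simplified premium sketch: premium tracks expected loss plus insurer overhead.
# All frequencies, claim costs, and the expense load are invented numbers.

def pure_premium(claims_per_year: float, avg_claim_cost: float) -> float:
    """Expected annual loss per insured vehicle."""
    return claims_per_year * avg_claim_cost

def gross_premium(expected_loss: float, expense_load: float = 0.35) -> float:
    """Gross up for overhead/profit taken as a fraction of the premium."""
    return expected_loss / (1 - expense_load)

human = pure_premium(claims_per_year=0.05, avg_claim_cost=18_000)
ai = pure_premium(claims_per_year=0.02, avg_claim_cost=18_000)  # fewer claims assumed

print(f"human-driven: ~${gross_premium(human):,.0f}/yr")  # ~$1,385
print(f"AI-driven:    ~${gross_premium(ai):,.0f}/yr")     # ~$554
```

Cut the claim frequency and the premium follows, assuming claim costs stay comparable.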
Criminal liability may also be less likely if one left the driving to the AI, once the AI is good enough.
There might come a point where some AI safety features are mandated by law. Making sure all cars on the road have AI that communicates to cooperate in driving safely is a collective action problem that regulation is made for. If we’ve mandated seat belts, helmets on bikes, no smoking indoors, and masks, we can impose yet more considerateness on idiots and assholes.

If AI cars have fewer/lesser accidents, insuring them will be cheaper.
Repair costs of cars with autonomous sensors are substantially higher, because of the higher-cost replacement parts as well as the technical expertise/equipment needed to repair/recalibrate them. Hopefully that will come down over the long term, but it does offset some of the benefits of AVs.