By “work”, I mean in a technical sense. As in, this algorithm is simple and smooth enough that the car following it is unlikely to do something really stupid or unexpected as a consequence.
True, but GPUs are advancing fast (and remember, real autonomous cars don’t actually exist yet). I am sure the lawyers are analyzing the hypothetical computer vision technology of X years from now, when autonomous cars actually hit the roads, not the current technology.
Said every programmer every time he put something into production.
Regards,
Shodan
This is its own legal conundrum unrelated to trolley problems. You are programming a car to deliberately break the law. Does that open you up to legal liability, either directly (is it illegal to conspire to break traffic laws in any jurisdiction?) or indirectly (can you get sued when your customer gets a ticket for crossing that double white line?).
You keep turning the question into one about an algorithm. But that doesn’t answer the question: what should humans do?
Well, our system of torts isn’t exactly based on assigning guilt to those who do wrong; it’s based on penalizing wrongdoers’ wallets. And the ideal is that more serious wrongs cost more, which is essentially a judgment made by judges and juries. If anything, the system of civil claims probably provides a better starting point for this debate, at least as compared to navel-gazing and fretting over whether it is worse if a little girl is killed by a drunk driver or by one of Elon Musk’s corporate software departments.
You all seem rather confident in the ability of the programmers of these machines to create self-driving cars that will accurately execute their intended algorithm 100% of the time. Any major piece of programming contains bugs.
Less than a year ago, Uber conducted a short-lived test of self-driving cars in San Francisco. Their cars were seen taking illegal “hook” right turns across bike lanes. Presumably, this particular issue has since been corrected. But in my view this should be seen as illustrative, not an anomaly. If they can’t get something this basic right out of the gate (it’s in the state’s traffic code), I’m not so sure they’ll get the more nuanced issues smoothed out with a bit more testing.
What happens when the CPU overheats? Or there’s a memory leak? Or a sensor malfunctions? Or the program misinterprets an ambiguous sensor reading? Or it just plain glitches out and acts in an unexpected way?
What should happen if someone dies due to a self-driving car’s software bug?
Well, yeah. Not 100%, but very close. Waymo (Google) is at about 99.999%. And that’s 2016 data; they are so confident in the improvements made since then that they are now sending autonomous vehicles out without a safety driver.
What happens if your airbags detonate because of a bad sensor? What happens if your airbag cartridges have metal debris in them and you get into a minor crash? What happens if your brake lines rupture? What happens if a firmware bug in your ECU causes a sudden burst of unintended acceleration?
Driving is dangerous, and autonomous cars won’t be perfectly safe. The hope is that they will be a lot safer, however.
Same thing as when any other piece of software fails. We’ve been handling the legal consequences of failures of important computer systems for decades (we were very nearly all wiped out because of one).
This is different to a computer system operating exactly as intended, and then choosing to kill someone.
When someone is hurt, well, there’s going to be a recording of the whole incident on the vehicle’s flash storage. And the matter will be settled either in arbitration or in court. Either way, the company will pay something. How much they pay will be variable depending on who the victims were, how good their attorneys were, random chance, and so on. The company will carry an insurance policy against this, the same way a trucking firm or cab firm carries insurance - though large companies will mainly self-insure as this is cheaper.
They will pass the average monthly cost of these payouts down to the autonomous vehicle renters/owners. You will have to pay a monthly or yearly subscription fee to have an autonomous vehicle. The main part of that fee will be the insurance, and the rest will cover the constant software updates and map updates needed. It’s possible that some manufacturers won’t even sell autonomous vehicles at all; they’ll just rent them. Most individuals probably won’t want to pay for their own autonomous vehicle because this will not be cheap: it makes much more sense to send it out into a pool and have it collect revenue like a taxi.
You’ll at least be able to play back the recording in court and show that the vehicle had to choose between killing someone outside the car or inside the car, and thus why it chose the path it did. The manufacturer would still have to pay either way, but if the decision made by the car was clearly sensible, the judge/jurors will probably not be so angry that they assign excessive punitive damages.
The link I gave is one where the jury assessed a $4.9 billion judgment against GM, who…well…have a long history of putting substandard equipment into cars that kills dozens of people.
I think you’re too sanguine that the road is always predictable. A tire blowing out on a nearby car or something flying off a truck into the road can create unexpected dangers, for instance.
Or just consider the classic example of a child running out from between parked cars. Hard braking alone carries some risk of hitting the child. Swerving into the next lane over reduces the risk of hitting the child, but increases the risk of hitting another car. If it’s a four-lane road, traffic in the next lane over is moving the same direction as you, and an accident might be minor. If it’s a two-lane road, then you might face a head-on collision, but maybe there is room for the oncoming driver to stop in time, or reduce speed to render the collision likely non-fatal.
A human driver probably can’t weigh all of these considerations in real time, but a computer would have more capacity to do so, and presumably would be expected to. The question then becomes how?
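For what it’s worth, here is a toy sketch of what that kind of weighing could look like in code. Everything in it is invented - the maneuver list, the collision probabilities, the severity weights - it’s just meant to show the shape of the problem, not anyone’s actual algorithm:

```python
# Toy "minimize expected harm" chooser. All numbers are invented for
# illustration; they do not come from any real vehicle or study.

candidate_maneuvers = {
    # maneuver: (probability of a collision, estimated severity if it happens)
    "hard_brake_straight":    (0.30, 0.9),  # may still hit the child
    "brake_and_swerve_right": (0.10, 0.4),  # same-direction traffic, likely a minor crash
    "swerve_into_oncoming":   (0.05, 1.0),  # child safe, but a possible head-on collision
}

def expected_harm(p_collision, severity):
    """Expected harm = chance of a collision times how bad it would be."""
    return p_collision * severity

best = min(candidate_maneuvers,
           key=lambda m: expected_harm(*candidate_maneuvers[m]))
print(best)  # -> "brake_and_swerve_right" with these made-up numbers
```

And of course the whole argument in this thread is really about who picks those severity numbers, and on what basis.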
This is an interesting question to casually broach at a dinner party, debate group, internet forum, or some such. But realistically, I think we overestimate how important this actually is for these cars. Trying to program cars with things like ‘moral algorithms’ and software that supposedly prioritizes a driver’s safety over pedestrians is over-complicating the matter, and would just create more room for error and liability. Realistically, a self-driving car will simply slam on the brakes when it figures that a frontal collision is about to happen. No swerving or drifting or whatever. That kind of action would probably cause more harm anyway. We’ll just have to rely on the prettyyyy good brake reaction times (I’ve read that the average brake reaction time of a driver is 2.3 seconds. Would this mean that self-driving cars could effectively begin braking an average of 2.3 seconds faster than humans? That’s pretty huuuge if true!)
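To put a rough number on that (back-of-the-envelope only, and simply taking the quoted 2.3-second figure at face value):

```python
# Back-of-the-envelope: distance covered during the human reaction time.
# 30 m/s is roughly 108 km/h (~67 mph); 2.3 s is the figure quoted above.
speed_m_per_s = 30.0
reaction_time_s = 2.3

distance_before_braking = speed_m_per_s * reaction_time_s
print(f"{distance_before_braking:.0f} m travelled before the brakes even touch")
# -> 69 m travelled before the brakes even touch
```

If a computer really can shave most of that off, that’s tens of meters of extra stopping room at highway speed, which matters far more than any trolley-problem tie-breaking.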
No one doubts that self-driving cars will have collisions and that people will die because of them. But they don’t have to be perfect. They just have to be better than humans (this is not a very high bar).
And you’re both wrong.
Just think about it from a legal perspective if nothing else; you think people would be happy with an algorithm saying “Hey, I figured knocking your child over was better than killing the occupants, so…”
I have worked on safety-critical systems. Unless you’re working on a military system you don’t get to deliberately kill people, even to prevent others dying.
Any such harm is always implicit, e.g., “If the system becomes critical, allow water to overflow” (which we know may cause a storage tank to blow out, which we know may harm people in the storage facility, which we consider less important than preventing a meltdown). But nowhere in the code is “Kill the people in the storage facility”.
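To make that concrete, the rule tends to read something like this (a made-up fragment in the spirit of the overflow example, not code from any real plant):

```python
def handle_plant_state(system_critical: bool) -> str:
    """If the system becomes critical, allow water to overflow.

    Note what is and isn't written here: the code only names the
    lesser-evil action. That the overflow may blow out a storage tank,
    and that people may be near that tank, appears nowhere in the
    program - it lives in the hazard analysis, not in the code.
    """
    if system_critical:
        return "OPEN_OVERFLOW_VALVE"
    return "NORMAL_OPERATION"
```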
Not at all. I’m not saying there are no possibilities for accidents. And indeed there are *unavoidable* accidents, for even a perfect AI.
What I’m dubious about is this dilemma coming up. All the examples I’ve heard need the AI to do something dumb first to get itself into the situation.
Please at least read our examples before making that statement. We’ve talked about constrained lanes, girl and boy scouts suddenly jumping out of the bushes onto a live highway (which is a death sentence if you do that today with human-driven cars), and situations where, on a crowded highway, a crash leaves the SDV with nowhere to go.
In that case, the solution is “the oncoming vehicle is also self-driven, and both vehicles come to a stop without hurting anyone”.
This hypothetical can only occur due to human error. Eliminate the human factor and you eliminate the problem.
Apart from a link to a webpage of miscellaneous videos, which I don’t have time to watch, I’ve responded at length to all of the examples posed.
A human jumping into the road is a problem even a road system with 100% SDCs will not solve.
The engineers working on the safety-critical systems of an SDC right now - all the thousands of them probably working this week - must deal with the design requirement that the vehicle make the best decision feasible on a mixed roadway that will be mostly human-driven for the design life of the system they are working on.
They don’t get the luxury of working on a system where you can just default to a “safe state” without thinking about it. Maybe you worked on factory equipment, controlling a CNC machine or something. Activating the regenerative brakes on all the motor controllers on that CNC machine (or killing the power completely) always puts you into a safe state, no matter what state the CNC machine is in. Unfortunately, for a lot of safety-critical equipment today, there is no single perfect failure recovery path. If you have an electronics fault and you’re controlling a ventilator, killing power is going to kill the attached patient. If you have a systems failure and you’re controlling an electronic stabilizer for an airliner at altitude, you had better have a backup, because if you completely stop active stabilization, the airplane will experience “irrecoverable loss of control”.
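A hypothetical sketch of the difference (the fault contexts and responses here are invented, purely to show the shape of the problem):

```python
# Invented example: one response covers every fault for the CNC machine,
# while the vehicle's "least bad" response depends on context, and none
# of its options is guaranteed harmless.

def cnc_fault_handler(fault: str) -> str:
    # Cutting drive power / engaging the brakes is always a safe state
    # for a machine tool, whatever the fault was.
    return "KILL_SPINDLE_AND_BRAKE_AXES"

def vehicle_fault_handler(fault: str, context: str) -> str:
    # There is no "just cut power" equivalent here; every branch is a
    # design-time judgment call about which risk is least bad.
    if context == "parked":
        return "STAY_PARKED"
    if context == "highway_with_shoulder":
        return "DEGRADED_MODE_PULL_TO_SHOULDER"
    if context == "highway_no_shoulder":
        return "HAZARDS_ON_SLOW_IN_LANE"   # still exposes others to some risk
    return "CONTROLLED_STOP"               # default, also not risk-free
```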
"*I have worked on safety-critical systems. Unless you’re working on a military system you don’t get to deliberately kill people, even to prevent others dying.
What I’m dubious about is this dilemma coming up. All the examples I’ve heard need the AI to do something dumb first to get itself into the situation.
- "
You falsely claim it’s the AI’s fault if people jump in front of the autonomous car. I would like you to elaborate on this claim, as we have specifically given examples where it is the fault of the people who did the jumping.
Not the AI’s fault for being in a situation where a possible accident might occur, the AI’s fault for ending up in a constrained situation, where it faces the dilemma of “Who do I kill?”
For example, Wolfman, I think, posted the video of a truck suddenly cutting across the driver’s lane, and the driver being forced to swerve into the hard shoulder.
And he’s right that that’s a common kind of incident to occur, as drivers often don’t realize there’s a car in the lane beside them, in their blind spot.
There are things you can do to reduce the risk of someone cutting across you (e.g. don’t linger in another vehicle’s blind spot), but as we saw in the video, the risk is there even if you don’t do anything wrong.
Unfortunately though, this doesn’t invoke the dilemma. An AI just needs to swerve like the driver did. So then the hypothetical was proposed where now there’s a parked car in the hard shoulder, so the AI would have no escape route.
But the problem with this change is that any good driver would now have cause to be extra cautious: I don’t want to be passing two cars at nearly the same time right next to an exit. Let alone the fact that, legally speaking, you’re supposed to slow down when passing a broken-down car on the freeway anyway.
This is what I mean (and did already explain) about why these examples don’t work.
The physical roads that exist in the United States have the possibility of these constrained situations all over. They may be rare, but they do exist. If you’re just going to keep denying that this can ever happen:
a. I have to doubt your claimed qualifications. Programmers are used to having cases where an extremely rare condition may happen. It’s easy to just not handle that condition, assuming it’s so rare it will never happen. When you do that, usually what you find out later is that this “rare” condition is happening all the time and you now have a bug in your ticket queue (see the sketch after this list).
b. I have to doubt your driving experience. If you’ve driven a car, you would trivially see that this kind of thing happens all the time and if you had perfect sensors that were still only attached to the vehicle you were on, you would not be able to avoid every possible situation.
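To illustrate point (a): the classic shape of this bug is an “impossible” branch that nobody bothered to handle. An invented fragment, purely for illustration:

```python
# Invented fragment illustrating the "too rare to bother handling" antipattern.
def pick_escape_route(adjacent_lane: str, shoulder: str) -> str:
    if shoulder == "clear":
        return "SWERVE_TO_SHOULDER"
    if adjacent_lane == "clear":
        return "SWERVE_TO_ADJACENT_LANE"
    # "Can't happen": shoulder blocked AND adjacent lane occupied AND
    # something cutting in ahead. Until, one day on a real road, it does.
    raise NotImplementedError("rare case we assumed away")
```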