Self-driving cars: kill one of me or two of them?

I simply made the observation that none of these common trolley examples actually “work” (that is, none of them describes a situation a defensive driver could conceivably find themselves in)

I haven’t ruled out that such a situation might be hypothetically possible, but I’m waiting for an example. I find it amazing that this topic has been talked about so much, and yet the examples often cited just don’t work.

This isn’t even rational, so I’m not sure how to respond.
If a hypothetical is flawed and won’t actually happen, then it won’t actually happen.
And meanwhile, some very rare conditions are indeed very rare and don’t magically become common.

Plus of course you ignored the points I made. Do you not think a company would be criminally liable if they actually had code that said “Kill A to avoid killing B”?

Well, I’ve driven cars for decades, and now, like I say, I ride a motorcycle in a city notorious for its traffic fatalities.
Strange and dangerous actions happen all the time around me, and it’s necessary to give myself the space and time to react. Like I say, one very simple principle is: “The more constrained my situation becomes, the slower I ride”. You don’t speed into a narrow path where you’re clean out of options if one of the drivers around you does something stupid. Because that’s often exactly what happens.

And again, I responded in detail how a responsible driver should approach these situations. And no-one has pointed out any error in the approach that I described.

I showed you several examples. I am truly amazed (as someone who drives on highways and in cities all the time) that you don’t understand the pretty basic fact that if you drive a car you might end up in a situation where you end up killing someone (including yourself). Yes, defensive driving will reduce that chance, but it happens. Everyone who has driven for long enough has encountered situations close to this (I know I have: several occasions where, if object X had been slightly closer or further to the right, I would have hit it).

This is not some crazy edge case; it’s a thing that is happening right now to someone on a road somewhere. You are driving along, at the speed limit, observing what’s around you, following all road laws. Your lane becomes blocked by an object that is closer to you than your braking distance, and there are objects in the lanes to your right and left. You can either hit the object in front of you, the object to your left, or the object to your right. As humans, though, we aren’t going to carefully analyse the ethical implications of hitting those things; we just react.

Because we are humans and aren’t going to have time to analyse the fact that the narrow alley is there, big enough to fit us, unobstructed, etc. etc. in the

A few years ago I was driving in the Rocky Mountains, and a large rock rolled onto the road right in front of me. I could have swerved to the right to avoid it, but I had been looking at that shoulder for miles as it was soft and crumbly and a long drop on the other side. Also, it was winter and even though the road looked okay there was black ice around.

So, I elected to just hit the damned rock. There was a massive bang and the car lurched to the right anyway, but only part way onto the shoulder, which held fine.

Now, if an AI had been driving, would it understand that the shoulder was soft and dangerous? Would it understand that there was a large drop on the other side? Would it assume there might be black ice? Or would it attempt the swerve?

But there is a more important point. Suppose it swerves and goes over a cliff and kills the passenger. Who is liable? What if it doesn’t swerve, but hitting the rock throws the car out of control and kills the occupants?

It seems to me that in a case like this, no matter what the car chooses to do, if there is an injury or a fatality there are legal ramifications. As humans, we tend to give other humans a certain amount of moral/legal license when split-second decisions go bad. Are we going to do the same for automated cars?

The legal issues could kill self-driving cars just as they killed the Segway as a mass market people mover. Once cities started regulating Segways like bicycles, that was the end of the dream. So it may be with driverless cars. Or maybe not - this is an area where the future is completely unpredictable.

You’re right in that the future is unpredictable. I initially thought that the legal issues would kill SDCs in their infancy. See, the problem is, suppose you’re Waymo, facing a lawsuit for someone your SDC killed. But your SDC is amazing: for every person it got killed, 19 people owe their lives to you. Well, in a court of law, you don’t get credit for that, and technically a jury or judge (depending on the state) can assess effectively unlimited damages for killing just one person (though this might become a matter for Federal courts, subject to review by Federal judges, which might keep it reasonable). Even though you saved 19.

And if it’s a $1 billion judgement, Waymo has to pay it. They have the cash to pay it, and can ultimately be forced to pay up through actions by law enforcement.

Contrast this to a world where all 20 people died from human-driven cars. Nearly all humans have barely any assets, so for each of those deaths the responsible driver (or their estate) usually isn’t even deep-pocketed enough to be worth suing. In many states, the minimum liability insurance can be as little as $30k.

The corporations trying this are among the most powerful in America, though, and technical solutions exist where you could reasonably argue the car is mostly blameless even when someone is killed. After all, there’s going to be detailed video for nearly* every fatal crash an SDC is involved in, from multiple angles, plus a log of what the computer thought the situation was and why it chose to do what it did.

There should be legislative protection. I think if an SDC can be shown to be, say, 10 times safer than a human driver, then there should be protection similar to the protection given to vaccine manufacturers. Vaccines save countless lives, but occasionally harm a recipient. So if you’re a vaccine victim, you can’t sue; the law gives the drug company that made it immunity. You go to a board and are given a certain amount of money depending on the injury.

*the data recorders are flash drives embedded in the car somewhere, obviously some extreme accidents could destroy them, though presumably they will be shielded against fire and probably mounted under the driver’s seat or down in the floor somewhere.

Do you mean the video of the Tesla supposedly saving its occupants from crashes? Those were poor examples that would be avoided by defensive driving, something the Tesla autopilot was not doing in each case, which I find surprising.

You need to come up with a cite for this right here. All of it.

Which part? http://www.nytimes.com/1999/07/10/us/4.9-billion-jury-verdict-in-gm-fuel-tank-case.html

Here’s a cite that is an example of the upper end. There are plenty of million-dollar judgements against automakers. But today’s cars don’t record the exact sequence of events from multiple camera angles, so there’s uncertainty, uncertainty that the attorneys defending the automaker can exploit. An autonomous car that causes a fatal accident while in complete control, and whose recording survives, leaves no such uncertainty.

And under what legal mechanism does the manufacturer get credit for saving 19 people? Let’s suppose I’m a mad scientist and I’ve invented a cure for cancer that works. Instead of waiting on the FDA, I just dress up like a doctor, go to a cancer ward, and inject my cure into 20 patients. It works, and 19 patients are cancer-free. That doesn’t protect me from charges for murdering the last one… and potential life in prison for that, even though I saved 19 people.

How about the part where the law “doesn’t give credit” for saving 19 people during a car accident.

Quote:"Basically, instead of thinking of the problem like a series of edge cases, and then insisting that some edge cases are so unlikely that they will never happen in the next 30 years with hundreds of millions of autonomous vehicles on the road, just look at the algorithm.

You want to get the outcome of :

minimize damage to the occupant of the vehicle
minimize damage to other people
minimize damage to the vehicle"

Let’s just reverse 1 and 2, then, since the reason for this order seems to be “no one will buy our cars if we did it the other way,” which is another way of saying “let the market decide,” which is pretty much saying “who has the gold makes the rules.” Well, to heck with them. In fact, may I modestly propose that in any circumstance where the vehicle has to choose between the occupant and anyone else, including irresponsible scout troops, it simply activates the auto-destruct and reduces vehicle and occupant to atoms? I’m not saying we wouldn’t get our hair mussed, but it’s one less resource-guzzling appliance off the road. My Birkenstocks must be around here someplace…

Then all you’d have to do to murder someone is toss a dummy or someone you don’t like in front of an autonomous car on a mountain road. Car, in order to save the person outside the vehicle, drives either into the mountainside or off the edge.

Or, if the car is programmed to save the passengers, all you have to do is push your victim in front of the car, which will drive over him.

Right, and I responded to those examples. What has not happened yet is anyone pointing out any error in my responses.

This is why I’m still saying these hypothetical dilemmas don’t work; a safe driver would not find themselves in these situations.

I am truly amazed that you’re saying that after I said: “I’m not saying there are no possibilities for accidents. And indeed there are unavoidable accidents, for even a perfect AI.” [Emphasis in original]

All I have said is that all the examples so far of an AI needing to choose to kill Bill or Ben don’t work; they rely on the AI doing something irrational first.
I don’t even rule out that some kind of dilemma situation might be hypothetically possible; I’m just saying none have been demonstrated so far. Which is fascinating for a problem which has been discussed so, so much here and in the media.

Firstly, why am I travelling at the speed limit if I’m boxed in?
Secondly, the proper distance to keep from the car in front is far enough that you can brake in time. So it can’t be a “blockage”; you mean, say, a car swerving across, and that is something I can be alert to: I can notice the positions of cars in the lanes beside me, and be very concerned any time I’m in the blind spot of a car travelling at approximately the same speed as me.

But finally, yes of course there are hypothetical situations where all you can do is brake hard and brace for impact, and that’s what an AI would do too.

Not sure where you were going with this unfinished sentence, but again I have to emphasize that there is no leap of faith when riding my bike, otherwise I’d probably already be dead. I’m being serious about that.

If I’m boxed in, I slow down. If I don’t know whether I’m about to be boxed in, I slow down until I’ve evaluated the situation. I’m not afraid to (gently) stop if something very unusual is happening in front of me. You do not speed into dangerous situations and hope for the best.

Only if there’s no other place for the car to go. If so, I consider that a reasonable outcome. If you jump in front of a city bus, nobody expects the bus driver to swerve the bus into the sidewalk or a lamp post or something to avoid killing you. It’s a cliche, even: jump in front of a city bus, and you’ll be lucky if the driver hits the brakes before killing you.

Your responses are bullshit. You assume a form of constraint on the problem that doesn’t exist. You are assuming that because you personally can drive - or ride - very smartly, avoiding most situations where a crash is unavoidable, your personal experience will somehow hold true at the scale of hundreds of millions of vehicles. This is incorrect. Your personal experiences are as worthless as a lottery player talking about how they never win.

None of us are saying that totally unavoidable, no good choice situations are going to be common for autonomous cars. Hell, the average driver may not have a no good choice situation happen in their entire driving lifetime. But they are possible, and you need an algorithm that can find the best choice when no perfect choice exists.

I’d say the average driver would almost certainly never have such a situation in their lifetime. You are positing incredibly unlikely scenarios.

I’m not even sure what your “bullshit” point even is, so let me summarize again what I’m saying.
I’ve only made 2 points in this thread:

  1. The examples of dilemmas a self-driving car would face don’t work, i.e. they rely on the AI doing something stupid first. Note, I’m not making the claim that no such situation is possible, only that no example has been given yet.
  2. Self-driving AI is not going to intentionally take lives, no matter what happens. It would be a legal nightmare. In an absolute worst case where no collision can be avoided, it will just brake hard.

What part of this do you disagree with?
And what part of this implied I thought all accidents could be avoided (especially since I’ve said the opposite, explicitly, twice now)?

I meant below the speed limit (as in not speeding)

You can’t ever keep a “proper distance” that will prevent all crashes. It is delusional to think you can. There will always be cases where an object appears in front of you too quickly for you to stop, for countless reasons (e.g. the car in front of you crashes, a car enters your lane, a pedestrian crosses the street, a truck reverses out, a car leaves the oncoming lane, etc.). Additionally, there is no guarantee (as in those cases) that you aren’t going to see the stationary/oncoming object at the same time as you see the objects on your left or right (and the objects don’t need to be cars; do you slow down to walking speed on the freeway every time you pass a bridge?)

That is just something you have to accept if you are driving. You can minimize the chances of that happening by driving defensively. But they happen, they aren’t weird edge cases, they are actually happening to someone right now somewhere in the world.

I said proper distance from the car in front, i.e. in your lane.
You always should keep that car out of your braking distance, and I’m interested to hear any excuses for not doing so.
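To put an actual number on “braking distance”, here’s a rough back-of-the-envelope sketch (the 1.5 s reaction time and 0.7 g of deceleration are assumed round figures for a dry road, not measurements):

```python
# Rough stopping-distance sketch: reaction distance + braking distance.
# Assumed figures: 1.5 s perception/reaction time, 0.7 g of braking (dry road).
G = 9.81               # gravity, m/s^2
REACTION_TIME = 1.5    # seconds (assumption)
DECEL = 0.7 * G        # m/s^2 (assumption)

def stopping_distance(speed_kmh: float) -> float:
    """Metres covered from first noticing a hazard to a complete stop."""
    v = speed_kmh / 3.6                # km/h -> m/s
    reaction = v * REACTION_TIME       # distance travelled before braking starts
    braking = v ** 2 / (2 * DECEL)     # from v^2 = 2 * a * d
    return reaction + braking

for kmh in (50, 80, 100, 120):
    print(f"{kmh} km/h -> ~{stopping_distance(kmh):.0f} m")
```

At 100 km/h that works out to roughly 100 m from spotting the hazard to standing still, which is the kind of gap I mean when I say keep the car in front out of your braking distance.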

Objects appearing in 3 directions at once, without me being able to anticipate any of them? Like when?

Again, what’s a safe speed depends on what kind of road we’re talking about. On the freeway, lanes are very wide, bends are very gentle, and if there are bridges, they are usually far from flush with the side of the road.

But if the freeway were suddenly to narrow, such that I had to pass under a bridge with pillars just a couple of metres from the side of the road (close enough that a human could conceivably run out in front of me in the last seconds), yeah, I’d slow down. How much I slow down would depend on how narrow we’re talking about.

Yes obviously. It’s getting annoying to have to re-confirm in every post that there are undoubtedly unavoidable accidents. I never said otherwise.

I like the one at 40-some-odd seconds. There is a double human error compounding the situation:

a) a car that clearly missed its exit stopping cold on the interstate (as opposed to on the unoccupied shoulder) to make the exit ramp at the last possible second.

b) the car behind the car above having both red tail lights out (the only light that lit up was the rear right yellow blinker).

And the AI still misses the wreck.

It sucked that the pig got clipped, but it was still able to run away immediately at speed. If it was hurt, it can’t have been catastrophically bad.

Then what are we arguing about? All we’re talking about in this thread is what to do when the accident is unavoidable.

Also, another detail that you probably don’t realize: the way an AI sees the world isn’t quite what you think. Instead of perceiving objects as having a definite, solid presence in a specific spot, a better method is to treat each object’s position and boundaries as having a probability distribution, more like a cloud of possibilities. This is because there’s sensor error, vehicle control error, the object moving on its own, and other factors that lead to a certain amount of uncertainty.

So when the car plans a path, it’s actually discretely adding up these hypothetical collisions with the object, even if there’s only a 1% chance that, say, the red car in the other lane is actually 1 foot to the right or is going to suddenly swerve into us.

So in the math, at a very low level, it needs to value these collisions properly, even if they virtually never happen. It needs to weight them by the velocity difference, the amount of armor this particular car has against collisions from that angle, which seats are occupied by passengers, and so on.
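As a minimal sketch of what one term in that sum might look like (every number, weight, and function name below is invented for illustration, assuming a simple Gaussian position error; it’s not any real planner’s code):

```python
import math

def collision_probability(gap_m: float, sigma_m: float) -> float:
    """Chance the obstacle actually overlaps our planned path, treating its
    position estimate as Gaussian with standard deviation sigma_m (metres)."""
    if sigma_m <= 0:
        return 1.0 if gap_m <= 0 else 0.0
    return 0.5 * math.erfc(gap_m / (sigma_m * math.sqrt(2)))  # one-sided tail

def collision_severity(closing_speed_ms: float, impact_side_occupied: bool,
                       armor_factor: float) -> float:
    """Toy severity score: grows with the square of closing speed, is worse if
    the impacted side has a passenger, and is scaled by how much structural
    protection that side has (armor_factor in (0, 1])."""
    severity = closing_speed_ms ** 2
    if impact_side_occupied:
        severity *= 2.0        # assumed penalty for an occupied seat
    return severity * armor_factor

def expected_cost(gap_m, sigma_m, closing_speed_ms, occupied, armor_factor):
    """Expected cost of one hypothetical collision along a candidate path."""
    return collision_probability(gap_m, sigma_m) * collision_severity(
        closing_speed_ms, occupied, armor_factor)

# Even a ~1%-likely collision contributes a non-trivial term to the plan's cost:
print(expected_cost(gap_m=1.2, sigma_m=0.5, closing_speed_ms=15.0,
                    occupied=True, armor_factor=0.6))
```

The planner adds up terms like this for every obstacle, at every step along every candidate path, and picks the path with the lowest total.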

In addition, there’s always going to be a nonzero chance of classification error. That snow falling might actually be a wall. That distant object might be another car, not an object on the side of the road. One way we can work this out is to collect data on what the classifier thought an object was in earlier observations from farther away and what it corrected itself to, and both improve our model, and also store how often this happens, so that we have an actual, usable uncertainty. The math of planning actually works just fine if we’re only 70% sure that distance object is another vehicle and 30% it’s a side of the road obstacle. We can choose a course of action that is optimal for the union of the two cases.