Self-driving cars: kill one of me or two of them?

You wouldn’t even bother to try to program a computer that can recognize that path A has a 99%-likely 40-ton truck coming this way, path B has a 99%-likely person, and path C has a 99%-likely trash bag in the wind, and choose a path based on that?

I guarantee you they are doing everything they can to recognize what paths are physically possible and what is in them; it would be criminally irresponsible not to.

This dilemma seems to be the only thing people talk about WRT self-driving cars.
I don’t think it’s that difficult in practice. Writing programs that make decisions that could potentially kill humans is something we’ve been doing since the 60s. They generally work like this:

  1. In general, do whatever you can to make a dangerous situation unlikely
  2. If a dangerous situation looks possible in the near future, what can be done to avert it?
  3. If you can’t avert it, what can be done to reduce its severity?

Taking this logic to self-driving “What-ifs”, it will mean that in the highly unlikely situation of being unable to find a swerve path that doesn’t end in a tree or human, the car will just brake hard to try to reduce the severity of the impact. There is no need to take a “Shall I kill Alice or Bob?” approach.
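
Purely to illustrate the shape of that priority ordering, here is a minimal sketch. The helper names, thresholds and numbers are all invented for the example, not taken from any real system:

    # Hypothetical sketch of the avoid -> avert -> mitigate ordering.
    # All names and numbers are invented for illustration.
    RISK_THRESHOLD = 0.01  # acceptable probability of a dangerous situation developing

    def choose_action(candidates):
        """candidates: list of dicts with an estimated 'risk' of a dangerous
        situation developing, a 'collision_prob', and a 'severity' score."""
        # 1. Prefer actions that keep a dangerous situation unlikely.
        safe = [c for c in candidates if c["risk"] < RISK_THRESHOLD]
        if safe:
            return min(safe, key=lambda c: c["risk"])
        # 2. Danger looks possible: try to avert the collision outright.
        averting = [c for c in candidates if c["collision_prob"] == 0.0]
        if averting:
            return min(averting, key=lambda c: c["risk"])
        # 3. Can't avert it: minimise expected severity (usually: brake hard).
        return min(candidates, key=lambda c: c["collision_prob"] * c["severity"])

    actions = [
        {"name": "maintain speed", "risk": 0.30, "collision_prob": 0.30, "severity": 9.0},
        {"name": "swerve left",    "risk": 0.10, "collision_prob": 0.10, "severity": 7.0},
        {"name": "brake hard",     "risk": 0.05, "collision_prob": 0.05, "severity": 2.0},
    ]
    print(choose_action(actions)["name"])  # -> brake hard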

ETA: Reducing the impact might mean hitting the tree if it’s much further away than the main obstacle: more time for the car to slow down. But talking about such a situation is playing devil’s advocate, as it’s just not going to happen.

There is a need to take that approach. Braking hard to reduce the severity of the impact is worthless if there is a 40-ton truck heading at 60 mph directly into that impact point.

As societies, we can choose whatever rules we want. But as people living in cities with presumably LOTS of self-driving cars in them, our rational choice is to prefer that there be a standardised set of rules that car manufacturers should adhere to (so, treat it as a safety issue) and that these rules should prioritise the least damage to the least number of people, regardless of whether they are the occupant or not.

While you’re busy thinking “what do I want my personal self-driving car to do?” what you SHOULD be thinking about is “what do I want all the OTHER self-driving cars around me to do?” And the answer is not “throw me (heh) under the bus to save the car’s occupant” when I’m the dude in the next car, or walking down the street, or waiting to cross at the lights.

Since you are, at any point, likely to be in the presence of many more self-driving cars occupied by other people than self-driving cars occupied by yourself, game theory suggests you should give up the warm fuzzies associated with your own vehicle treating you special, precisely in order to “buy” the privilege of other vehicles treating you more benignly.

So this is Janet doing The Trolley Problem instead of Chidi?

There are 3 trillion miles driven in the U.S. per year. And when self-drivers start making up a large percentage of those miles, it most certainly will happen many times. And someone is consciously making decisions about how to tell the computer who lives and dies in those cases.

There is another principle I omitted because I didn’t want my post to get too big:
The more constrained a situation becomes, the more dangerous it is.

If the road in front of me is clear, but the lanes beside me are packed with cars, does that influence how fast I drive? You betcha. I’m not going to drive at a speed where, if one of those cars were to swerve into my path without warning, I would have no option but to plough into it. Being boxed in is inherently dangerous.
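
To put rough numbers on that rule of thumb (the figures below are purely illustrative, not from any real vehicle spec): the speed you can afford is whatever still lets you stop inside the distance you can actually count on being clear.

    import math

    def max_safe_speed(clear_distance_m, reaction_time_s=0.5, braking_decel=7.0):
        """Largest speed (m/s) from which the car can still stop within
        clear_distance_m, allowing a reaction delay plus hard braking.
        Solves v*t + v**2 / (2*a) = d for v. Illustrative numbers only."""
        a, t, d = braking_decel, reaction_time_s, clear_distance_m
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

    # Boxed in, with only ~25 m of road I can count on staying clear:
    v = max_safe_speed(25.0)
    print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")  # about 15.5 m/s, ~56 km/h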

And how fast do I go round the archetypal “blind bend on a mountain road”? Very slowly, naturally.

So going back to the “What-if”, how did the self-driving car even get itself into a situation where there are no directions in which it could swerve and/or brake to avoid a collision, and it’s driving at a speed where some kind of collision is unavoidable?
In your hypothetical, how exactly does the car find itself in a situation where its only options are “hit truck” and “hit pedestrian”?

But in answer to your question, if there’s nothing it can do to avoid a collision, there’s nothing it can do. If a massive meteorite were to drop on the car, the AI would be similarly ineffective.

But that is a decision to kill Alice (assuming Alice is driving). This isn’t some bizarre edge case that will rarely happen in real life. It is a pretty basic, common situation that results in crashes on every highway in the world on a regular basis. You have two lanes of traffic, and one suddenly becomes blocked by a large immovable object (e.g. the truck in the example above). There are only two options: hit the truck, or swerve into the other lane (the hard shoulder in the example above). In the example above this was a trivial answer (as the hard shoulder was empty).

It is not a contrived thought experiment to imagine the perfectly common case that instead of being empty the other lane has a less immovable object (that has humans in or in front of it). By writing the code that decides what to do in this case you are absolutely solving the classic “Shall I kill Alice or Bob?” trolley problem.

Well, implicitly, perhaps.
There’s a big difference, though, between writing code that explicitly says “Kill pedestrian” and code that does everything it can to avoid any kind of collision, where the worst case is an unavoidable collision at a greatly reduced speed that still, sometimes, results in a fatality.

You didn’t complete the description of this scenario. What lane is blocked by the truck? If it’s my lane, I should be driving a good distance between the car / truck in front. If it’s the opposite lane, what’s happening in my lane?


I drive a motorcycle in a city with an appalling motor fatality rate (Shanghai), and have done for years. My secret has been defensive riding, anticipating possible problems, and being aware of when my surroundings are becoming more constrained. I become very cautious any time my escape routes look limited.
If these What-ifs were as hopeless as some here wish to posit, I’d be dead 50 times over. I regularly encounter situations like cars swerving across me without warning, overtaking cars coming at me in my lane, pedestrians running into the road, etc.

I’ve actually written simpler solvers, and the actual source looks more like np.min(path_costs), things like that. The computer never thinks of killing a pedestrian. It just works out the risk of a given path and chooses based on an algorithm. Current-gen technology won’t ever be 100% sure a pedestrian is there - maybe 95% sure - and it might think a plastic bag has a tiny chance of being a pedestrian too. So it has to use a weighted scoring system.
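
As a hypothetical illustration of that kind of weighted scoring (the class probabilities and severity weights below are invented; real perception output is far richer):

    import numpy as np

    # Invented severity weights per object class, for illustration only.
    SEVERITY = {"pedestrian": 1000.0, "vehicle": 100.0, "debris": 1.0}

    # Each candidate path carries the perception module's class probabilities
    # for whatever lies along it: "95% pedestrian", "mostly a plastic bag", etc.
    paths = [
        {"pedestrian": 0.95, "vehicle": 0.00, "debris": 0.05},  # path A
        {"pedestrian": 0.02, "vehicle": 0.97, "debris": 0.01},  # path B
        {"pedestrian": 0.03, "vehicle": 0.00, "debris": 0.97},  # path C: probably a bag
    ]

    path_costs = np.array([sum(p[c] * SEVERITY[c] for c in SEVERITY) for p in paths])
    print(int(np.argmin(path_costs)), path_costs)  # picks path C; no "kill" logic anywhere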

Our point is you do need an algorithm that is robust against these cases, no matter how seldom they might come up. What if it’s an interstate highway underpass and, out of the bushes, a whole troop of girl scouts jumps into the main highway lanes at this narrow point? There was no way the car - or any driver - could have known this would happen. So now the car can either drive into the bridge piling, killing its own occupant, or run over a girl scout.

My algorithm runs over the girl scout. She shouldn’t have gotten in the road. Amusingly, if there’s a choice between two girl scouts, the skinnier one - or the one who is alone, not in a bunch - is a lower-damage target (to the car).

Now, yes, the brakes are also fully engaged. The car is skidding to a halt as rapidly as the braking system can bring down the velocity - but this may not be enough.
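
A quick sanity check on “may not be enough”, with made-up numbers: the residual impact speed after braking flat-out over a given distance is sqrt(v0^2 - 2*a*d).

    import math

    def impact_speed(initial_speed, distance_to_obstacle, braking_decel=7.0):
        """Speed (m/s) remaining at the obstacle after braking hard over the
        available distance: sqrt(max(0, v0**2 - 2*a*d)). Illustrative only."""
        v_sq = initial_speed ** 2 - 2 * braking_decel * distance_to_obstacle
        return math.sqrt(max(0.0, v_sq))

    # At ~27 m/s (60 mph) with only 30 m of warning, hard braking still leaves
    # a substantial impact speed:
    print(f"{impact_speed(27.0, 30.0):.1f} m/s")  # about 17.6 m/s (~39 mph)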

I was talking about this example here, from an actual autonomous car*. It is a scenario that happens all the time. There is a truck on the left that suddenly cuts off the car, which is driving in the slow lane, so it can’t turn left, and braking alone will not prevent the collision:

The autopilot swerves into the hard shoulder to avoid crashing into the truck, which is fine as there is nothing in the shoulder. If there were, the autopilot would have to choose which thing to hit (and which person to kill).

*- Note that the driver was later killed in another collision, as the Tesla Autopilot is not as autonomous as he thought, but that’s another issue.

The chance that the car will be programmed with standard algorithms to handle these situations is approximately nil. Far more likely, a neural net will simply be trained to handle the problem - perhaps by watching thousands of videos of actual collisions and near misses, or through training by simulation.

The truth is, we’re not going to have the foggiest notion of exactly how the AI came up with the decision it did, just like we really have no idea how a computer is beating us at Go.

But this is going to take a radical social change towards acceptance of this kind of thing. When a human is faced with a split-second decision and chooses poorly, we have empathy and we understand that it does not speak to character or likelihood to repeat, so we grant a lot of social license to such people. We only punish them when we believe there was actual malice, intent to harm, or careless, reckless behavior.

But if someone has a dog jump in front of their car and they swerve and hit a child, we tend to understand and forgive. The question is, are we willing to grant such social license to an automaker’s AI? If the AI swerves to avoid a dog and hits a child, the automatic tendency is to believe that it was programmed to do so, or if it wasn’t, then someone was lax in allowing that program to work that way.

Or, we may just be squicked out about it. For some reason I believe that I am much more likely to be okay riding as a passenger with a fallible human driver who may have ten times the accident rate of an AI than riding with an AI that I knew could make a fatal decision for reasons of its own that I can’t understand - even if those reasons result in a national fatality rate much lower than for human drivers. Especially if on occasion those AIs choose to essentially sacrifice their passengers, and the reasons for doing so might not always be so crystal clear.

And tort law has some catching up to do. How liable is a manufacturer for the behavior of its AI, when the AI is a learning neural network instead of a collection of fixed algorithms? You can’t unit test a neural net. You can’t even explain why it makes the choices it makes - just an assurance that it’s making better, more efficient choices than people do. But that’s a hard thing to prove - especially when there’s a smoking wreckage with a few corpses in it.

Completely and totally wrong. Yes, neural nets are used for portions of the system, but no, they are not nearly as black-box as you think. This is not how the state-of-the-art solvers deployed by Google work.

That’s a pretty soft example. If the car had been driving more defensively it would have positioned itself so that it wasn’t immediately adjacent to the truck. When I drive, I’m hyperaware of that kind of situation for exactly that reason. In that scenario I would have drifted back as the truck merged with the adjacent lane, so that there was space for him to continue moving right if that’s what he wanted to do.

Edit: What I’m saying is that there was nothing “sudden” about what happened there. The possibility of the situation evolving the way it did was foreseeable from the moment the truck appeared on the on-ramp.

I spend a lot of time driving on roads with one and a half lanes. They have two lanes, but there are cars parked next to the curb that encroach on the outer lane. Cars driving in the outer lane sometimes have to drift into the next lane to go around the parked cars. Almost everyone drives so that no one is immediately adjacent to another car. This means that cars are free to merge in and out of the lanes as required to avoid the parked vehicles.

Avoiding accidents is not about swerving in the right way when a collision is about to occur. It’s about avoiding those situations in the first place.

The correct answer is, of course “kill zero people, inside or outside of the car”. The fact that anyone even considers any other answer is just yet another proof of how much we need self-driving cars: We’re such lousy drivers that we can’t even conceive of the possibility of a driver good enough to not kill anyone.

Whenever you posit a situation where someone’s death is unavoidable, you’re already past the key decision point. A competent (i.e., not human) driver would avoid getting into that situation to begin with.

Ah I see.
My opinion is pretty much what Richard Pearse said.

If I’m passing a car that’s parked in the hard shoulder, and I can’t change lanes, then I slow down. In fact the UK Highway Code explicitly says you’re supposed to do that.

If I’m passing a car parked in the hard shoulder right next to an exit then I have more reason to be cautious.

If I’m passing a car parked in the hard shoulder, right next to an exit, and a car is in the lane beside me, and it’s level, or close to level, with my car, then all my alarm bells are going off.

Once again I just don’t see the “Who do I kill?” dilemma coming up. Unless we’re talking about the “Egg On My Face AI System”, it’s got to be anticipating that danger and driving defensively.

Too complex.

Aim for the ugly one.

Regards,
Shodan

PS - not that it matters. Any algorithm that minimizes loss of life just leaves more people alive to sue the manufacturer.

So you think self-driving cars are going to limit to a speed of 10 mph or so?

The idea that these situations can be avoided seems ridiculous to me. Driving defensively will reduce the likelihood of a collision (and I am sure autonomous cars will be far, far better at it than humans). But the fact is these kinds of dilemmas happen every day on roads all over the world. As in the case I linked above, you are driving along at a safe speed in the slow lane with traffic on your left and (unlike the video above) a car or motorbike broken down in the hard shoulder on your right. Suddenly the truck on your left pulls into your lane without looking. Unless you have slowed from highway speed to a crawl (which no driver, AI or human, will do), your only options are hitting the truck or hitting the car.

The reason this issue has received such attention is that these ‘trolley dilemma’ situations happen all the time, but as a human driver you are lucky to have a fraction of a second to react, and all you can do is instinctively slam on the brakes or swerve. You aren’t thinking at all, let alone pondering the finer points of ethical theory.

But for a computer running billions of operations a second, that fraction of a second is plenty of time to run a fairly complex algorithm to decide what to do. And the programmers who wrote that algorithm had plenty of time (if they felt like it) to study the finer points of ethical theory.
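
To put a rough, made-up number on that: even a modest re-planning loop gets dozens of full decision cycles inside the window a human would spend just flinching.

    # Rough, illustrative arithmetic; the rates here are guesses, not measured specs.
    reaction_window_s = 0.3   # say, 300 ms between the cut-off and the impact
    planner_rate_hz = 100     # a re-planning loop running at 100 Hz
    paths_per_cycle = 1000    # candidate trajectories scored per cycle

    cycles = round(reaction_window_s * planner_rate_hz)
    print(cycles, cycles * paths_per_cycle)  # 30 cycles, ~30,000 scored trajectories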

That is what has changed. These trolley dilemma scenarios were completely hypothetical thought experiments designed to test theoretical human ethics. With the advent of autonomous cars, there are programmers whose job it is to “solve” these problems, and the decisions they make will cost lives.

The “occupants always come first” rule IS a solution to the trolley problem. And the companies that have that policy have clearly considered ethical issues when they came up with it.

The algorithm is also going to prioritize hitting the girl scout currently running away from the vehicle, as it minimizes the impact velocity.