Autonomous Car - Should it kill you, or two pedestrians?

I agree with those who say autonomous vehicles should preserve their occupants first and foremost. I certainly would not get into a car that is programmed to, let’s say, drive over the side of a cliff in order to avoid a mannequin dropped on the road.

When (and if) totally autonomous (completely hands and feet-free) cars become ubiquitous on our roadways, do you anticipate laws and the insurance industry changing with regard to liability in the event of crashes?

Examples:
If two or more autonomous cars are involved in a crash, assuming all involved vehicles are well-maintained, could and should fault be assigned to the person in the driver’s seat of the vehicle that appears (according to the police report) to be most at fault? Should that driver’s insurance rates go up as a result?

If totally hands-free operation becomes legal, should DUI charges be eliminated (or carry less penalty) for inebriated passengers sitting in the driver’s seat? What if all the passengers are drunk, but no one was sitting in the driver’s seat?

[del] If an autonomous car swerves to avoid hitting a bag of toy kittens and instead runs over a bag of real kittens…[/del]

If totally autonomous vehicles become legal, should accident liability become no-fault by default? Or should liability be assigned to the first vehicle involved in a multi-car crash? Should the car manufacturer assume any liability if the software is determined to be functional and compliant with industry standards?

The autonomous car should be killing all the pedestrians before picking me up. Then there will be no dilemma. Problem solved. Society fixed. All the Nobel medals to me.

Yes, of course. Note that in this scenario one of the likely things to happen is that many people won’t own an autonomous car; they’ll just Uber one.

In the event of a crash, there’ll be a wealth of data on what happened: a “black box” of sensor data in every direction. We can assign fault very precisely. And of course the occupants will be 0% at fault (unless they actually own the car and skipped some essential maintenance).
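Purely as an illustration of what that black box might hold (the field names here are my own invention, not any manufacturer’s actual format), a single frame of such a record could look something like this in Python:

[code]
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorFrame:
    """One timestamped snapshot from a hypothetical crash recorder."""
    timestamp_s: float          # seconds since trip start
    speed_mps: float            # vehicle speed in metres per second
    steering_angle_deg: float   # commanded steering angle
    brake_pct: float            # brake actuation, 0-100
    nearby_objects: List[str] = field(default_factory=list)  # e.g. ["pedestrian@12m"]

# A crash-reconstruction tool could simply replay the frames leading up to impact:
log = [
    SensorFrame(101.2, 13.4, 0.0, 0.0, ["car@30m"]),
    SensorFrame(101.7, 13.4, -2.5, 80.0, ["car@9m"]),
]
for frame in log:
    print(frame)
[/code]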

In your original scenario cars would likely not even have a driver’s seat.

Between now and then, though, we’ll see a gradual changing of the law. Right now there will probably need to be someone in the driver’s seat, and she’ll need to be sober. And a crash may be dealt with like any DUI (because they’ll say that even if the software was at fault, perhaps she could have realized in time and corrected if she wasn’t drunk).

I think this is the real dilemma faced by autonomous cars. The road rules are a set of rigid procedures that have to be used by people operating in a very flexible and changeable environment. It is not practical to follow every rule to the letter 100% of the time. If you design your autonomous vehicle to be strictly law abiding then there may be times that it drives so impractically that it becomes a serious burden on the traffic flow. On the other hand if you design it to be flexible with the rules (as I understand is currently the case) then where does the manufacturer stand legally?

If it’s a choice between killing me or two pedestrians, it’s damned sure not going to be me. Sucks to be the pedestrians, but them’s the breaks.

Let Spock program it

The needs of the many outweigh the needs of the few
Or the one.

R.I.P. sole passenger

You obviously play tic tac toe. You need to learn to play chess. Or, ref Spock, 3D chess.

Society’s need for autonomous cars is the true need of the many. That’s what saves 30K lives per year in the US alone, a fairly safe country as automobile deaths go.

To get the public to accept autonomous cars, they need to believe the needs of occupant(s) outrank the needs of any bystanders or occupants of other cars. Absent that they will not buy them, use them, support them, or permit their politicians to approve them.

So if we want to meet the *real* needs of the many, the cars will be (mildly) selfish about their occupants.

Q to QB3-level 2 for checkmate.

In response to the OP, autonomous vehicles will prioritize minimizing injuries to the vehicle occupants. I am a scientist involved in hardware development related to autonomous driving and advanced driver assistance system components. There are pedestrian protection technologies in development (and possibly on public roads already in limited cases) with hood lifters and external airbags and more ductile crumple zones. I am not intimately involved in the software and algorithm end, but I understand that avoiding larger objects is a priority. I do not expect that autonomous vehicles will ever be perfect in preventing fatalities, but they will be an enormous improvement over human Earthling brain guided vehicles. Typically, over 40k people are killed every year on the road just in the USA.

But “prioritize” implies that the software is aware of and factoring in injuries to other people. I doubt this very much. The software no more needs to think “him or me” than I do right now.

Even playing devil’s advocate, the only time I can think the software might need to make a life or death judgement is to differentiate between a small animal in the road and a very small infant or baby – obviously in the former case it might decide the safest thing is to hit the object, but in the latter it should treat the object as though hitting it would cause a fatal collision (because it would).
It won’t decide to throw the car off the side of a mountain road though, because like I say, there’s no reason for the car to put itself in the position of needing to do that.
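Just to make that concrete (the labels and thresholds here are entirely made up, not taken from any real perception stack), the conservative rule I have in mind would look roughly like this:

[code]
def collision_response(label: str, confidence: float) -> str:
    """Toy rule: err on the side of treating an ambiguous small object as a person.
    Labels and the 0.9 threshold are illustrative only."""
    treat_as_fatal_to_hit = {"pedestrian", "child", "cyclist"}
    if label in treat_as_fatal_to_hit:
        return "avoid"           # brake/steer as if hitting it would be fatal
    if confidence < 0.9:
        return "avoid"           # not sure it's harmless debris, so assume it isn't
    return "brake_if_safe"       # e.g. a plastic bag: slow down, don't swerve off a cliff

print(collision_response("plastic_bag", 0.95))  # brake_if_safe
print(collision_response("plastic_bag", 0.60))  # avoid (could be something else)
print(collision_response("child", 0.99))        # avoid
[/code]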

It’s the closest to what I was thinking as well.

The thing is, although some good serious posts and thoughts appear in this thread, most people are like the OP: starting AT the moment of decision, failing to realize that the real story AND the “solution” to the concern lie in what leads up to that moment.

What is already happening with autonomous vehicles (such as commercial jetliners) is that the programming is designed to see to it that the vehicle does not arrive at a point where such a decision is necessary.

In any scenario that a programmer would have an automated vehicle get itself into, the car would ALREADY have been slowed down to allow for it to stop, almost literally on a dime.

It’s actually identical to what we ALREADY do, through traffic laws.

After all, what traffic laws are, is PROGRAMMING. They are written to guide drivers to behave at all times, in such a manner as to minimize the possibility of an accident even occurring in the first place.

The same thing applies to autonomous vehicles. In other words, the programming which LED the vehicle to a place where pedestrians are present, would have included slowing down BECAUSE of the pedestrians, in order to deal with them.
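To put rough numbers on “slowing down BECAUSE of the pedestrians” (this is plain stopping-distance physics; the deceleration and reaction-time figures are generic assumptions, not any vendor’s planner), the car just caps its speed so it can always stop within the clear distance ahead:

[code]
import math

def max_safe_speed(clear_distance_m: float,
                   reaction_time_s: float = 0.5,
                   decel_mps2: float = 6.0) -> float:
    """Highest speed (m/s) from which the car can still stop within
    clear_distance_m, allowing for a reaction delay and steady braking.
    Solves v*t + v^2 / (2*a) = d for v."""
    a, t, d = decel_mps2, reaction_time_s, clear_distance_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# Pedestrian detected 15 m ahead: the planner would already be down to roughly
print(f"{max_safe_speed(15.0) * 3.6:.0f} km/h")   # about 39 km/h
[/code]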

And similarly, the pedestrians would have been “programmed” by laws not to put themselves in front of a moving vehicle with the right of way.

Thus, I side with the people who are saying that no “ethics” programming is required, because the “ethics” are already built into the overall transportation system itself.

You guys are assuming that autonomous cars are a really big computer program with a bunch of special cases, and that someone is programming all of these laws and decisions into them. That’s only somewhat accurate. While building self-driving cars is an immensely complex task with significant software engineering behind it (including some special cases overriding the underlying AI), most of it is machine learning.

Specifically, a lot of it is based on deep learning, reinforcement learning, and imitation learning techniques. That is, the car will be fed an immense amount of training scenarios done by good drivers in simulators (or driving the car itself in real life), as well as the AI being allowed to make mistakes by itself in controlled environments and simulators and given feedback.
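For a sense of what “fed an immense amount of training scenarios” means in code (this is a heavily stripped-down behavioural-cloning sketch with random stand-in data, nothing like a production pipeline), the imitation-learning piece is basically supervised regression onto what the demonstrator drivers did:

[code]
import torch
import torch.nn as nn

# Stand-in data: 1000 "sensor snapshots" (64 features each) plus the steering and
# throttle a human demonstrator chose in each one. Real systems use camera/lidar.
obs = torch.randn(1000, 64)
expert_actions = torch.randn(1000, 2)   # [steering, throttle]

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavioural cloning: fit the policy to reproduce the expert's actions.
for epoch in range(20):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained policy maps a new snapshot to an action. Note that nothing in the
# loss ever mentions occupants or pedestrians; the behaviour falls out of the data.
print(policy(torch.randn(1, 64)))
[/code]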

It’s unlikely in most of these scenarios that even the lead engineers on the projects could predict exactly what the car would do; it depends on how it interprets its sensor input, how that relates to the training data it’s been given, and the mistakes it’s made itself. Hell, it’s such a complicated recognition problem that it’s highly unlikely the engineers could force the car to recognize this scenario with enough reliability to even tell it to do anything.

The only way to know what the car would do would be to throw it in that situation, and even then, it’s possible that if done at another time in slightly different conditions it would act completely differently based on any number of factors.

It’s likely the cars would generally choose to preserve the driver’s life, but that’s more my educated guess, based on how RL programs are trained, than it is a certainty.

[QUOTE]
But “prioritize” implies that the software is aware of and factoring in injuries to other people. I doubt this very much. The software no more needs to think “him or me” than I do right now.

Even playing devil’s advocate, the only time I can think the software might need to make a life or death judgement is to differentiate between a small animal in the road and a very small infant or baby – obviously in the former case it might decide the safest thing is to hit the object, but in the latter it should treat the object as though hitting it would cause a fatal collision (because it would).
It won’t decide to throw the car off the side of a mountain road though, because like I say, there’s no reason for the car to put itself in the position of needing to do that.
[/QUOTE]
I do not intend to imply this, though I could have worded it better. If I understand correctly: if two pedestrians dart out into the path of an autonomous vehicle, the vehicle will brake hard until stopping, or brake hard with a swerve, depending on whether a clear swerve path is available and whether the straight-ahead stopping distance is deemed insufficient. It is in the best interest of both the pedestrians and the single occupant to avoid the collision or, if collision is imminent, to minimize delta-velocity upon impact. I am not aware of any autonomous systems that weigh the consequences of various outcomes and calculate a move that puts the occupant(s) at greater risk in favor of avoiding pedestrians.
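As a toy version of that logic (the numbers and the decision structure are my own illustration, not how any real planner is actually written), the choice boils down to comparing stopping distance against the distance to the pedestrians:

[code]
def stopping_distance_m(speed_mps: float, decel_mps2: float = 6.0,
                        reaction_s: float = 0.2) -> float:
    """Reaction roll-out plus braking distance at constant deceleration."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def respond(speed_mps: float, obstacle_distance_m: float,
            clear_swerve_path: bool) -> str:
    """Brake hard; add a swerve only if braking alone won't stop in time
    AND an unobstructed escape path exists."""
    if stopping_distance_m(speed_mps) <= obstacle_distance_m:
        return "brake"                                  # hard braking is enough
    if clear_swerve_path:
        return "brake + swerve"                         # scrub speed and steer around
    return "brake (impact likely, minimize delta-v)"    # no safe escape path

print(respond(13.9, 25.0, clear_swerve_path=False))     # ~50 km/h, 25 m ahead: brake
print(respond(13.9, 12.0, clear_swerve_path=True))      # brake + swerve
[/code]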

Yeah, this is pretty much a non-issue. In order to make the determination of who/how many to injure or kill, the AI would have to actually calculate the potential injuries and deaths of any given action. That’s something even humans today don’t (and can’t) do. By the time we get artificial sensors and intelligence that can do that, we will have gone beyond what humans are capable of deciding. At that point, it won’t be us humans deciding on the ethical preferences. Like it or not, it’ll be the super-human AIs making those decisions. We’ll just be going along for the ride.