Self driving cars, kill one of me or two of them?

I came across the article and it does raise an interesting point I hadn’t thought about. Situations like this happen all the time today, of course, but I don’t believe that in the vast majority of cases people really have the time to make a moral decision; it’s just a physical reaction.

What should we program cars to do when the physics make death or serious injury unavoidable? A computer has the speed to recognize the situation and make a decision. And of course, by “make the decision,” I mean it’s following the programming someone consciously decided to give it.

Should it protect the passengers in its car above all else?
Should the logic be dictated by government so we don’t get morbid brand competition about who makes a car that would run over a greater number of kindergartners to save the owner?
Should it make an instant judgement of who is most at fault for the unavoidable situation before taking action?

Sorry, I don’t have an opinion quite yet, but as a computer geek who has worked at some of these companies, and who has kicked around the idea of applying at one now, my mind is a bit blown. It is one of those hypotheticals made very real for somebody at a keyboard (and for people much higher up, of course).

I think I’ve outlined this several times, but there’s a solid algorithm for this:

a. Rank the paths the car can take by estimated risk to the occupants of the car.
b. Out of those, keep only the paths that are within about <some small number> of the risk of the best path on the list (that is, don’t even consider a path that is a lot riskier than the least risky path).
c. Out of the remaining paths, choose a path that doesn’t involve a collision with a human if there is such a path remaining
d. Out of the remaining paths, choose a path that doesn’t break the law if there is such a path remaining
e. Out of the remaining paths, choose the best path remaining

Done. TL;DR: the concept is that the occupant of the car always comes first, but the car will consider paths that are still safe, just a teensy bit less safe, in order to not hit humans outside the car and not break the law. A rough sketch of this in code is below.
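Here’s what I mean, as a minimal sketch of steps (a) through (e). Everything in it (the Path fields, the risk numbers, the tolerance) is made up for illustration; a real planner would be vastly more complicated, but the priority ordering is the same:

```python
from dataclasses import dataclass

@dataclass
class Path:
    occupant_risk: float   # estimated risk to the car's occupants (0.0 - 1.0)
    hits_human: bool       # does this path collide with a person outside the car?
    breaks_law: bool       # does this path violate a traffic law?

RISK_TOLERANCE = 0.02      # the "<some small number>" from step (b)

def choose_path(candidates: list[Path]) -> Path:
    # a. Rank candidate paths by estimated risk to the occupants.
    ranked = sorted(candidates, key=lambda p: p.occupant_risk)
    best_risk = ranked[0].occupant_risk

    # b. Keep only paths within RISK_TOLERANCE of the least risky path.
    shortlist = [p for p in ranked if p.occupant_risk <= best_risk + RISK_TOLERANCE]

    # c. Prefer paths that don't hit a human, if any remain.
    no_humans = [p for p in shortlist if not p.hits_human]
    if no_humans:
        shortlist = no_humans

    # d. Prefer paths that don't break the law, if any remain.
    legal = [p for p in shortlist if not p.breaks_law]
    if legal:
        shortlist = legal

    # e. Choose the best (lowest occupant-risk) path remaining.
    return min(shortlist, key=lambda p: p.occupant_risk)
```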

For instance, if the car has a choice between hitting a concrete wall and hitting a person at high speed, it’s going to mow down the person: the predicted danger to the occupants is lower, since a pedestrian is a squishier collision target. The SDC can even set things up so that if the car has an occupant only in the left front seat, the impact with the pedestrian is on the right side, so that when their body is predicted to fly through the windshield, it doesn’t harm the occupant.

At low speed, the risk either way to the occupant of the car is low, so it will choose the concrete wall.

If it’s at high speed but there’s a path that slaloms around the pedestrian using four-wheel steering, it’s going to do that.
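Feeding made-up numbers into the sketch above shows the same behavior: at high speed the wall is far riskier to the occupants, so the wall path doesn’t survive step (b); at low speed both paths are about equally safe, and step (c) rules out hitting the pedestrian. (The risk figures here are purely illustrative.)

```python
# High speed: the wall path is far riskier to the occupants, so it
# doesn't survive step (b) and the car hits the pedestrian.
high_speed = [Path(occupant_risk=0.60, hits_human=False, breaks_law=False),  # concrete wall
              Path(occupant_risk=0.05, hits_human=True,  breaks_law=False)]  # pedestrian
print(choose_path(high_speed).hits_human)   # True

# Low speed: both paths are about equally safe for the occupants,
# so step (c) filters out the pedestrian path and the car takes the wall.
low_speed = [Path(occupant_risk=0.03, hits_human=False, breaks_law=False),   # concrete wall
             Path(occupant_risk=0.02, hits_human=True,  breaks_law=False)]   # pedestrian
print(choose_path(low_speed).hits_human)    # False
```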

And so on.

We’re not going to program self-driving cars to do anything of the sort. There isn’t going to be an algorithm that takes as input a set of people and corresponding attributes and outputs who gets sacrificed over whom.

Instead they’re going to be programmed, to the best of the developer team’s abilities, to respond to scenarios as safely as possible. This may involve determining how to avoid an obstacle (do I stop as quickly as possible? Swerve into an open space, but make sure I don’t overextend the handling abilities of the car and lose control?).

There isn’t going to be an evaluation of the value of specific human lives in the algorithm. The inputs to its algorithms will consist only of physical attributes of the surroundings, along with attributes of the car’s handling, braking, and other performance characteristics.
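To make that concrete, here’s a sketch of the kind of input such a planner might consume; every name and field here is an assumption for illustration. Note there is nothing in it about who anyone is, just physics:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    position_m: tuple[float, float]    # location relative to the car, in metres
    velocity_mps: tuple[float, float]  # estimated velocity, metres per second
    extent_m: tuple[float, float]      # bounding-box size of the obstacle

@dataclass
class VehicleState:
    speed_mps: float                   # current speed
    max_braking_mps2: float            # braking capability
    max_lateral_accel_mps2: float      # cornering limit before losing control

def can_stop_before(obstacle: Obstacle, car: VehicleState) -> bool:
    # Basic physics: stopping distance v^2 / (2a) versus the gap ahead.
    stopping_distance = car.speed_mps ** 2 / (2 * car.max_braking_mps2)
    return stopping_distance < obstacle.position_m[0]
```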

The first time that algorithm swerves onto a crowded sidewalk full of school children to avoid a truck, I suspect the lawyers (and PR people) may have some issues with this (if they don’t already).

“Occupant always comes first” is already what several manufacturers have stated publicly. Remember, this is only hypothetical. It would have to be a scenario like a blind curve where the school kids got off their bus on the highway and are jumping into active lanes. It requires active actions by the people outside the car to set up a scenario where the SDC is traveling fast enough that it must choose, and where all paths that don’t involve hitting something are blocked off.

Or that truck is barreling down a residential street at 60 mph, out of control, certain to kill the occupants of the autonomous car if it hits them. In that case, jumping onto the sidewalk to avoid certain death (and clipping Timmy and Suzy) makes perfect sense.

It may go by another name, but there is going to be one. There is no way around it. There will have to be programming that takes the sensory inputs into account in unavoidable situations. Deciding to protect the passengers above all else is making that decision. Hitting a person instead of a concrete pillar is making that decision.

See, here’s what Mercedes says: Mercedes' Self-Driving Cars Will Save Passengers, Not Bystanders | Fortune

It’s not a hypothetical anymore. Somebody has to put in the programming for what would happen in that situation, even if it never actually happens, and that makes it a concrete problem.

It’s not that hypothetical, e.g. this example:

If there was a broken-down compact car (or someone with their head under the hood of a broken-down car) in the hard shoulder at that point, then it’s the “trolley problem.” Do you crash into the truck, or pull onto the shoulder?

Well, the algorithm I mentioned is one that cares about traffic laws dead last, after caring about occupant safety and not hitting people outside the car. So it’s going to pull onto the shoulder, pull into oncoming lanes if they are clear, and do all kinds of things if it is forced to. If there’s a truck barreling up behind it with failed brakes while it’s sitting at a stoplight, but there’s a path through the cross traffic that can be achieved with EV-level acceleration, it’s going to take it.

I was trying to say that the idea of “pulling onto the curb to avoid hitting a truck” is only going to happen if the velocity difference between the autonomous car and the truck is so huge that it’s a lethal danger. If the collision speed is only 30 mph, and this is a well-airbagged, nicely equipped autonomous EV, it’s not very risky to the occupants to hit the truck with the frunk as a crumple zone.

But if that truck is coming down the street at 60 mph, or there’s a loose concrete sewer pipe that broke free and is coming down the hill or something, that’s a totally different scenario. The fact that the only way to avoid that lethal hazard involves hitting a child is just something you have to deal with in court.

In the example it’s in the slow lane, and a truck cuts it off from the left. Its only option (other than hitting a truck at highway speeds) is to pull into the hard shoulder (which in this case is empty). If there is a small car (or a motorcycle) broken down in the hard shoulder, your algorithm would drive into it (computer vision is sophisticated enough today to tell that a truck would do more damage to the occupants than a small car).

That’s right. It would. Now, if its parallel, ASIC- or GPU-based planner is smart enough and a path exists where it’s possible to out-accelerate the truck, or somehow avoid hitting the small car/motorcycle, or brake and not hit very hard, it’ll do that. Sometimes reality gives you a shit sandwich, and I feel pretty strongly that if someone has to eat the shit sandwich, it should be the people outside the car.

There are greater societal implications. People will, for many years, have a choice between being a passenger in an autonomous vehicle and driving themselves. (Eventually the ‘drive yourself’ option will become increasingly annoying and more and more discouraged by the authorities, but that probably won’t happen for many decades.)

It is more likely that people will relax and let the computer take control if they know that the computer is going to do everything within its power, no matter what, to make sure they are alive at the end of the journey. Which in turn means fewer people killed in total, since the drop in traffic fatalities is going to be roughly proportional to the percentage of vehicles that are autonomous.

The issue is that the first time it does this (and the people outside the car outnumber the occupants of the car), there will be both huge press outrage and a massive lawsuit (against the company that wrote the code, not the driver). Whatever car companies claim now, I can see them changing their tune sharpish when that happens.

Are we talking about self-driving cars in the foreseeable future or something far out into the future when there’s hard AI? Because, for the foreseeable future, if the car thinks a crash may be imminent, I’m pretty sure it will be programmed to stop as quickly as possible while moving to open space if there is some. If not, it will just try and stop. It’s not going to, all Terminator-like, assess the situation and try and minimize casualties. It will just stop, while swerving if there’s space to swerve to.

Anything else is way, way off into the future, IMHO.

I am talking about a straightforward algorithm using existing tools. You can do a little more than just that.

I get in these random discussions at work all the time with my fellow consultants.

My argument was that the car should immediately use its facial recognition sensors to identify all the people in the area who could be potentially impacted by a… potential impact.

Then it should connect through wifi to social media to create a social, ethical and economic profile of all the individuals, using predictive analytics to extrapolate the future earnings and behaviors of everyone involved.

After networking with all of the other autonomous vehicles in the area, they should collectively choose paths that cause the least amount of damage to society, both economically and from a social impact perspective.

A more serious answer is: how frequently does a human-driven vehicle crash into people or other vehicles as the lesser of two evils? I mean intentionally, as opposed to the driver instinctively swerving to avoid one hazard without sensing the other, or simply losing control of the car completely. I’m guessing almost never. IMHO, if you have time to decide which thing you’re going to hit and the level of control to actually do it, you have the ability to avoid the crash altogether.

A good computer-controlled vehicle would have assessed the situation 100 times faster than a human, with a much greater level of spatial awareness, and would probably have already taken steps to avoid the crash before you even saw it coming.

Also, the answer is “B: Swerve”, then “A: Trees”. I can’t imagine under what circumstances “C: Crash headlong into a bus” would be appropriate. The car should avoid a collision if possible. It has no way to know that it will “send the bus out of control”, especially given that the bus is probably not in control anyway.

What can I say? I disagree with you. First of all, it would bring up all the liability issues above. Second, self-driving cars will be safer because they will be much more conservative in their driving. They are not going to try and BMW their way around a sliding truck while avoiding the motorcycle. They will come to a controlled stop as quickly as possible, and that’s really that. Luckily, they will leave enough space between them and the car in front to be able to stop. Third, you’re talking science fiction at this point, and it’s hilarious that you’re getting so specific with GPUs and path tracing.

I’m not sure this thread really belongs in GD. It seems much more like an IMHO thread to me – what cites can I use to prove my case? What can you use?

I never said they wouldn’t be conservative. In fact, the algorithm above is conservative. Approaching a blind curve at a slower speed, given there’s a nonzero chance of a hidden obstacle around that curve, is a safer path than going at high speed.

But you can’t develop software on wishful thinking. You must handle every case, even edge cases, and I was just explaining my opinion of an acceptable solution to these edge cases. I think a stunt-driving powerslide that avoids hitting anyone beats braking and killing somebody, but that’s just me.