Which will be possible as soon as we’ve nailed the trick of making everyone happy all the time, and the perpetual motion machine. The fact is that cars, whether driven by humans or computers, will kill people. Especially when computers have to drive around cars (and humans) piloted by unpredictable humans.
And because of the nature of computers, they will be able to make logical, split-second decisions about which humans to kill.
Here is a compilation of Teslas avoiding crashes. In many of these examples, a slightly different layout of the vehicles or the road would have resulted in a “trolley problem” decision.
The wild pig at around 1:10 is particularly telling (incidentally, the car clearly hits the pig, so be warned if you don’t want to see that). The car swerves all the way into the oncoming lane to try to avoid hitting the pig. If there had been a car coming, it would (I assume) have decided not to swerve and just hit the pig. That is the trolley problem.
The fact that it did actually hit the pig might be because it assumes the pig is a human, and so only predicts how far it will travel based on human speeds, not pig speeds.
Well, let’s put it this way: all the hypotheticals I’ve heard up to now rely on the driver making a poor decision to get into the situation in the first place. So if real dilemma situations do happen, the authors of these various articles and thread topics have made a poor attempt at finding examples of them.
No, still disagree. Let’s put it in more detail.
I’m driving on the highway at 65 mph. This would be crazy fast on a regular road, but on a highway, with extra wide lanes and very gentle bends, it’s a very comfortable speed. I have been seeing signs warning of an exit coming up for a number of miles. Finally the exit is approaching, and then I see there’s a car parked on the hard shoulder, right by the exit.
My first instinct is to change lanes; I don’t want to be passing a parked car at these speeds, especially next to an exit. But I check my mirror and see a truck in the lane beside me. I have no option but to stay in my lane, but I slow down considerably, say to 45 mph, as the highway code requires. This is not a crawl, but it feels like one on the highway.
Meanwhile, what’s happening with the truck? As long as we’re within a quarter mile of the exit and he’s anywhere near me, I’m going to keep an eye on him. I need to be prepared in case he comes across my path. However, the fact that I just decelerated considerably makes it effectively impossible that we could end up side by side without him seeing me first. But if my current speed would see me lined up with both the truck and the parked car at the same, or approximately the same, time, then I adjust accordingly; I’m not going to box myself in.
If the truck does come across the front of me, since I’m driving at what is a very low speed for the freeway, I’ve got plenty of time to choose a safe path or slow down yet further, since, like I say, highway lanes are huge.
A sensible driver in this situation simply does not invoke the dilemma.
OK, you’re approaching a highway underpass. An entire troop of Boy Scouts is hiding out of view behind the bridge pilings. As soon as you enter the constrained part of the underpass, where there is no shoulder and your car must choose between two lanes, the Boy Scouts jump out in front of your vehicle. There are several scouts of various body weights standing in both lanes.
Humans can jump that far, and humans can hide where sensors can’t see them. We can re-create this scenario in our simulated testing environment (and we should).
What would you expect the software to do in this situation?
In the example above, all those things would have happened at the same time. You would have seen the truck and the parked car/motorbike at the same time, and nowhere near soon enough to slow down that much (if you are slamming on the brakes every time you see a car on the hard shoulder, that is a bigger safety issue than AI ethics). That truck pulled through two lanes of traffic without looking; I am not sure a human driver would even have seen it in time to react at all.
Numerous other examples here. E.g. at 53 seconds the car pulls onto an off-ramp to avoid hitting the car in front. What if there is a car or motorcycle on the off-ramp?
How so? The truck suddenly turning is the only sudden thing. You know about the exit, the parked car, and the existence of the truck a long way in advance.
Who said slam on the brakes? I said slow down, as is required when passing a parked vehicle on the highway.
We can create a million examples of an avoidable accident. We can also create a million examples of an unavoidable accident. And that’s what good programming is: trying to come up with the “right” action for every possible input. Creating a case that avoids the dilemma doesn’t address what you should do in the case where there is a dilemma.
“Humans can jump this far” is easy to say, but we actually have to put it together in a scenario. If the opening is very narrow, then I’d be slowing down. If it’s very wide, then a human would have to really be going some to be able to run out without me seeing them first. Either way, I think a human would have difficulty completely catching me off guard, even on purpose.
But in answer to your question, in the event that the software cannot swerve to avoid *any* collision, it will just brake as hard as it can. There is not going to be some algorithm saying Driver > Fat kid > Thin kid.
(Although, with how much this dilemma gets discussed, I think car manufacturers will have to do *something* to appease the people worried that AI might “prioritize” pedestrians’ lives over the occupants’. It doesn’t make sense to write software that deliberately weighs lives, but manufacturers will at least have to pretend to for a while.)
Has anyone actually encountered the trolley problem in real life? With real trolleys or a car or some other vehicle? I just don’t think this is a scenario that comes up. Sure, metaphorically, but that’s not what I’m talking about. I’m talking about literally steering a large object between two definitely fatal paths, the only difference between which is the number of expected fatalities.
If this isn’t something human drivers ever have to do, why do we expect robotic cars – which are supposed to be many orders of magnitude safer than us skittish meatbags – to encounter this problem with enough regularity for it to become a major ethical issue for its programmers?
It will be spelled out as something like:
If no safe path, then:
Driver > others
Living object > inanimate object
Human > animal
Small object > large object
And hundreds if not thousands of other considerations. But you can’t responsibly program the system without putting it in there in one abstraction or another (something like the sketch below).
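Purely for illustration, here is a crude sketch of what that kind of abstraction could look like in code. The class names, priority numbers, and the idea of a flat lookup table are all made up; no actual manufacturer’s logic is being quoted:

```python
# Illustrative only: a crude priority table for choosing among unavoidable
# collision targets. A real system would encode this very differently, if at all.
PRIORITY = {
    "occupant": 0,      # lowest number = most protected
    "human": 1,
    "animal": 2,
    "small_object": 3,
    "large_object": 4,  # highest number = least protected
}

def pick_path(candidate_paths):
    """candidate_paths: list of (path_name, worst_thing_it_hits).
    Picks the path whose worst collision target is the least protected."""
    return max(candidate_paths, key=lambda path: PRIORITY[path[1]])

# Example: swerving hits a barrier, braking straight hits an animal.
print(pick_path([("swerve_left", "large_object"), ("brake_straight", "animal")]))
```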
Pretty sure there already is. What you do is you weight the density of an object times the speed squared, times the probability gradient of the object, and you choose the path that has the least impingement.
A person has a lower density than a lamp post or a bridge piling, so right there it’s a better impact target. If a kid is slightly farther away, hitting that one is better because the speed squared term is smaller. Skidding across in a diagonal to maximize braking distance is also better. So no, your algorithm is suboptimal and I’m glad you’re not employed in the autonomous car industry.
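To make that concrete, here is a rough sketch of the kind of cost function I’m describing. The `Obstacle` fields and every number below are made up for illustration, not anything a real car uses:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    density: float          # rough proxy for how hard the object is (kg/m^3)
    hit_probability: float  # how likely the path actually intersects it

def impingement(obstacle: Obstacle, impact_speed: float) -> float:
    # density * speed^2 * probability, as described above
    return obstacle.density * impact_speed ** 2 * obstacle.hit_probability

def best_path(paths):
    """paths: list of (name, obstacle, predicted impact speed). Lowest cost wins."""
    return min(paths, key=lambda p: impingement(p[1], p[2]))

# Illustrative numbers only: the lamp post is far denser, so it scores worst;
# the farther kid is reached at a lower speed after braking, so it scores best.
paths = [
    ("lamp_post", Obstacle(density=7800, hit_probability=1.0), 20.0),
    ("near_kid",  Obstacle(density=1000, hit_probability=1.0), 20.0),
    ("far_kid",   Obstacle(density=1000, hit_probability=1.0), 12.0),
]
print(best_path(paths))
```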
Because that’s what programming is. You have to give it a set of instructions for how to deal with every possible case of input data, before it hits the road. And every case you don’t think to put in will still result in something happening. It’s an ethical problem because it can happen, and you have to make a decision for it ahead of time rather than dismissing it, like the trolley problem, and saying “It will never happen to me, so I don’t have to make a decision.”
It doesn’t matter what the car does if it has to choose between killing the occupant and killing those outside of it. It could kill both and self driving cars would still be vastly better than the alternative.
The true “trolley problem” wrt self driving cars isn’t “passenger vs people in crosswalk” it’s “30,000 traffic deaths a year due to human error” vs “much much less than that because computers are better at driving.”
The point is that humans hardly ever encounter the trolley problem, or at least they do encounter the trolley problem but are required to react so quickly there is no way in hell they are going to consider the ethical implications of what they are doing. I am sure there are plenty of people out there who lived to regret the fact that their “reptile brain” said “swerve” and as a result they caused other people’s deaths. Many of those people would probably have taken a different path if they had the luxury of carefully considering the implications of what they were doing. The “trolley problem” WAS a completely hypothetical thought experiment, not a real problem for human beings.
But for a computer, that fraction of a second is plenty of time to run an involved analysis of what it should do. And the people who write that algorithm have all the time in the world to consider its ethical implications.
Have we established what humans should do in the scenario?
And if we have established that, have we also established that, if humans do the wrong thing under the prescribed circumstances, they would face substantial penalties for their poor choice, and that, had they chosen the right thing, they would be free from penalties?
Just because you can come up with an algorithm that does “whatever a human typically does” doesn’t make it optimal. An optimal solution is the one that does the least damage, and if it must do damage, it prioritizes, smoothly and cleanly. It is also aware of the uncertainty from its own sensors and vision systems and doesn’t take unnecessary risks. But, on the other hand, it does take risks when the math is in favor of them. (It’s better to floor it into cross traffic if the risk is smaller than the risk of getting hit by an 18-wheeler approaching from the rear with failed brakes.)
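As a toy illustration of “take risks when the math is in favor of them”, each option could be scored by expected damage rather than worst case. All of the probabilities and damage numbers here are made-up placeholders:

```python
def least_expected_damage(options):
    """options: name -> list of (probability, damage) outcomes.
    Returns the option with the lowest probability-weighted damage."""
    def score(outcomes):
        return sum(p * d for p, d in outcomes)
    return min(options.items(), key=lambda kv: score(kv[1]))[0]

# Stay put: 60% chance the truck with failed brakes hits us hard.
# Floor it into cross traffic: 10% chance of a moderate collision.
options = {
    "stay":     [(0.6, 100.0), (0.4, 0.0)],
    "floor_it": [(0.1, 40.0),  (0.9, 0.0)],
}
print(least_expected_damage(options))  # -> "floor_it"
```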
Yes, it should do what causes the least damage. In this case, the thing that causes the least damage is whatever gets self driving cars adopted the fastest, regardless of what they actually do in a trolley-problem scenario. If people won’t buy them when the car is programmed to kill the driver in that case, then the cars should be programmed to plow through whatever crowd is in their way. (1 crowd)*(# of trolley problems per year) << (# of people that die in accidents due to human error per year).
This is actually the bit that I do find a bit worrying. I don’t think developers are sitting round stroking their beards discussing normative ethical theory. I DO think that lawyers are asking this exact question with respect to both civil and criminal penalties. And what they consider the best way to reduce “harm”, based on that answer, probably has very little to do with ethics as most people know it. Should it favour hitting a girl scout over a rich father, since the resulting damages would be less?
Basically, instead of thinking of the problem as a series of edge cases, and then insisting that some edge cases are so unlikely that they will never happen in the next 30 years with hundreds of millions of autonomous vehicles on the road, just look at the algorithm.
You want to get the outcome of:
1. minimize damage to the occupant of the vehicle
2. minimize damage to other people
3. minimize damage to the vehicle
4. minimize damage to animals outside the vehicle
5. minimize damage to objects outside the vehicle
6. minimize violations of the law
Obviously, though, these objectives conflict. For instance, crossing a double white line even briefly to avoid hitting road debris that might puncture a tire is a violation of the law. But if there is no traffic in the other lane for a long sight distance, this is probably a good path to take. If you’re really good, though, and have very precise modeling of where each tire of your vehicle falls and the distances between your undercarriage and the ground, you might be able to plot a course right through the road debris where none of your tires actually roll over anything that looks sharp. So that’s an even better path.
So one solution to these conflicting objectives is just hierarchical minimization. Find the path that has the best outcome for the first objective in the hierarchy; then keep that path, and any paths only slightly worse than it, and repeat for each objective down the list. That makes for a nice, clean algorithm that is likely to actually work. And the same code that decided to cross a double white line to avoid some debris when the other lane was clear (but didn’t when it wasn’t clear of traffic) is the code that decides to bumper-bump the skinnier girl scout who is farther away in the middle of the road, if that ever happens.
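Here is a sketch of that hierarchical minimization, assuming (purely for illustration) that each candidate path has already been scored against each objective; the cost values and the tolerance are invented:

```python
def hierarchical_minimize(paths, objectives, tolerance=0.05):
    """paths: list of dicts mapping objective name -> cost (lower is better).
    objectives: names in priority order (occupant first, law violations last).
    At each level, keep the best path and anything within `tolerance` of it,
    then break remaining ties with the next objective down the list."""
    survivors = list(paths)
    for obj in objectives:
        best = min(p[obj] for p in survivors)
        survivors = [p for p in survivors if p[obj] <= best + tolerance]
        if len(survivors) == 1:
            break
    return survivors[0]

# Illustrative costs only: staying in lane rolls over sharp debris (vehicle cost),
# crossing the double white line avoids it but breaks a rule (law cost),
# threading the debris between the tires avoids both.
paths = [
    {"name": "stay_in_lane",  "occupant": 0.0, "others": 0.0, "vehicle": 0.8, "law": 0.0},
    {"name": "cross_line",    "occupant": 0.0, "others": 0.0, "vehicle": 0.0, "law": 0.2},
    {"name": "thread_debris", "occupant": 0.0, "others": 0.0, "vehicle": 0.0, "law": 0.0},
]
print(hierarchical_minimize(paths, ["occupant", "others", "vehicle", "law"]))
```

With these made-up numbers, the same loop that prefers threading the debris over crossing the line (smaller law cost at equal vehicle cost) is the loop that would pick between the two lanes of scouts, if such scores ever got fed into it.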
How do you define “work”? This is an ethical problem. Saying “always favor the people in the car over those outside it” is an ethical decision. It is not necessarily an unethical, bad, or evil one, but it is an ethical decision. And it is one the car companies will have to defend in court (and in the court of public opinion) the first time it results in deaths.
There’s another consideration: brand value. This is why I think “the driver always comes first” is optimal. It devalues your brand as a whole and drives consumers to competing brands if they know the driver doesn’t always come first. This dwarfs the exact cost of any single lawsuit.
And the best we can do with present computer vision tech is differentiate between “probably a human” and “probably an obstacle”. We cannot currently assess someone’s approximate net worth or age from the camera of a moving car that has finite mobile computing power and only a fraction of a second to decide.
Note that this particular object detector/classifier is state of the art and is more advanced than the solution Waymo is likely using at the moment. It also happens to run fast enough to use in an autonomous car, processing hundreds of image frames per second on GPU hardware.
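To make the “probably a human” vs “probably an obstacle” point concrete, here is a toy sketch of the coarse call that kind of detector output allows the planner to make. The class labels and the confidence threshold are assumptions; this is not quoting any real detector’s API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "person", "car", "unknown", from a generic detector
    confidence: float # classifier confidence in [0, 1]

def coarse_category(det: Detection, threshold: float = 0.5) -> str:
    # The most a fast, in-car pipeline can realistically hand the planner:
    # a coarse "probably a human" vs "probably an obstacle" call,
    # not net worth, age, or anything of the sort.
    if det.label == "person" and det.confidence >= threshold:
        return "probably_a_human"
    return "probably_an_obstacle"

print(coarse_category(Detection("person", 0.82)))   # probably_a_human
print(coarse_category(Detection("mailbox", 0.91)))  # probably_an_obstacle
```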