No, I get all that. But from an enlightened self-interest viewpoint, I think you’re wrong. In the abstract, I propose a law: what if we set up a system where we kill one person painlessly and in exchange save five people from a painful death? Not knowing which side of this equation I’d be on, I only know that I’m likelier to be on the “getting saved” side, so self-interest means I’d opt into that system. It’s only if you put your thumb on the scale, by assuming you’d be one of the killed people (or that you wouldn’t be one of the saved people), that enlightened self-interest leads to a different result.
This is one of the reasons that I think enlightened self-interest is a pretty messed up ethical approach.
My question is: is this a single-shot thing, where I’m either the one or one of the five? Or are we going to keep doing this, day after day after day?
Agreeing to kill someone so we have better odds of living in this world is one thing. Agreeing to kill someone so we have better odds of living in the chop-chop world is a different calculation.
I don’t know that it matters. In either case, you’re five times likelier to be saved by the scenario than harmed by it, unless you know your health status in advance.
Run the numbers. Let’s say that, in a population of 100,000, 0.1% of people will need organ transplants to avoid a gruesome death over the course of a hundred-year lifespan. That’s a hundred people. At five recipients per donor, that means that over those hundred years you’ll need to kill 20 people to supply the organs. If you’re considering this system as an ongoing approach, your odds are better with the system than without.
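Just to make that arithmetic explicit, here’s a quick back-of-the-envelope check. It assumes, as in the classic transplant version, that one donor’s organs save five recipients; the numbers are the illustrative ones above, not real epidemiology.

```python
# Back-of-the-envelope check of the odds above. Assumes, as in the classic
# transplant variant, that each unwilling donor supplies organs for five
# recipients. Numbers are illustrative, not real epidemiology.
population = 100_000
transplant_rate = 0.001            # 0.1% need transplants over a 100-year lifespan
recipients_per_donor = 5

saved = int(population * transplant_rate)     # 100 people saved
killed = saved // recipients_per_donor        # 20 people chopped up

print(f"Chance of being saved:  {saved / population:.3%}")    # 0.100%
print(f"Chance of being killed: {killed / population:.3%}")   # 0.020%
```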
My point is: I don’t want to live in a world where people are allowed to chop me up. I vote accordingly; other people seem to do that as well, which explains the laws on the books. I want to live in a world where I go about my business — chopping no one up — and, if anyone tries to chop me up, society steps in to stop them while relaying my hey, that’s not okay message.
Yes, if you tell me I’ll be one of the six in a one-time scenario where I might get chopped up or I might get away with chopping a guy up for my benefit, and let us never speak of it again, and I go back to living in this here world, that’s one thing. But if you’re telling me we’re deciding on a new rule for society — that this isn’t going to be a one-time scenario, but that I’m now casting a vote on whether to usher in an ongoing Can-Be-Chopped-Up-At-Any-Time world that I’ll be living in — then things seem to be different.
The logic here is nonsensical to me. A huge part of writing computer code is recovering from situations where errors have happened.
This code has absolutely already been written in a self-driving car somewhere, even if it’s only in how much weight is given, during training, to an outcome where a pedestrian is killed versus the occupants of the car. Which totally has happened, and that right there is programming the car to solve the trolley problem. In the situation where the car has to choose between running over a bunch of pedestrians and hitting an oncoming truck, it will think carefully (as in, evaluate a neural network) and decide between A and B.
In fact, I wouldn’t be surprised if the lawyers insist that it isn’t left to the whims of a neural network and have more explicit code to decide what to do if it detects a situation like this.
I’ll bet it hasn’t.
Algorithms in charge of life-or-death machines have been in place around the world for decades. They never weigh up “kill this person” versus “kill that person”, because for one thing it would be a legal nightmare. They just have general principles, and when all else fails, shut down / cool off / quench. The driving equivalent is “if there are no swerve paths, brake as hard as you can without skidding”.
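To be clear about what I mean by “general principles”, here’s a minimal sketch of that kind of fallback logic. It’s entirely made up for illustration; the function name, inputs, and numbers are my own inventions, not anyone’s actual control code:

```python
# A minimal sketch of a "general principles" fallback: no weighing of lives,
# just a fixed priority of safe options. Invented for illustration only.

def emergency_maneuver(swerve_paths_clear: list[bool], max_safe_decel: float) -> dict:
    """Choose an action without ever comparing 'this person' vs 'that person'."""
    if any(swerve_paths_clear):
        # Prefer an open escape path if one exists.
        lane_offset = swerve_paths_clear.index(True)
        return {"action": "swerve", "lane_offset": lane_offset, "decel": max_safe_decel * 0.7}
    # No out: brake as hard as possible without skidding.
    return {"action": "brake_straight", "decel": max_safe_decel}

# Example: both adjacent paths blocked -> brake in lane.
print(emergency_maneuver([False, False], max_safe_decel=8.0))
```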
Accidents have been more along the lines of “the glare of sunlight on a car made the AI think there was no object there” – not the solution to a dilemma. Self-driving software is just not up to the task yet.
My point is that a good human driver will slow down in more constrained situations. If the lanes either side of me are blocked with queues of cars, I will slow down, even though my lane is completely clear, because I have no “out” if a car were to veer in front of me.
I simply shouldn’t be in a situation where the only options are to fatally hit someone or crash at a speed that’s likely to kill me.
I don’t think either of your hypotheses are as inconsistent with utilitarianism as you think they are. I am a utilitarian. I also think the trolley problem is garbage because, like so many overly-simplistic quasi-moral problems, it encourages us to reduce utilitarianism down to simply evaluating how many might live and how many might die, in a vacuum. As if there is no society to speak of. As if individual agency and due process—the fundamental right to justification if one’s agency is to be impeded—are of no weight. As if it would have no effect at all on the emotional well-being of humans to be reduced to mere numbers of lives. As if the emotional well-being of humans is of no weight at all. As if it’s just a question of how many people die in this one instance, not how this kind of decision-making could poison society in general and make life worse if regularly applied.
But when it comes down to it, while I reject the Trolley Problem (because I don’t see it as a particularly useful basis for discussing human morality), I can’t reject utilitarianism. Because if the alternative is that we adopt or reject rules regardless of how useful they might be, then what is the actual point? I have no use for a rule that is not useful to actual humans living in a society. Why should anyone else?
The problem with the trolley problem is that it does not yield useful rules because of how simplistic it is relative to moral decisions we actual humans might be expected to make in the real world, and the consequences thereof.
Everything you say there is true; it is also a feature, not a bug.
Again, the trolley problem is a thought experiment meant to compare two things: outcome and direct involvement. Is taking the direct action of killing someone worthwhile if it saves a greater number of lives?
Everything else you mentioned is super duper important when making a real ethical decision, which is great and true and fantastic. But right now we are trying to compare two factors, A and B, and so have come up with a contrived situation that allows us to only have A and B be relevant so we can try to think about which is more important. And your response is “What about factors C through Z?”. Yes, those factors matter in the real world, which is precisely why we aren’t using a concrete example where all those factors matter, but instead a super simplified one.
I, too, would prefer to take my chances of dying of kidney failure and not have to worry that I will randomly be called up to be cut up for the benefit of others. Even if there’s a much larger risk of dying of kidney failure.
I think you’re right that we have a stronger aversion to killing someone—that is, taking some action that will result in a person’s death—than we do to allowing someone to die by not taking some action that we could have taken, and that this is a factor in the trolley problem.
Would one of Asimov’s robots weigh these equally? Asimov’s first law says “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Would a robot faced with the trolley problem pull the lever? Or would it be torn because it would have to violate the First Law no matter which choice it made?
The self-driving car legalization question is probably the closest any of us will get to an actual implementation of the trolley scenario. There will come a time, it might even be now for highway driving, when autonomous vehicles are safer (kill fewer people) than human drivers. But they will kill different people. And we’ll know that, because we will see videos of a car killing a person in a way that no human driver would ever do. And the general public will watch that video and be appalled. “My 16-year-old who just got his license wouldn’t make that mistake if he were drunk.”
At what point do we legalize autonomous vehicles on the road? At what point do we require them (except in a list of defined uncommon circumstances)? How much evidence do we need? How viciously will juries punish the owner/manufacturer of the car that DOES kill someone “new”?
These are real policy questions that are closely related to the trolley problem.
If I remember my Asimov right, the answer is likely “yes, although there’s a good chance it will fry its own brain when doing so”.
Or, if it’s an advanced enough model [Caves of Steel series spoiler], the robot might derive the existence of a ZEROTH law:
A robot may not injure humanity or, through inaction, allow humanity to come to harm
In fact a robot uses the Zeroth Law to justify setting off events that would eventually make Earth uninhabitable, because he believes it would push humanity to expand into the Galaxy.
Did you mean “the 1 person currently safe, the 4-5 about to be run over if you do nothing”?
Another variation is if the people who are about to be run over are the SAME race as you, while the one person is a different race.
Assuming one has enough time and presence of mind to react, ideally the same reaction / thought process would occur and you’d save the group at the expense of the one, but your version (save the group that is different, kill the one that looks like you) might result in fewer switches being pulled than my version (save the group like you, kill the different one).
They didn’t use to! Programming a motor controller to cut off if the resistance goes above a threshold is not programming a computer to solve the trolley problem. But the neural nets that control self-driving cars are trained, and the training data is weighted. So there is training data for hitting a bunch of pedestrians and training data for hitting a semi truck (100% for sure this exists; oncoming semi trucks and bunches of pedestrians are common situations you need your car to handle). The weights assigned to those situations will decide whether the car will choose trolley problem solution A or B.
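For a concrete (and entirely hypothetical) picture of what “the training data is weighted” could look like, here’s a sketch using per-outcome class weights in a standard PyTorch loss. The outcome classes and the weight values are invented; the point is just that whoever sets those numbers has, in effect, pre-answered the trolley question:

```python
# Hypothetical sketch only: per-outcome weights in a training loss.
# The outcome classes and the 50-vs-80 ratio are invented, not anyone's real system.
import torch
import torch.nn as nn

OUTCOME_CLASSES = ["nominal_driving", "harm_occupants", "harm_pedestrians"]
OUTCOME_WEIGHTS = torch.tensor([1.0, 50.0, 80.0])   # relative cost of each outcome

loss_fn = nn.CrossEntropyLoss(weight=OUTCOME_WEIGHTS)

def training_step(predicted_logits: torch.Tensor, true_outcome: torch.Tensor) -> torch.Tensor:
    """predicted_logits: (batch, 3) scores from the planner; true_outcome: (batch,) labels."""
    return loss_fn(predicted_logits, true_outcome)

# Toy example: a batch of two labeled scenarios.
logits = torch.randn(2, 3)
labels = torch.tensor([1, 2])    # one "harm_occupants", one "harm_pedestrians" scenario
print(training_step(logits, labels))
```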
Absolutely, which is why I suspect they probably have a whole different logic, decided by a lawyer, for when they detect a situation like that. Regardless, though, it’s still a trolley problem choice even if it’s “just brake and don’t swerve one way or another”.
Absolutely, but that doesn’t mean there isn’t code (at least in the form of a trained neural network) there to solve the dilemma if it happens.
It’s rare that you’ll end up in that situation as a responsible driver, but that doesn’t mean it can’t happen, and the fact that irresponsible people are more likely to encounter trolley problems than anyone else is irrelevant. The trolley problem says nothing about whether you are an innocent bystander or whether you decided to drunkenly take a trolley on a joyride. No one is suggesting it should be included in the driving test; it’s clearly a very unlikely thing to happen and not something you need to practically prepare for. But that doesn’t mean it never happens. I’m sure that, of all the accidents that happen every year all over the planet, there are a few exact trolley problems, where some poor sod is in a situation where they have only two choices: kill one person or kill a bunch of people.
Though again, this shows the difference between IRL trolley problems and the thought experiment. In the thought experiment it is a given that there is a 100% chance of killing 1 person in option A and 5 in option B. Real life doesn’t work like that. Instead, in this example, the self-driving car company is telling you 10,000 people will die if you choose to legalize them now and 50,000 people will die if you don’t (or whatever the numbers are). If they are correct, there is no dilemma: of course legalize now. But there is no way of knowing whether that is a correct assessment, and the car companies are gonna make a bunch of money if you choose option A.
Another problem (or benefit, depending on your POV) of using the trolley experiment to judge morality is that it only works because it’s so structured: no permutations of survivability, no other options to take, VERY high stakes, VERY high consequences.
But if you abstract it a few levels, or reduce the stakes or the consequences, the answers for the individual turn out quite differently. As an example, in several current threads we’re talking about NOT having voted for Hillary during the 2016 Presidential campaign. At least one poster (no names, because that’s NOT what we’re talking about) indicated they didn’t vote for Hillary because she wasn’t progressive enough.
This is a kind of trolley problem writ large: do you vote for someone you don’t fully agree with / respect / etc. (actively participating in a wrong), or do you refuse to vote / vote third party, which almost certainly can cause a much greater amount of harm but leaves your hands clean? Of course, if you want to avoid the “that’s 20/20 hindsight” objection, you can just apply the same question to people considering their 2024 options. There are plenty out there (not sure if any are on this board, not that it makes a difference to the argument) who are strongly considering write-in candidates or not voting for Biden due to XYZ issue (most frequently Israel/HAMAS), despite absolute knowledge of what a new Trump administration has already promised to do.
I think the point in the more abstract, real-world situations (including self-driving cars) is that humans find it much easier to bear a tiny wedge of responsibility if it is shared among countless others. But if it all ends up in your hands, and your hands alone, that’s where the cognitive dissonance comes in.
So it’s not just (IMHO, to be clear) the conflict between differing moral judgements; the forced acceptance of responsibility itself muddies judgement.
For me, the important thing to keep in mind is that I am NOT responsible for the 5 deaths, the evil villain who put me in that predicament is the person responsible for the 5 deaths. If, however, I pull the lever, I have made myself responsible for 1 death because I have chosen to kill that 1 person.
The number is irrelevant because the number of people the evil villain has chosen to kill is HIS responsibility, not mine.