How would you respond to the trolley problem?

Yes, exactly! And that’s what we should do, with real-world problems.

Okay, am I being whooshed here?

Are you saying the best way to handle a complex real world problem that needs a decision is to argue about the decision and never make it?

Isn’t that a bit like insisting that math students learning algebra should only be given “real-world problems” that arise in actual situations, as opposed to abstract or simplified problems that don’t require special technical background in physics or finance or whatever, and instead just focus on the problem-solving process?

You’re still missing the point.

The hypothetical isn’t meant to hone your skills at finding ways to stop an out-of-control trolley before it kills anyone.

The hypothetical is meant to explore the moral claim you made, like that “the correct response is the one that doesn’t kill anyone”. Why is that the case? What does it mean for it to be the case?

Exploring extreme and contrived hypothetical situations is a way to discuss these why questions.

Otherwise what will happen is we will argue vehemently over the best steps to take in a concrete situation, and it will turn out that we strongly disagree on the best actions to take (for example, if it turns out that you value minimizing personal culpability while I value minimizing total harm, or vice versa). The time to figure that out is before a stressful life and death situation.

“You have five apples. You give James one. How many apples do you have?”

“IMPOSSIBLE! I would never give that asshole James an apple!”

Paraphrasing Douglas Hofstadter: suppose the inventor of the automobile had said, “Hey everybody, I’ve got this great new invention that will revolutionize transportation as we know it! There’s just one catch: every day, it will kill enough people to fill a football stadium.”

I mean, I get where you’re coming from here, but we did have to decide whether to drop atom bombs on Hiroshima and Nagasaki or continue with a horrifically deadly conventional invasion of Japan. That is very much a decision that had to be made. If there was a “don’t kill anyone” option for ending WWII, then the sad news is that at the time it was not apparent to the people involved.

More recently, we collectively decided not to push newly developed COVID vaccines directly to human challenge trials, thus choosing not to risk the deaths of a comparatively small number of people rather than take the chance of saving the lives of millions through accelerated access to a working vaccine. That was an actual moral dilemma faced by actual people. Again, if there was an alternative way to accelerate vaccine testing without risking subjects’ lives, it passed us all by.

“I would simply be James T. Kirk in the Kobayashi Maru test but for realsies” is a delightful moral philosophy, but, you know, best of luck with it.

Sure: physicians, every day.

The best follow-up question to the trolley problem is very much a real-world problem. You have one healthy person whose heart, lungs, and kidneys could each be used to save a sick person from death. Do you let the five sick people die, or do you kill the one healthy person and harvest their organs to save the five sick people?

The world is full of healthy people and full of sick people, and the medical technology absolutely exists to harvest the organs of unwilling healthy people in order to cure the sick. But obviously that’s the premise for a horror movie: we recoil at the idea.

Why? What makes this answer so different from the trolley problem’s answer?

I think that exploring the difference there helps us clarify our beliefs (around issues such as bodily autonomy).

My own moral code states that people share responsibility for the foreseeable outcomes of their choices. “Action” and “inaction” aren’t relevant concepts; it’s all about what choices you make, and what follows from those choices, and what outcomes you could have foreseen.

The actual trolley problem leaves no room to fix things or figure anything out. There’s no other information, no options except to pull the lever or not. Real life doesn’t happen quite like that, but other real situations are similar enough, even when more time is available: after you’ve gathered all the information you can and looked into every available option, it can still come down to a choice with no time left. Indecision is itself a choice in those cases; you either take the path of least harm or you don’t. I wouldn’t hesitate to act in favor of the least harm.

It is not supposed to have to do with real life, or even realistic morals, but it is a convenient device bad people use when they want to justify murder, genocide, or practically anything else via false dichotomy.

What kind of monster are you? Obviously you have to kill everybody, no more sick people, no more suffering, no more trolleys, problem (permanently) solved!

You’re not fooling anyone, Skynet.

You are the parent of conjoined twins. Doctors tell you that you can have them surgically separated, in which case one may die. If you don’t take that option, they will both certainly die. What do you do?

That’s one of the real life situations to consider. The situation differs from the trolley problem but it’s not a false dichotomy used to justify murder.

How is a bystander who did nothing to set up the scenario guilty of anything because of inaction? A dangerous situation almost always poses a risk to the person who might intervene, and as a society we generally do not deem putting oneself at risk for the sake of another to be a moral necessity. It may be morally virtuous, but it’s not required. The person who runs into a burning building to save a baby is lauded, but the person who doesn’t go into that burning building isn’t and shouldn’t be castigated.

A similar scenario is the theoretical hostage situation. The hostage-taker hands you a gun and tells you to shoot Tammy or else he’s going to kill all five other hostages. You killing Tammy is wrong because you killed Tammy. You don’t know if the hostage-taker would actually kill the other hostages, or if the police would intervene before he had a chance. That’s the sort of nuance, like the fat man on the bridge variant, that the basic trolley problem lacks. If you were to make them equivalent, the hostage-taker would set up some sort of automated guns trained on the other hostages that can only be disabled by Tammy’s lifeless corpse, but that’s so absurd a situation as to be useless.

I agree with those who say that stripping away any nuance from the scenario strips away the actual decision making and makes the thought experiment pointless, because it’s the nuance that determines right and wrong. A frictionless spherical trolley in a vacuum just doesn’t tell us anything useful.

It’s trying to use logic in place of morality. Logic, however, is two wolves and a sheep deciding what to have for dinner. That’s not moral. Harvesting one person’s kidneys to save two people with kidney failure may be logical, but it is not moral. Killing one person to save five may be logical, but it’s not moral. Sacrificing yourself to save five people, or even one person, may be morally virtuous, but it’s not a moral necessity.

OK, so let’s debate about those actual, real-world examples, then. Because they have all of their nuances and details, and those things are important.

The bizarre dark-comic political thriller science fiction novel Exordia begins with a similar real-world problem, or rather, the protagonist is the survivor of the problem:

During Saddam Hussein’s campaign against the Kurds, one of his death-squads captures a village, and offers a young girl from the village a gun and a trolley problem. I forget the exact details, but it’s something like, for every hooded prisoner (and fellow villager) she executes with the gun, he’ll spare the lives of more and more villagers. If she executes five villagers, he’ll spare everyone else.

That’s what she does; he keeps his word; she’s exiled from the village and hated by all the other survivors.

So what is your moral decision if you are faced with the actual trolley problem? If it allows for four deaths instead of one in those circumstances, then it’s not a morality I care about.

It depends on one’s ethical framework. I’d take the utilitarian approach and pull the lever, focusing on the greatest good for the greatest number. Thereafter, I’d try to convince myself that the Many Worlds interpretation of QM is correct and the dead guy lives on in an alternate universe.

Can we discuss why such things are “not moral”? And can we use logic in our discussion?

Ergo, logic is democracy.

Why?

I agree they are, but what does their importance consist in? Clearly they affect our moral judgement in one way or another. But how do they do that? What is it that these details act on, such that a change in them can produce a change in our moral judgement?

For me, the answer is that these details interact with our moral principles, principles such as, for example, “human life is valuable”, “first do no harm”, “some are more deserving of life than others” etc.

We can, through real-life experience, empirically derive our moral principles simply by observing our own actions in a succession of greater and lesser moral dilemmas. But there is some value, I think, in examining and perhaps even refining those moral principles while we have the leisure to reflect on them, rather than hoping we’ll make the right call under the stress of, say, a global pandemic.