How would you respond to the trolley problem?

I tend to disagree. By eliminating real world complexities and reducing the issue to an artificial level of simplicity, it allows us to focus on our reasons for making decisions.

You don’t see a problem with that? Not just as a moral matter, but as a factual one? For example, absent real-world stimulus, how can you be so confident that your reasons for favoring an action in the classroom would carry through into real-world moral decision-making?

Facts may not care about our feelings, but it is a fact that we do have feelings, which cannot be so easily discounted in the moment as in a sterile classroom environment.

No, I don’t. I feel it’s a very good idea for people to think about their reasons for making decisions. A lot of bad decisions could be avoided if people looked at their decision-making process and learned to recognize why they make bad decisions.

Well, I think a lot of really horrific decisions get made by people who have been primed to exclude real-world complexities (including, but not limited to, the influence of their own or others’ emotions) from their decision-making and just go with the bare facts as presented to them, with no inquiry, no accounting for uncertainty, and no thought for how the people impacted by a decision will feel about it.

I don’t know that I would call it a moral failure. We are hard-wired to protect our friends and especially family over strangers. The point of the trolley problem, as @Johnny_Bravo pointed out, is that it’s a thought experiment without a right or wrong answer. It illustrates the fact that what may appear to be a clear-cut right or wrong answer on paper becomes something very different in a real-world situation. What if the one person is the love of your life, vs. 5,000 strangers who have been convicted of horrible crimes? Is it a moral failure to choose your loved one then?

The point of the Trolley Problem isn’t to prepare you for the situation where you’re on a trolley barreling towards five people.

The point of the trolley problem is to ask, is there a moral difference between taking an action, and failing to take an action?

It’s like saying that things accelerate towards Earth at 9.8 m/s^2. That’s true in a vacuum. The fact that it isn’t true in practice in the real world doesn’t mean it isn’t valuable to know how things behave in a vacuum; we can then add in other info, like aerodynamics etc.
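To make that analogy concrete, here’s a small illustrative sketch (not from the thread): it compares the idealized vacuum model, where speed just grows as g·t, against a simple linear-drag model integrated numerically. The mass and drag coefficient are made-up values chosen purely for illustration.

```python
# Idealized free fall vs. fall with a simple linear drag force.
# The drag coefficient k and mass m below are arbitrary example values.

G = 9.8  # gravitational acceleration near Earth's surface, m/s^2

def fall_velocity_vacuum(t):
    """Speed after t seconds of free fall in a vacuum: v = g * t."""
    return G * t

def fall_velocity_with_drag(t, m=1.0, k=0.5, dt=0.001):
    """Speed after t seconds with linear drag F = -k*v (Euler integration)."""
    v = 0.0
    for _ in range(int(t / dt)):
        a = G - (k / m) * v  # net acceleration: gravity minus drag
        v += a * dt
    return v

if __name__ == "__main__":
    t = 5.0
    print(f"vacuum:    {fall_velocity_vacuum(t):.1f} m/s")
    print(f"with drag: {fall_velocity_with_drag(t):.1f} m/s")
```

The vacuum model is the “trolley problem” of the pair: deliberately stripped down so the one effect under study is visible, with the messier terms added back afterward.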

People like that would really benefit from thinking about the reasons why they make decisions.

And maybe they could if they didn’t reduce things to something as bare as the trolley problem. Again, the trolley problem suffers from factual deficiencies as much as moral ones. You’re using an outrageously unrealistic problem to try and understand your reasons for making moral decisions? Really?

ETA: Heck, think of that scene from Arrival, where the linguist is admonishing her scientist (and/or military?) colleague on the dangers of reducing language to a set of game pieces: you lose so much that things become muddled and even dangerous from the lack of fidelity.

I still disagree. I feel there are people who add unneeded complexities to a question in order to make the problem too difficult to answer. It’s their means of avoiding having to think about their reasons for doing things. They absolve themselves of responsibility by saying the situation was too complex to unravel.

Sorry, what makes you think emotion shouldn’t be taken into account in this analysis? It very obviously should. I come at this from a Utilitarian framework, and hurting people’s feelings is one way to decrease the utility they can derive from a situation, so (in a vacuum) that would be a bad thing, something to be avoided. Unless hurting one person’s feelings saves another person’s life, or spares many people their feelings, etc.

Presumably it’s the same town where Monty Hall is still alive and opening doors with goats behind them.

So suppose I put a gun to your spouse’s head and tell you that unless you set the prison on Rikers Island on fire with all the prisoners inside, I will shoot her. Would setting the prison on fire and killing all the criminals inside (we’ll say the guards can escape) be morally correct?

Obviously if it’s the love of your life it would be incredibly difficult to make a decision that hurts them to save 5,000 people. I very well might not do it.

But that is not a good thing. If everyone in society valued the lives of people they are close to at many thousands of times the lives of people they don’t know, society would undoubtedly be worse.

Respectfully, I think you’ve misunderstood both thought experiments and philosophical dilemmas. In both cases they function only as illustrations. At most they point out a weakness or inconsistency in a particular view.

No one’s going to use answers to the trolley problem to form public policy. It’s showing how people on average think today about an abstract problem. Not how we should think, and not about specific real-world issues.

By removing real world complexities, it removes the decisions. The correct response is clearly the one that doesn’t kill anyone, and so a “thought experiment” that removes that response removes what makes the question relevant.

In the real world, when someone presents you with a choice between killing one group of people or killing a different group of people, your moral imperative is to ignore the options that someone is presenting you with, and find other solutions. A great deal of evil in the world has been perpetrated by people pretending that they didn’t have any alternative.

When the reasons for doing things are all in the complexities, removing the complexities is a way of avoiding having to think.

Well, first of all, let’s get things straight here. I said the love of one’s life, not one’s spouse.

Haha, if you read this sweetie, that was just a joke!

I’m not saying one is a good thing and the other is not. The point of the thought experiment is that there are no good, right or easy choices. There is no “‘A’ choice is always wrong and ‘B’ is always right”. What if it’s a choice between one person who is on the verge of creating cold fusion or discovering a cure for cancer, vs. 5,000 hardened criminals?

But if that were not part of the dilemma, then someone could make the claim “There’s never a time where the most moral thing to do is murder an innocent person, or let an innocent person die that you could have saved.” These dilemmas indicate that that isn’t true: that at the limit, there are times when this is unavoidable.

These dilemmas only indicate that if they ever come up in real life. But they don’t: If they did, then people would ask about the real-life situations, instead of the artificial contrived dilemmas.

In one way, the comparison to physics thought experiments is apt. Einstein, for instance, had a thought experiment that started off with “What if I were moving at the speed of light alongside a light beam?”. And the conclusion of that thought experiment was “I cannot, in fact, be moving at the speed of light alongside a light beam”.

The purpose of a thought experiment is to suss out why the option that doesn’t require you to kill anyone is best. Specifically, it digs into whether it is more important to minimize the total number of deaths, or to minimize your personal involvement in causing deaths. When these two goals are at odds, which takes precedence?

Sure, we could just ask you the abstract moral question, but that is incredibly difficult to actually engage with. You’d give your answer, and there wouldn’t be much discussion or exploration that could be done beyond that.

And sure, we could try to come up with a concrete real world example that poses the same moral problem, but almost certainly the outcome there would be that we spend hours going back and forth with people fighting the hypothetical and trying to find ways to dodge the moral dilemma entirely through increasingly creative and improbable solutions.

Instead of fighting the hypothetical by trying to find ways to save all six people, try to engage with the actual point of the hypothetical. What is more important? Avoiding personal culpability, or minimizing the total number of deaths?

Exactly. By your logic we apparently couldn’t ask this “what if” because he wasn’t actually sitting alongside a photon in real life.

I could not disagree more.

People who don’t think about how they would make a decision in a simple hypothetical situation are going to have no ability to make a decision in a complex real world situation.

People who are best able to avoid making decisions that kill other people are the ones who have thought about the decision making process and thereby established a set of moral imperatives.