How would you respond to the trolley problem?

People find the Trolley hypothetical challenging because it forces us to consider our level of accountability for the outcome.

The Trolley hypothetical is a thought experiment designed to ascertain whether you would choose action with consequences or non-action without consequences (but still with a moral component). If you choose action, you pull the lever and change the fate of six people from what it would have been had you not pulled it. If you choose non-action, you do nothing, and the accident proceeds as it would have if you weren’t even there.

Taking action has consequences for you. You may feel elation for saving the lives of five people or remorse for causing the death of one person—probably a combination of both. You may feel culpable if you take action, but not if you don’t.

There may also be legal consequences if the family of the person who died by your action sues you, or an overzealous prosecutor decides to indict. “But, Judge, I saved the lives of five people!”

“Yes, but they’re not the ones suing you.”

There is no right or wrong answer. Everyone sets their moral compass in a different direction, and a moral case can be made for either choice. I wouldn’t want to be the protagonist in this scenario, but if I were, I’d pull the lever.

I’m not sure that is the same ethical dilemma; the main dilemma there is weighing the unknown chances of reoffending against the known cost of the sentence. If the defendant were 100% definitely going to kill a bunch of people if released, then even the most radical criminal justice reformer would say “yeah, we should lock him up”.

Of course, almost all IRL Trolley Dilemmas have the same problem to some degree, even the exact circumstances described in the thought experiment. You could never (especially in the time it takes a speeding trolley to reach its victims) say with 100% certainty that the outcome of choice A is 5 dead and the outcome of choice B is 1 dead.

Absolutely, I’m sure you can find plenty of counterexamples. But in that case I’m sure Arnaud Amalric was absolutely convinced that the cost of the immortal souls damned by surviving Cathars spreading their heresy was greater than the cost of killing the non-Cathar population of Beziers.

My WAG is that most people would agree it is morally justifiable to pull the switch and kill 1 versus 5, but that in actuality, few would pull the switch. The Wikipedia article says that in an experiment where people were shown videos, but told they were live, most did NOT pull the switch.

  1. You freeze in horror and are unable to think coherently, or to act even if you can think.
  2. The 5 people would have died if you weren’t there at the switch (let’s ignore the possibility that THIS means you had been negligent, by not being on the job). They were toast anyway. By pulling the switch, you are changing things and CAUSING someone’s death.

So in reality, I suspect I would not pull the switch.

In the example of self-driving cars leading to some number of deaths, where that number is less than the current expected number, I think the psychology is that ANY death caused by new technology is a brand new, excess death, and nobody thinks about the benefits of using that new technology.

Another example, one for which there may well be real statistics: Some people are allergic to certain antibiotics. Deathly allergic - as in, they go into anaphylactic shock and die. Most people who are given antibiotics will survive and fight off the illness - their immune system will eventually do its job, or maybe it wasn’t even a bacterial thing. Giving a person penicillin, for example, poses the small but very real risk that they will be allergic, and will die.

So let’s ban antibiotics, right? But that ignores the fact that “most people” isn’t “ALL people”; there are quite a few people who would indeed have died if they were not treated. I strongly suspect that this is higher than the number of people who would have died of an allergic reaction.

Makes sense. Good point.

I think it’s no less responsive to the trolley problem to Kobayashi Maru it than it is to flip the switch (or not).

Some treat it like the original timeline Kobayashi Maru (where the idea is to see how cadets respond to a no-win scenario—fair enough for a simulation) and reject the no-win scenario by seeking to step outside the problem and change the parameters (for example, @puzzlegal’s idea to find a solution where no one dies).

Others, like myself, treat it like the more disingenuous Kelvin timeline Kobayashi Maru (where we are supposed to believe that a mere simulation could ever hope to capture how one would react when confronted with their own mortality) and object to the very idea that such a simplistic test could provide any real insight into something so complex. Sorry, Commander Spock, but your test is bad and you should feel bad.

Point being, sometimes questioning the hypothetical is more valuable than the hypothetical itself—come to think of it, isn’t that what some of y’all are insisting the trolley problem is all about, and why it’s such a great hypothetical to begin with? The discussion?

So that’s a good example of how the certainty of the thought experiment doesn’t carry over into the real world. Sure, in theory (particularly if you believe the self-driving car companies) self-driving cars will save far more than they kill, so deciding to allow them is a “kill one person but save five” choice. But in practice right now they don’t; they are far more dangerous per mile than regular cars. So it’s absolutely not as simple as that; it’s perfectly possible (maybe probable) that choosing to allow them earlier will lead to unnecessary deaths.

And here I thought I knew what the word “hypothetical” means.
Live and learn.

Though the hypothetical describes the facts of the situation itself, not what you know about it. Even if the situation were exactly as described in the hypothetical, when you find yourself in it you don’t know that.

Respectfully, you’re all approaching this wrong.

It’s the trolley opportunity.

Well, that’s disturbing.

To throw in a curve ball for those who would pull the lever: the 4-5 people currently safe are of a different race to you, and the one person about to be chopped up is of the same race as you. You can identify with the one, but not with the 4-5. Do you still pull the lever?

I’d like to think I would.

Doing nothing means 4-5 people will die, but we all do nothing every day and people die. Not my fault. Actively doing something to save those lives means you commit murder. My instinct, if I were in that situation, would be to have the fewest people die.

Whatever decision I would make, race would not be a factor in any way.

Seems to me that I’m five times likelier to be one of the folks who got their lives saved with organs than to be the organ source. Wouldn’t enlightened self-interest argue in favor of chop chop?

I saw a cartoon recently, can’t remember where, showing the trolley problem from the point of view of one of the people on the track. In other words, no choice to make, just hoping you didn’t get run over.

It was mildly funny, but it raises another critique: the trolley problem demands we imagine ourselves as the God of the situation, as the one person capable of making a decision that matters. It brings big Player Character energy, and I’m not sure that’s a good thing.

So what if we say, instead, “You’re gonna be one of the seven people in this scenario, sometime next year. What actions do you take between now and then?” That’s where I start thinking about how I can organize with the other folks to try to change train safety laws.

And that, I’d argue, is much closer to most of the real-world ethical dilemmas we face: the solution isn’t individualistic, but societal.

I’d argue that it doesn’t — for the same reason we all pretty much go along with this or that significantly similar modus vivendi in other situations. Wouldn’t it be to my advantage to steal what I want from a guy? But, if that were the law, people could steal stuff from me, and that’d be bad for me. Wouldn’t it be to my advantage if I could just murder anybody whenever I thought it’d be a good idea? Oh, but then someone could murder me, and that’d be bad for me. And so on — and so we kind of sell people on the idea of a society where, if you pretty much just mind your own business while not stealing from people or killing them, people don’t get to steal from you or kill you.

You could always cobble together a ‘steal’ or ‘murder’ proposition where you could get someone to say, yeah, just looking at it from a purely self-interested perspective, I’d come out ahead in that one — but that doesn’t mean you’d go a step further and argue for institutionalizing it as the law of the land, because (a) you lose out if they can attack you in turn, and (b) you figure you’re never going to sell people on letting you get away with it while not letting anyone else do likewise. And so:

That’s kind of my point from the post you quoted: that, when discussing the trolley problem, there’s an unstated “but, of course, we’re not okay with tying people to railroad tracks to murder them; if we find a guy doing that, we’ve all long since agreed to stop him, right? I take it we’re all on board with a societal solution against that guy attacking people in that fashion?” And, when we move to the chop-chop scenario, we suddenly need to state what we didn’t bother to mention when discussing the trolley: whether we want a society that okays us being attacked, or a society that bands together against that sort of thing.

I think it’s interesting to understand why our moral algorithms tend to push us away from utilitarianism in the trolley problem. Here are some hypotheses:

  1. “Don’t take actions that kill someone” is an excellent rule to follow. It makes social interactions safer and more productive in any number of circumstances. It’s such a good rule that humans (a highly social and cooperative species) are probably born with it. It takes a really big push to overturn that rule. And that’s probably a good thing.
  2. We are conservative about taking important actions. Our default mode is “don’t do that”. This is also a strong tendency. My guess is that most of the people who say they’d pull the lever wouldn’t, if they actually encountered this, because they’d still be frozen in indecision when the trolley rumbled past them. This is usually a good thing, too, because in real life we rarely know all the relevant details, at least right away. You see one person on this track and five on that, but you don’t see the flock of schoolchildren ALSO on the first track, just around the corner. And you don’t know that the lever has bad UI and is actually pointing the trolley the other way from what it looks like. So you don’t immediately jump to action in a weird and unusual situation (and there’s no way to get around the trolley problem feeling weird). This is probably also a good thing.

Even with the case of a new medical treatment or self-driving cars, it’s probably right to demand more than prediction(deaths new) < prediction(deaths status quo), because there are always factors we haven’t considered or don’t understand. Change is risky. But we probably ARE too conservative, insisting on eliminating new deaths even at the cost of many existing deaths.
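As a rough illustration (my own sketch, not anything from the thread, with an arbitrary margin value), the “demand more than a bare inequality” idea amounts to requiring the new option to clear a safety margin rather than merely beat the status quo:

```python
# Illustrative sketch only: a decision rule that demands more than
# predicted_new_deaths < predicted_status_quo_deaths before adopting a change.
# The margin stands in for unmodeled risk; 0.5 is an arbitrary example value.

def adopt_new_option(predicted_new_deaths: float,
                     predicted_status_quo_deaths: float,
                     safety_margin: float = 0.5) -> bool:
    """Adopt the change only if it is predicted to be clearly better,
    not merely marginally better, than the status quo."""
    return predicted_new_deaths < safety_margin * predicted_status_quo_deaths

# A bare utilitarian rule would accept 4.9 vs 5.0 expected deaths;
# this more conservative rule would not.
print(adopt_new_option(4.9, 5.0))   # False
print(adopt_new_option(2.0, 5.0))   # True
```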

I’d opt to go back in time and kill a young Fred Rogers, thus ending this trolley problem once and for all.

I remain in shock that some here wish to question the whole notion of hypotheticals.

But, the self-driving car example made me at least realize I can relate a bit.

Because there is a hypothetical where a self-driving car needs to either kill a pedestrian, or crash in a way that’s likely to kill the driver. It was repeated all the time in the media during the period that full self-driving cars seemed imminent (2020?). Prominent technologists and AI specialists weighed in on it.
And it’s just garbage IMO. No-one’s ever going to write a line of code choosing who to kill. And if the car finds itself in a hopeless situation, it’s already made errors.

So sure, there can be situations where hypotheticals can be a bit of a red herring to discussions on real-world policy. I don’t think this applies to the trolley problem, though, as I’ve never heard anyone trying to relate it to real policy; it’s purely an abstract discussion to test our implicit moral principles.

But I think that, just as the car doesn’t pick “kill A or kill B”, we operate with moral algorithms, and they don’t work that way, either.

The trolley problem really is like an optical illusion. It creates a bizarre scenario where our algorithms don’t work in intuitive ways.

I agree. The trolley problem highlights our moral algorithms in ways that maybe we weren’t consciously aware of, which makes it useful and interesting.

Meanwhile the car hypothetical really just deflects from how such AI would actually work.
Although in fairness, I guess we could argue that it’s a way for the public to be engaged with the reality of AI being in control of deadly machines. I mean, the real kinds of dilemma the AI would face aren’t going to be understandable or explainable in non-technical terms.
Still, there were social and legal issues that would have been better candidates for discussion, in my view.