How would you respond to the trolley problem?

In the real world, AI vehicles are still going to require insurance. Even if AI reduces the number of accidents and fatalities, when they do happen there will be investigations, and when the AI is responsible, the owner of the AI car will pay their share, which will be covered by insurance. It’s not going to be a situation where people just shrug it off because it’s AI.

Which is why lawyers were almost certainly involved in coming up with the weights. Though that doesn’t necessarily mean they wouldn’t choose “always favor the life of the occupants of the car”; from a liability standpoint, that might be the less risky option.

Moderating: Drop the off-topic AI and weights side conversation please.

Shouldn’t the Trolley Problem have been a disaster movie in the 1970s? Airplane, train (The Cassandra Crossing), skyscraper, boat. Even a parody with The Big Bus. Why not trolley?

A real-life example: a lone gunman is either going to take out five people, or you can push one person into his line of fire, knowing the person being pushed is going to get shot.

How is that a real life example?

Exactly! Either switch decision could be the wrong one. There is no way to determine the relative value of any of the six people on the tracks. Sometimes no decision at all is the right decision.

I’m pretty sure that’s my real answer if I somehow found myself in this bizarre situation. And also, I don’t know where the switches are, how to operate them, or exactly what they do.

For the sake of the hypothetical, pretend you do. Why are hypotheticals so difficult here on the Dope? Honestly, I’d probably let the five die. Why? I don’t know, but given this situation and accepting it as is, I doubt I could bring myself to make the utilitarian calculation: saving the five through direct action rather than killing them through indirect inaction.

Pretend I wouldn’t freeze in indecision? Then I wouldn’t be me.

No, the bit about the switches. Pretend you know exactly what they do.

Actually, for that matter, pretend time pauses and you are given as much time as needed to make a decision. This is what this hypothetical is getting at.

That part was just about how contrived the scenario is. My real answer was at the start: “I’m pretty sure I’d freeze.”

The question isn’t what would you do, it’s what should you do.

Are you sure it’s not “How should you be judged?”

What makes that “right,” though? Why is the correct answer to do nothing and allow five people to die to save one? Would it still be right if it were 20, 100, or 1,000? What if it were a vat of weaponized smallpox that would kill billions?

Again, given the conditions of the hypothetical, where you know with 100% certainty that you have only two options (pull the lever or do nothing) and what the results will be: what is the logical reason that makes not acting the correct option?

“No decision” is simply not an option for a conscious, mentally stable adult human. Every instant we make decisions, and when you’re aware of the problem, choosing to look away is a decision.

So sure, it might be the correct decision, but it should be understood as a decision: you’ve opted to take actions that are predictably followed by the death of five people, when you could have taken other actions that would have predictably been followed by the death of one person.

If you have just woken up and haven’t yet decided to have eggs or waffles for breakfast, does that mean you’ve decided not to have eggs? Not to have waffles? To have both? Not to have breakfast at all?

Or does it mean you haven’t made a decision?

One of the many problems with the trolley problem is that it presumes certainty, and prompts actors to get accustomed to making decisions of a moral nature as if their knowledge were perfect and their perception infallible.

Again, the trolley problem is not fit for use on or by humans. It bears, at best, a farcical resemblance to the sort of problems we humans actually face.

This came up in a high school class in a different form. It was about kidney dialysis. You were given a list of a dozen people or so, along with their characteristics, and you had only three dialysis slots available. The rest would die. You were asked which three to choose.

One of the groups picked people at random, because they didn’t believe picking specific people was ethical. So random selection was the only solution.

I think it’s possible to argue that the five should die because killing the one is a decision, and it’s a decision no one should be forced to make. So the trolley doesn’t get rerouted unless it can be done with complete safety, and a person isn’t blamed for “playing God.” The five die if they are on the original track; the numbers don’t matter, because whenever there are two choices, no choice will be made.

Yes, we have to make choices in life, but there are particular choices we tend to avoid at all costs, probably justifiably so.

What do you mean by this?

If you mean that no human should actually be subjected to the literal situation described in the trolley problem, I can agree.

If you mean that no one should be required to answer the hypothetical question and then judged or blamed or held responsible for the answer they give, I’m inclined to agree with that as well.

If you mean that no one should ever even consider the question or think about what they would do, and what they should do, and why, then you’ve lost me.

Some people do have to make real-world decisions that bear some similarity to the situation described in (some version of) the trolley problem, but with more variables or less certainty. The certainty and artificiality of the trolley problem are both a bug and a feature. It’s like a simplified mathematical model of a real-world situation involving many variables that tries to isolate the effect of one or two of those variables, which can be either enlightening or misleading when you try to extrapolate its results to the real world.

Positive.