How would you respond to the trolley problem?

Ooooh, what an absolutely vicious thing to tell the person who pulled the lever! Totally “evil-villainish”!! LOL

Is it actually my job to operate the switch? Or am I a bystander?

Am I in a position where I could yell at the people to get OFF the rail track?

If it is not my job to operate the switch, I would not operate the switch. If the people standing on the track could be warned, and they don’t get off the railway, not my problem.

Too many possible outcomes. What if one of the 5 people to be saved was Hitler? Should I save the other 4 and save Hitler too? What if I saved only the one guy, killing the other 5, and he turned out to be the wrong person to save?

Small actions matter, but you can never be sure that you have chosen wisely.

So I would do as many people in this modern day would do: film it with my phone and post it on YouTube.

Sorry to be so flippant in my previous post, but there is really no way to determine the value of who should live and who should die. Are the 5 people more valuable than the one?

That is really the question, and no one can answer that.

That does make a difference. But IIRC in the trolley problem, it is not your job.

Take out everyone on both tracks! Tough but fair.

With the initial trolley problem, I guess the question is how often is this happening? Apparently a lot. Apparently every day many people are being asked to make this decision.

Which raises another question, suppose I and someone else are within range of the switch and we differ on what to do. What then?

Here’s the answer. We have laws. If this trolley issue is a frequent occurrence, pass a law mandating that people switch the trolley to the track with fewer people, and that they will be protected from prosecution for doing so. This, typically, is how we solve such scenarios that occur with any frequency at all.

Ummm… where is the paid professional Trolley driver?
Yep. Imma chicken.
I say it’s his problem. Not mine.
I’m going to the very back, get down low and hold on.

If the goalposts move any further you may as well just copy my original post and claim it as a refutation of my original post.

Remember, this is the original back and forth:

It absolutely hasn’t.
Yes, very obviously an AI in charge of a potentially deadly machine is making decisions that can be life or death, by definition. But trying to do the safest thing without being explicitly programmed to prioritize one life over another is what *I* was saying they would do.

I’m not saying less likely, my position is that it shouldn’t happen, period.
Yes, someone can crash into me. I might be stopped at traffic lights and someone rear ends me; not a lot I can do about that.
But a trolley-like dilemma requires the driver to get themselves in a position where they are driving too fast for the situation.

And before you say e.g. “what if someone rear ends you as a pedestrian is crossing in front”, the answer is as I said before: the AI would just brake as hard as it can. No one is going to write code to deliberately run over a pedestrian, nor would we release software that we know is going to make such a decision.

My point is, I’m taking the bold stance of being against murderers plying their trade by tying people to train tracks. If it comes up once in my whole life, I want that guy stopped, and, shucks, you can give me a medal if I heroically foil that guy, because, seriously, fuck that guy.

I, uh, don’t approve of what that guy is out to do, is what I’m saying. If it were legal, I’d want to make it illegal — but, of course, it already is illegal — and if you want me to discuss a trolley problem in that context, I’ll do it with a shrug. We, as a society should be against what that guy is up to; full stop; now let’s talk about something else.

And I’ll keep on keeping on like that even if would-be murderers pick up the pace by tying people to railroad tracks every month, or every day. Because, again, seriously: fuck those guys. I’m still opposed to what they’re up to. I still want to foil them, regardless of whether there are more of them. I’d still feel the same way about getting a medal for foiling them, regardless of how many medals I’d wind up getting.

If I keep getting asked about what to do if I can’t fight the hypothetical by foiling each railroader, just know that — whether it comes up once in my life or every week of my life — I’m always prefacing my ‘what to do about the trolley’ answer with at least an implied ‘let me just say that it is not okay for him to tie people to railroad tracks like that; and, as a society, we should work to stop him.’

And, if we switch to talking about cutting people open to steal their organs, I merely go from implying it to making it explicit: hey, I say, I don’t approve of doing that. I’d vote to make that illegal, if it weren’t already illegal. I’m taking the bold stance of being against that. I want that guy foiled. We, as a society, should be against what he’s doing; full stop.

As far as I can tell, it’s the same sentiment both times. Oh, sure, I don’t always say it out loud when talking about a guy who ties people to the railroad tracks — be it once a year or once an hour — but that’s because people don’t usually ask me about the guy tying people to railroad tracks when they ask about the trolley problem. But when we switch to talking about cutting people open, suddenly they ask about the guy who’s out to cut people open, and I say out loud what I was thinking before.

How is that moving the goalposts? You said:

And I was pointing out that those algorithms in charge of life-or-death machines for decades were things like “Programming a motor controller to cut off if the resistance goes above a threshold” and that is not solving the trolley problem.
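
A cutoff like that can be made concrete with a tiny sketch; the threshold value and names here are purely hypothetical, not any real controller’s API:

```python
# Minimal sketch of a hard-threshold safety cutoff, as contrasted with
# "solving the trolley problem". The 5.0-ohm threshold and all names are
# hypothetical illustration.

RESISTANCE_CUTOFF_OHMS = 5.0  # hypothetical safety limit

def motor_enabled(measured_resistance_ohms: float) -> bool:
    """Cut the motor off whenever resistance rises above the threshold.

    There is no weighing of outcomes here: one sensor reading, one rule.
    """
    return measured_resistance_ohms <= RESISTANCE_CUTOFF_OHMS

print(motor_enabled(2.3))  # normal load -> True (motor stays on)
print(motor_enabled(7.8))  # over threshold -> False (motor cuts off)
```

The point being: the rule never compares outcomes against each other; it just trips.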

When you train a neural network to run over the pedestrians or hit the semi truck, you are explicitly programming it to prioritize one life over another, as explicitly as if you type

if(trolleyProblem) 
   runOverOrphans()

There is a little uncertainty in there, but it’s still explicit. You are assigning a weight to the pedestrians and to the semi truck: a weight in the mathematical sense, a number, probably negative, that says how much you want to avoid hitting them (not a weight in terms of kg). That is an explicit instruction to favor one life (or lives) over another.

And there are cars driving round right now with a neural network that has that priority explicitly encoded into it.
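
To make the “weight in the mathematical sense” point concrete, here is a toy cost function over candidate maneuvers; every outcome name and number is invented for illustration, and no real driving stack is this simple:

```python
# Toy illustration of outcome weights: each predicted outcome carries a
# negative utility (a penalty), and the planner picks the maneuver whose
# predicted outcomes score highest. All values are hypothetical.

OUTCOME_WEIGHTS = {
    "hit_pedestrian": -1000.0,  # hypothetical penalty
    "hit_semi_truck": -800.0,   # hypothetical penalty
    "hard_brake": -1.0,         # mild penalty for discomfort
}

def maneuver_score(predicted_outcomes: list) -> float:
    """Sum the weights of every outcome a maneuver is predicted to cause."""
    return sum(OUTCOME_WEIGHTS[o] for o in predicted_outcomes)

def choose_maneuver(options: dict) -> str:
    """Pick the maneuver whose predicted outcomes score highest."""
    return max(options, key=lambda name: maneuver_score(options[name]))

options = {
    "swerve_left": ["hit_pedestrian"],
    "swerve_right": ["hit_semi_truck"],
    "brake": ["hard_brake"],
}
print(choose_maneuver(options))  # -> "brake"
```

Whoever sets those numbers has, by that act, ranked one collision against another; that is the sense in which the priority is “explicitly encoded”.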

This is a sidetrack to the OP (whether you are at fault or not isn’t part of the trolley problem), but it’s clearly not true. Sure, by being responsible you minimize the chances of it happening, but it can still happen. Say you’re driving around a blind corner at a responsible speed, paying due care and attention. Around the corner, a truck has inadvisably decided to overtake a group of motorcyclists; both are speeding towards you at lethal speeds, and the roadway has impenetrable barriers on either side, so you are hitting one or the other. Hey presto, trolley problem. It’s not a super likely scenario, but I’m sure it happens somewhere on the planet with some regularity.

It’s really interesting to ask my sister and two best friends what they can see me doing. They said I’ve always been impulsive in an obsessively decisive way and would pull the lever with no analysis at all. My sister said “Socrates may have said the unexamined life is not worth living, but I’d never hear it from you Eric.”

Surely the trolley company has signage warning people about this. Not my problem if people ignore the KEEP OFF signage.

When I took our Kubota lawn tractor to a Kubota dealership for repair, they re-wired it to defeat my attempt to defeat the onerous safety mechanisms. I was pissed off. He explained that, for instance, the reverse-awareness switch could save a child’s life.

I told him my kids were adults and smart enough to not be backed over. Indeed, any kid on our property would be trespassing and I don’t care what happens to them.

Apparently he lost a child to a reversing tractor. He got very upset. Once I got the tractor home I once again defeated all the safety interlocks so that the device is easily usable.

This is, for me, a major and fundamental disagreement–with you and with a lot of other people. Morality should not distinguish between “action” and “inaction.” The relevant criterion is “choice.”

In every moment that you’re alive and conscious and mobile, you’re making decisions. You’re taking actions. If you notice the trolley, and you have the muscle capacity to flip the switch, then you have the choice to flip it, or to scratch your nose, or to wave cheerily at the victims, or to pull out your phone and check the Straight Dope. Whatever you choose is an action, and whatever foreseeably happens after your action is a result for which you share responsibility.

Yes, the villain is the instigator, and bears primary responsibility. But if you choose to do something other than flip the switch, you have some measure of responsibility as well. And if you choose to flip the switch, you also have some measure of responsibility.

I believe that people avoiding responsibility for their choices by saying, “Well, that other person is really the one responsible!” is a major way that atrocities are rationalized.

Because the original claim was that someone would write code that knowingly kills someone, and now it’s moved to talking about a process that, in your view, is analogous to writing code.
It isn’t, and this is also the opposite of the meaning of “explicitly”.

But you know what, I’ll allow it.
Let’s say training a neural network to prioritize one human life over another is the same as writing code to do that.

Well fine, because that’s not happening either.
Like I say, if someone were to contrive a hopeless situation where the driver still has decision time (and that’s very difficult, as I’ll explain in my next post), the AI will be trained to just brake as hard as it can. It would be a legal and logistical nightmare to train a system that in some cases makes the active decision to kill someone.

Firstly, what speed is “paying due care and attention”? I mean if I am going round a mountain bend with barriers, and it’s such a blind bend I would not even be able to see a truck around the corner I would go really slow. Like 5 or 10 mph.
Could someone still hit me if they are travelling at crazy speeds from the other direction of the bend? I guess, hypothetically, but that’s just someone hitting me while I’m near stationary, it’s not a trolley problem.

Secondly, what are you anticipating an AI would do here? What would a human driver do?
Do you honestly think either would knowingly, deliberately plough into the cyclists, say?

What if what you become “responsible for” is a trolley full of passengers being late to their destination? Do you then choose to not hit the brakes and save the 5 people from death?

I mean, if our goal is to avoid being held responsible for doing a bad thing, this inaction would make sense. Yes, the passengers will still be late because of the grisly accident and subsequent investigation, but it wouldn’t be your fault, so it’s OK?

I expect you would hit the brakes and take responsibility for making everyone late, but you are now balancing the good you’ve done, saving 5 lives, with the bad you’ve done, making everyone late. For some reason, you (and many others) won’t do the same when the stakes are higher. It’s interesting how these things hit us all differently.

I don’t see how you can lead off by mentioning the importance of the trolley company putting up signs, and then end by saying you don’t care what happens to kids. Isn’t the whole point of us not letting kids enter into contracts, or consent to stuff, or be tried as adults, that we’ve all kind of agreed kids aren’t like us, and that it’s not good enough to merely put info in front of them and then smirk as they sign a waiver? Doesn’t morality ask us to go a step beyond that, if compassion didn’t already get us there?

Kids do not trespass on our property. I’m not going to use a safety interlock to protect them, as they aren’t there.

Then I choose not to kill anyone. The villain chooses to kill 5 people. I’ll live with that choice because it is mine, be it good, bad, or indifferent.