Last night I was doing my reading for today’s class, and for whatever reason, there is an ethics hypo thunked down in between a couple of tort cases. I’ve been pondering it since last night, and I respectfully submit it to the SDMB.
Part One:
You are a trolley driver. As you crest a hill you see before you five people on the tracks. You pull the brake lever, but, since this is an ethics problem, the brake doesn’t work. Moments before impact, you notice a spur track. You can throw a lever on your trolley and take the spur. The problem, however, is that there is one person on the spur track. Doing nothing will kill five people. Throwing the lever will kill one person. What do you do?
Part Two:
After you retire from trolley driving, you become a transplant surgeon. In fact, you become the best transplant surgeon in the world, and since this is an ethics problem, you can assume that your surgeries are always 100% successful. At your hospital are five critically ill patients, each needing a different organ (no, they’re not matches for each other). A healthy man walks into the hospital for a routine checkup. You come to discover that miraculously, he is a perfect match for all five of your patients. Taking his organs will kill him, but will save five people. What do you do?
Most people choose to take positive action in the first scenario and to take no positive action in the second. Here’s my question: Why is this?
Note: this isn’t a new problem, and in fact, it’s a famous one: The Trolley Problem, thought up by philosopher Philippa Foot. Don’t read the wiki until you’ve thought of your own answer. You’ll spoil the fun.
Another thing (and I already read the wiki article before, so oh well): in the first scenario, the one guy’s death is an unfortunate side effect. If he weren’t there, that track could still be used to save the other five people. In the second case, the one guy’s death is necessary to save the other five.
We went through the first one in law school. In the first scenario, the five people would die due to an accident, but the one person would die specifically by your hand. In that sense, it’s no different than the second example, but for some reason it feels psychologically different. And I still can’t figure out why. The five people in the hospital would die through a trick of fate no fault of their own as well. But I would throw the switch in the first case, and would never kill a man to harvest organs. It’s bizarre.
I think it’s a problem of the artificiality of the question. It’s a make-believe world where your surgery skills are guaranteed to be 100% effective. If you really immerse yourself in the artificiality of this make-believe world, then you should choose to harvest the healthy guy’s organs in the hospital in order to restore the five sick people to health, in exactly the same way that you would choose to do the least harm in the first case by killing one person instead of five.
The trouble is, what we know of the real world makes it difficult for us to immerse ourselves in the artificiality of the hypothetical. We know surgery is not 100% effective, we know that transplant patients are often not restored to full health, etc.
Can you harvest the organs of the people you kill in the first scenario to save the lives of the people in the 2nd?
Scenario 1: Throw the switch and start screaming, “Get out of the way, you fucking idiot.”
Scenario 2: I don’t think it is physiologically possible to have 5 people, none of whom are matches to each other, all of whom are matches to a sixth. My answer would be to say to the lab tech, “Run these tests again, you fucking idiot.”
We did this last semester in my Torts class. The answer is there is no answer. Try as you might, don’t fight the hypo. It’s like Drain Bead said; it’s bizarre.
We talked about how in one case you’re taking affirmative steps whereas in the other you’re merely nonfeasant… but it’s still… so… frustrating!
It’s the trolley driver’s decision in scenario #1.
In #2 the decision must necessarily default to the healthy guy volunteering. I recall that a kid killed himself so that his best friend (a girl) could get his corneas or something.
If it’s the case I’m thinking of, it’s a guy who shot himself in the head so his girlfriend could have his heart, without even knowing if there was a match. She was running out of time and was a fair distance down the donation list.[sup]1[/sup]
He was a perfect match, and she got the transplant. Always wondered how she felt about this.
[sup]1[/sup]The case I’m thinking of happened in my home town while I was in high school, although neither of the kids involved went to my school. Long, long time ago.
If you move both hypotheticals more into the realm of the ‘real’ world…
In the first hypothetical, you have only two choices (as given in the example): flip the switch or do not flip the switch. Whichever one you choose, someone will die, so it becomes a simple math question: is it better to kill more people or fewer?
In the second hypothetical, you have two choices again, but this time one of the choices you have is to not kill anyone. (i.e., you can harvest the organs of the healthy guy or not), so here there is a clear difference from the first example. The 5 people dying are dying of natural causes, so you are not morally responsible for their deaths. If you kill the healthy guy, you are morally responsible for his death.
So, like I said earlier, the hypothetical (and impossible) condition set up in the problem is that you are a perfect surgeon and that you could restore five people to perfect health with 100% certainty by killing one person. According to everything we know, this condition is beyond the scope of our experience with the real world, so we cannot apply it easily.
Isamu, what you are doing is the definition of ‘fighting the hypothetical.’
Ah, you’ve got it wrong. If you don’t flip the switch, you aren’t killing anyone: the trolley is. If you flip the switch, you involve your own agency in the situation. If you flip the switch, YOU have chosen to kill someone. If you don’t throw the switch, you are letting fate take its course.
But the five people in the trolley example were going to die of ‘natural’ causes as well. You are not morally responsible for their deaths either. In both cases, the hypo has been carefully laid out so that you are not responsible for the deaths of either set of five people.
But you are morally responsible for the ‘one’ guy’s death on the trolley tracks too, because you are the one who throws the switch.
One version of the hypo that you might like places you, the surgeon, in a room alone with a glass of water you know the healthy “donor” will drink, and a vial of poison that is 100% fatal, 0% traceable, and has a 0% chance of damaging any organs. Do you put the poison in the drink?
But remember: there aren’t any real people or real surgeons here. They’re all hypothetical.
::chortle:: If the hypo ever comes up in class, I’m using this.
It seems that in the second scenario, the surgeon is analogous to the trolley, and the matching patient is analogous to the trolley driver. So, in case one, the actor chooses whether one or five people will die, through the agent of the trolley. In the second, the actor chooses whether one (himself) or five people will die, through the agent of the surgeon.
[q]Ah, you’ve got it wrong. If you don’t flip the switch, you aren’t killing anyone: the trolley is.[/q]
Awesome. But please don’t apply for any public transport driver positions in my town. I’m not sure how well it would go over if, when the bus brakes failed, you just took your hands off the wheel, covered your eyes, and said, “C’est la vie! Whatever happens now is the bus’s fault!”
If this intrigues you, you might be interested in the book Moral Minds by Marc Hauser, which contains a good section on this and other ethical quandaries.
I guess they’ve done research on this stuff, and the interesting thing is that the emotional “gut” response to this question, whatever it is, happens first, long before the brain rationalizes it.
Hauser’s argument is that humans share a universal grammar that decides these things long before our rational minds enter into the picture. His argument is, essentially, that morality is not rational, but rather naturally acquired in the early stages of brain development in the same way that language is.
A well-argued, interesting, and thoroughly researched book. I’m pretty sure he had an article in Discover too. I’d recommend his work to anyone who finds this stuff fascinating.
Throw the lever. The deaths of five outweigh the death of one.
Taking the situation totally deprived of context: kill the dude, share the organs. Same reason. In context, doing so would mean my arrest and would deprive the world of its only 100%-success-rate transplant surgeon. Plus, other doctors, nurses, and so on would likely stop me before or after I killed the guy. In practice it wouldn’t work.
You can get around this objection with little difficulty. Just assume you have the opportunity to kill the healthy patient and make it look like an accident and know that he’s an organ donor. So you’re back to the original question: do you sacrifice one innocent person to save five others?