Horrible Hypotheticals

Yes, we’ve had more than enough horrible hypotheticals on this board, so this isn’t one of them.
Instead, this debate is a search for why some people believe there are some things that must never be done, no matter what the results of not doing them may be, and a look at moral decisions in a way that doesn’t focus on what the effects are.
Suppose you are given a button X, where pressing X causes terrible thing x to happen, but not pressing X causes terrible thing y to happen. How should someone make a moral decision as to whether or not to press X?

Cheers, Bippy, the confused

I don’t think it is possible to answer horrible hypotheticals in a meaningful fashion; one is seldom tested to such extremes, so it is most unfamiliar territory, and in such an event one may actually act quite differently from the ideal. Furthermore, one may be paralysed by indecision and not act at all, or attempt an ad-hoc solution (scenarios rarely catered for in false moral dilemmas).

I find it completely acceptable to answer extreme hypotheticals with wild speculation; giant squid all the way. Go Squid!

The values that other people hold might not always seem reasonable to us. Frankly, they don’t need to. If someone holds as a fundamental axiom of conduct that sufficiently unpleasant actions should never be performed under any circumstances, even if this means the death of all that is good in the world, what grounds do we have to say that they are wrong? No rational argument can topple a system of values that is free from internal contradiction.

[in a dark ghostly room with ankle-deep mist sweeping a chill across the obsidian floor, a wise elder Sensei gives Bryan Ekers a choice]

Sensei: If you press this button, young grasshopper, a horrible fate will befall the people of Earth, but if you do not-

Bryan: Cool! [pushes button repeatedly]

Sensei: Hey! Stop that!

Thanks, I think Giant Squid are always a good option, but don’t forget the Giant Octopuses (Octopii ?) as well.

I was also getting at the question: is there a moral difference between the action event (pressing the button) and the no-action event (not pressing the button)? Something inside makes me feel the no-action event is somehow better (given both outcomes are equally morally repugnant to you). Do others feel this, and can you explain why it feels this way?

I’m glad that phrasing the question to avoid personal “triggers” seems to be helpful.

Cheers, Bippy

I think the problem with the hypothetical extremes is that those who propose them try so hard to make sure that there are no alternative options, and (as we’ve seen) get very annoyed if you do propose a solution beyond their two choices.

But there are nearly always, in real life, more than two choices.

Yes, but this is a special Bomb-That-Only-Kills-Kittens and anyway, your decision is going to be enforced by a race of super-intelligent, infallible aliens.

I agree totally; I can’t think of a plausible scenario where one is faced with only the options of ‘Do shitty thing A’ or ‘Do shitty thing B’ - there’s always at least ‘try to find a way not to do any shitty things’.

This is true, but I postulated ‘Do shitty thing A’ or else ‘Shitty thing B will happen’.
An example might be…
Good friend A is going to a party; she has bought a new dress specially, and comes round to show off the dress. Now, the dress in your opinion makes A look terrible. A asks for your opinion of the dress.
If you lead A to believe the dress was a bad idea, then A feels bad about paying for it …
If you don’t lead A to believe the dress was a bad idea, then A risks making a fool of herself at the party.

Totally non-world-shattering decision, but no matter how you ‘sugar’ your words to A, the result is a choice of two ‘evils’.
Of course, an example could be made using military intelligence, bombs, and Iraqi hospitals … but let’s keep this mundane.

Cheers, Bippy

Easy; you say “let’s not go to the party; let’s stay at home and shag baby, YEAH!”.

There is always an alternative option.

:), but she so wants to go to the party, and she’s my sister…
You can keep offering alternative solutions, but sometimes you just need to bite the bullet and choose what seems to be the best of two (or more) bad solutions.

Is there more virtue in the passive solution (say nothing and let her find out) over the active solution (tell her the dress doesn’t suit her), in cases where both results seem equally bad?

My earlier post was taken for hamster fodder, then you guys intervened. So here goes a new try.

I concur with the previous posters that such abstract hypotheticals are not really fit for deriving proper results: “fanciful cases do not make for sound judgement” (Honoré, Responsibility and Fault, p. 52). This is a common theme in ethics. More specifically, the thing you apparently want (re: the OP), to wit, (dis)proving a theory that says what you should or shouldn’t do regardless of the consequences (= deontology, a duty-based theory), may prove resistant to such unrealistic examples. That is not to say that it is not great fun to debate such examples, only that the usefulness of the debate is dubious.

A particular difficulty lies in your assumption that it is possible to have two distinct options that are ‘equally’ bad. I cannot see how a practical dilemma may have ‘equally’ bad solutions. They may both be horrific, and so in the same ‘order’ of badness, but that is a far cry from being equal in a more mathematical sense. The way you state the problem seems to assume the correctness of a utilitarian scale of good and bad. Whether such a scale can be constructed is rather dubious, and in any case this assumption tips the scales in favor of the utilitarian theory.

If, on the other hand, you do not want a solution to such a hypothetical problem, but only want to show that the existence of such problems proves that pure deontology (or, for that matter, utilitarianism) is incorrect, you might want to look into ethical dilemma theory. There are a number of books on the subject (such as W. Sinnott-Armstrong, Moral Dilemmas, New York: Blackwell 1988); most go back to examples of Bernard Williams (in: Utilitarianism: For and Against, Cambridge: Cambridge University Press 1973, p. 99) directed against utilitarianism.

The type of hypothetical problem you want has been studied in abstracted form as the ‘Trolley problem’: see P. Foot (‘The Problem of Abortion and the Doctrine of the Double Effect’, in: Virtues and Vices, Oxford: Blackwell 1978, pp. 19-32, on p. 23): a driver of a runaway tram can only choose between remaining on a track where he will kill five men working, and steering onto a track with only one man.

In case you really want something close to such a dilemma, see the Siamese twins case in the English Court of Appeal (22 September 2000, case B1/2000/2969, [2000] 4 All E.R. 961), on whether to separate conjoined twins (which would certainly kill the weaker one), or refuse to separate them, which meant both would die within a few months. In this case too, the situation was in fact more complex, since the parents (for religious reasons, but possibly also because of the allegedly low quality of life of the child that would survive) were opposed to the separation.

Sorry for the length of this post; I have been working on an article about the use of examples in ethical theory and have been debating against precisely such abstract problems.

I will disagree with Mangetout and take the position that hypotheticals are often illustrative of underlying moral principles (should there be any to uncover, of course) without the clutter of expediency that we so often encounter in everyday circumstances. That clutter can cause us to find middle grounds that, while not “perfect” solutions according to an absolute scale (even an absolute scale in a relative system), are considered optimal given all that is under consideration. Which, you see, is why I consider hypotheticals just as peachy, because they dictate what is under consideration. Even in reality we never have all the facts, so the fundamental difference between the two is merely that one is fabricated, not that one cannot be used to make moral judgments.

For a tangential development of the same, reference this:

This raises the question of the validity of moral categorical judgments in the first place, but I think my demonstration is general enough for other cases.

The “giant squid” reference, however, is a valid point to me, and it should always be the burden of the one producing the hypothetical to demonstrate that such a situation is indeed possible; that is, the hypothetical should be constructed in such a way that (at least to some parties willing to discuss the matter) (1) further information really would be irrelevant, and (2) that the situation described is not already forbidden by the laws of physics, biological constraints, and so on. Any hypothetical which at its core begins with the notion that “if everything was different it wouldn’t be the same” is either a poor hypothetical or an underdeveloped theory (scientific, social, political, mathematical, moral, etc).

To the matter at hand.

Many people, in fact, do not view the consequences of actions as the reason for judging them moral or immoral in the first place. Consequentialism—roughly, the doctrine that the consequences of an action must be weighed in order to make a moral judgment—is not the only breed of thought around. At least one alternative exists: categorical judgment; i.e., the situation is “abstracted” to be a special case of a more general situation in which the conclusion is known, and thus we know the conclusion by tautological implication (these are “really” the same case, so the same decision must apply; reference the quote above). Of course, one might ask: how do we judge even these categories (rightly hypothetical in themselves) if not by understanding their implications?

Tricky questions in the general case, and quite arguable I’m sure.

This is unanswerable, even assuming 100% certainty as to the causality. “Terrible” is not a clear indicator of what will actually happen, and since “terrible” is, in this case, itself a moral judgment, the question should always be answered by whatever moral system the person uses (intuitively or explicitly). IMO, any moral system which has the possibility of absolute conflict (a no-win situation where all [two or more, including the degenerate case of complete inaction] choices are equally good, bad, or amoral) is fundamentally flawed.

Nice simulpost, erislover. A quick scan of your post prompts me this reply.

The quotes you give and the standpoint you seem to adhere to both assume that it is possible and/or desirable to have a moral system that works exactly like a scientific theorem. The problem is that there is no proof that morals will work like that, and that it is an unproven normative assumption that morals should work like that.

While it is desirable that our moral intuitions are coherent and do not contradict themselves, we do not have the guarantee that they do. When they do, we’ll have to see what gives in each case. You may try to solve all possible conflicts beforehand with a nifty theory, but if that theory gives an intuitively unacceptable result for a specific case, most people would trust their intuitions over your theory. This is not to say that intuitions are impeccable and unchangeable, on the contrary. I just want to say that a theory is not per se better than the primary intuitions that it derives from.

Hypothetical examples are problematic since they may very well give rise to invalid intuitions that are in fact heavily theory-loaded. You may see this quite well in examples in bio-ethics. Our intuitions and principles from daily life do not work very well when we are not dealing with healthy adult persons but instead with foetuses, comatose patients, multiple personality disorders. How can we be sure that our intuitions and principles will give us proper directions here?

Another thing is that most words are already theory-loaded. An example is ‘free will’, which supposes an entity that can be free or not. Why should ‘the will’ exist; maybe there is only the result of a deliberation process.

That’s my straight-from-the-hip response. If your actual opinion is different, erislover, you should read the above as directed at a strawman who actually does hold that opinion (as I think your reference does).

If both results are equally bad, then I’d probably opt for inaction, since then you never actually did anything to cause the bad thing.

But in your specific example, she did ask you for your opinion. Isn’t not answering going to be rudeness by omission, and therefore bad? Whereas answering honestly is at least that: polite, and honest, which can’t be bad, right?

So, what if a slightly worse thing will happen unless you press the button? (hee hee hee…) Is it morally wrong for you to refrain from pressing? Can you be morally required to act in that way?

On rereading erislover’s post, I come to the conclusion that my earlier post was premature. It looks as if we are actually fairly close in opinion on this. I do see some validity in hypothetical examples as well, and am not a full adherent of intuitionism. I only tried to warn against an overly large belief in ethical systems.

… feeling a little inadequate here :wink:

erislover, you say the second point is unanswerable. So I’ll poke further at the ideas I’m interested in, but not to dissuade any other ideas or directions to take this OP.

Suppose you are given a button X, where pressing X causes terrible thing x to happen, but not pressing X causes terrible thing y to happen. How should someone make a moral decision as to whether or not to press X?

Should the person given this decision choose to press X iff they view thing x to be worse than thing y by their own moral belief system? Or should they always choose not to press X, and thus not be the cause of either thing x or thing y, since thing y happens without the person’s intervention and can be considered to be caused by the forces that put the person into the situation of having this choice?
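To make the two rules concrete, here is a toy sketch in Python. It is purely illustrative: the function names and the numeric ‘badness’ scores are my own invention, and even pretending that outcomes can be scored on a single number already smuggles in the utilitarian scale TTT questioned above.

```python
# Toy formalisation of the two decision rules, NOT a real moral theory.
# The badness scores are hypothetical stand-ins for "how terrible is this?".

def consequentialist_rule(badness_x: float, badness_y: float) -> str:
    """Press X iff the outcome of pressing (thing x) is judged
    strictly less bad than the outcome of not pressing (thing y)."""
    return "press" if badness_x < badness_y else "refrain"

def deontological_rule(badness_x: float, badness_y: float) -> str:
    """Never press: refuse to be the agent who causes thing x,
    leaving thing y to the forces that set up the choice."""
    return "refrain"

# The case from the OP: both outcomes judged equally terrible.
print(consequentialist_rule(10.0, 10.0))  # refrain (tie broken toward inaction)
print(deontological_rule(10.0, 10.0))     # refrain, always
```

Note that on this sketch the two rules only come apart when x is judged strictly less bad than y; when the two are judged equal, both agree on inaction, which may be part of why inaction feels like the safer default.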

… Oh, and please don’t consider too much the “does my bum look big in this” hypothetical I added in an earlier post; that was mostly introduced as an example/joke with Mangetout. The idea within the OP itself is to get away from any real-world definition of the hypothetical situation, to avoid the emotional baggage that such questions arouse.

Cheers, Bippy

Well Bippy, I didn’t mean to scare you off, and I guess neither did erislover. Given your concisely worded question, my take on it (based on part of the literature I referred to) would be as follows.

  1. It is in general found to be better not to have caused a terrible thing by a deliberate act, even if that act might have prevented another terrible thing. Actions are more ‘culpable’ than omissions. An example: if you see someone drowning and you don’t save him, that is not found to be very wrong, but if you had thrown him in the water (with the same end result) you would rightly be found guilty of murder.

  2. The thing is that, even given that you should not press the button, as a normally developed moral person it would be quite natural, and possibly even required, for you to feel at least a slight regret over the choice. Necessary choices may still leave cause for regret; it is bad luck (‘moral luck’) if you find yourself in such a quandary. A well-known case is the bombing of a certain village in England during WW II: the government knew it was going to be bombed because they had cracked the Enigma code, but if they acted on the information it would come out that they had cracked it. So they decided not to inform the population of the village, resulting in many deaths. In the end that was probably better for the outcome of WW II, but still it is a choice I guess not many people would like to make. Harsh choices like that (which may sometimes lead to actions) are common in global politics and war, then and now, as I’m sure you’re aware.

More later, but for now:

A fantastic example, TTT. Although the situation happened as a matter of fact, it has the hallmark of an ethical dilemma: not much information is necessary in order to form an opinion (indeed, if we add too much we are possibly going beyond fact and creating a new hypothetical). That is, we can only add so much to the situation, and in fact (IMO) it can be summarized quite rapidly. What is also interesting is that it forces the consequentialist to weigh near-certain deaths against conjectural ones, and is one conjectural death given the same weight as one certain one? Tricky questions. Still, I love that example. It is concise, and I think it safe to say that almost no one would ever be faced with a response that they “feel good” about.

OK, more later…

IMO the simplest answer is: how do they make moral decisions in the first place? This is only marginally less complicated in that it presents a situation where there is only a choice between action and inaction. I’ll side with the others and say in most circumstances most people would probably choose to do nothing, but really many factors could impact this decision.