Why morality can't be judged solely by consequences

In another thread (the one on buying a ticket for one movie and deliberately seeing another), Priceguy made the comment, “I’m a pure consequentialist.” To which I replied, “I’m not. I think there are serious philosophical problems with judging the morality of an action solely on the consequences that result.”

At Priceguy’s request, I am here starting a thread to back up my statement and open debate on this more general issue.

First, let me be very clear that I am not arguing that consequences are not a very important factor in determining whether an act is moral, just that they are not the only factor.

One important point is that we cannot know all the consequences of the choices we make, especially when you consider indirect as well as direct consequences, long-term as well as short-term consequences, and the consequences that arise not solely from the choices we make but from how those choices interact with other people’s choices and with other factors we couldn’t have predicted. This is a big problem, in a practical sense, with the idea that the morality of an action can be judged solely by its consequences, though not with the idea that it should be so judged in a theoretical sense.

A major objection to pure consequentialism is that it leaves no room for ideals such as justice or human rights; that nothing is right or wrong in and of itself. There would be nothing wrong with punishing, torturing, or putting to death an innocent man in order to gain some positive result. I would be right to lie, cheat, steal, and manipulate whenever I thought the results of doing so would be neutral-to-positive. There would be no reason to walk away from Omelas.

My other main objection to pure consequentialism is that, when you make a choice or take an action, the effects of doing so are not just what happens in the real world as a result, but also the effect it has, often gradual and invisible, on your character as a person and on society as a whole. Every time I break a promise or tell a lie, for example, I become more cynical, more jaded, more dishonest, more used to, comfortable with, and expectant of the notion of lying and breaking promises. I lose respect for truth and honesty and integrity. And the society I live in inches one step further away from one in which honesty and fairness are respected.

Your last paragraph is basically what I would say.

I defy a person to extrapolate all consequences of moral actions. Generally a consequentialist justification for an action is based on specific consequences that are clear and apparent, while brushing off the idea that the consequences of people being morally free to commit such actions can lead to trouble down the road.

I am not claiming that we can always see every consequence of every action. I am claiming that we should try, and that consequences are the only thing that matters when assessing the morality of an action. I am not saying that consequentialism is easy. It’s not.

I am also of the opinion that laws, for example, necessarily have to be rule-based. Even though there are times I would feel a killing is morally right, I still think the laws should forbid killing, because I believe having such a law has better consequences than not having such a law. If I didn’t, I’d argue for murder laws to be stricken.

Exactly. I am arguing that it should, not that it perfectly can. Therefore I have no problem with this paragraph.

Agreed. I see little value in ideals, and ascribe value to nothing in and of itself, except for pleasure and pain* (positive value and negative value, respectively).

In a theoretical setting, I would lie, cheat, steal, manipulate, torture, rape and kill to achieve a benefit greater than the harm I cause. In a theoretical setting, I would live in Omelas and see no problem with it. In the real world, it is better, in general, not to do those things, because not doing them has better consequences.

That is also a consequence, so I don’t see how this is an argument against consequentialism.

*When I say “pleasure”, I mean whatever someone experiences as pleasant, and when I say “pain”, I mean whatever someone experiences as unpleasant. So, something that’s pain to one person might be pleasure to another, as with masochism.

Assessment by consequence is a valid approach to evaluating the credibility of the applied ethics of an action; the problem is that consequence is by definition a post hoc evaluation of the rightness of an action. You can attempt a prediction of the consequences of an action, but as the o.p. notes, any action sufficiently complex to involve more than one other party, or to have ramifications beyond the immediate consequences, is too vast in potential bifurcations to make a perfect prediction.

Assessment by consequence also leads to the other sticky ethical problems of ethical relativism and the comparison of unlike consequences. That is, you may view the consequences of an act as being harmless (e.g. stealing a pack of gum from a busy newsstand) but the owner may have a different view of the harm (especially if many people do this). This disconnect in common basic ethical stances gives rise to (ethically) irresolvable conflicts between parties, which means that a third party must intervene to enforce a (presumably arbitrary) standard of behavior which may or may not be in concordance with one or both parties. This is essentially an abrogation of the concept of a “social contract” which underlies a free society.

Comparison of unlike consequences is even worse; if an action results in bad consequences either way, who is to say which is better? The burglar would argue that his act of theft does not justify the use of lethal force to inhibit or interfere; the home or business owner might feel, on the other hand, that the consequence of losing valuable property purchased with the labor of unrecoverable hours of one’s life or the apparent threat posed by an intruder fully justifies the consequences of the threat and application of lethal force. Who decides what consequence is most appropriate, and on what non-arbitrary, non-biased basis?

Assessment by consequences does not offer a unique and invariant system for assessing the potential ethics of a situation; it is a recursive syllogism, with the minor premise stating that the consequence is ethical if it is not harmful (by the reasoner’s evaluation), and the conclusion that the action was therefore ethical. Of course, any real-world application of a system of invariant ethics will always have extreme conditionals or exceptions; a doctor may elect to perform an operation which may possibly kill the patient in order to treat a life-reducing ailment. But “consequentialism” lacks any firm basis for evaluation.

In the context of the original argument (specifically, the purchase of one movie ticket to enter the theater to see another) it is asserted by some that the consequences make the action ethically permissible: the agreement between the cinema and the purchaser, to pay a certain amount in order to enter the cinema, is satisfied, and the cinema sees no consequences from the purchaser’s subsequent action of viewing another film. Aside from the fact that the contract is, in fact, not simply to enter the cinema and see any film but rather to see that specific film at a specific showing time (making this clearly a contractual violation), the purchaser is also unaware of the specifics of the agreement between the cinema and the distributors, and of the consequences should the distributor discover that they are not receiving due revenues for viewings of the film, and is therefore incapable of addressing the consequences in a holistic manner. (This is not purely a theoretical exercise; distributors sometimes send representatives to count the number of people entering a viewing and compare this to stated revenues; a cinema that shows too much variance may experience penalties as a result.)

Of course, the reality is that cinemas don’t make their vig from ticket revenues (which at best offset the cost of showing the film) but from concessions, and so as long as you buy your popcorn and soda they don’t really care much which film you see. Distributors factor in a certain (small) proportion of losses, and don’t really care (much) if a few kids watch Star Wars several times in a row. In general, the harmful consequences from this are typically low, which is why little or no effort is made to prevent it except in limited circumstances; the cost of trying to do so exceeds the savings from preventing this behavior. This cost vs. benefit analysis is hardly the basis for an ethical standard; it is pure fiscal pragmatism, an acceptance that some people will not be ethical, but that the costs can be absorbed.

Someone wishing to stake out an extreme ethical position can invoke hedonistic or even solipsistic consequentialism, i.e. I don’t care what the consequences are for someone else, or it doesn’t matter what the consequences are for anyone external to me. That is a perfectly valid ethical argument to make, provided that you are comfortable with being a sociopathic asshole straight out of a Neil LaBute play. Such a view does not coordinate with societal ethics, though; that is, if your personal philosophy is that other people do not matter, then you have no basis for indignation when other parties present you with the same argument.

Morality, as contrasted with the philosophy of ethics, is a somewhat different and more nebulous concept in which often arbitrary (or seemingly arbitrary) norms are imposed by an outside authority, with no direct basis in consequence external to that authority. An act might be perfectly ethical under any rational circumstance (e.g. protected sexual intercourse between two consenting adults) but in violation of some applied moral code for reasons beyond consequence or rationality.

Stranger

I would say that in making moral decisions we should factor in the fact that we can’t always reliably predict the consequences of our actions.

In particular, I would say that it is better to have an absolute rule saying “I won’t commit torture” rather than saying “I would only commit torture if I thought the consequences of not torturing were worse than torture itself.”

In defending torture, people always say “What if a terrorist knew the location of a nuclear weapon, and you had to extract the information right away or a city would be destroyed?” Of course, that is an unrealistic scenario. More realistically, the government would have someone in custody who might be a terrorist, who might have crucial information, which you might be able to extract via torture. Or maybe it’s a case of mistaken identity and you’re torturing an innocent man. The possibility of a person being wrongly accused is, after all, why we have trials.

So, what’s wrong with saying “Well, I’ll only torture when I’m really, really sure”? The problem is that this leads to a slippery slope sort of thinking where you can say, “If I’d torture in that situation, why not torture in this slightly less dire situation”, etc. As evidence of this point, consider how often such extreme and implausible examples are raised in defending torture in more realistic situations, like the documented cases of waterboarding conducted by the U.S. government (none of which, so far as I know, was prompted by the threat of an imminent nuclear attack.)

Ultimately, maybe this is a consequentialist argument after all: I’m saying the consequences of doing away with hard line rules like “Don’t torture” are worse than the consequences of having those rules. So maybe I’m saying consequentialism is good in a big-picture sense, in deciding when to have strict moral rules and when to judge a particular choice solely on a prediction of the outcome – but in general some ironclad moral rules are still necessary.

Agreed.

Some questions: Why privilege pleasure and pain rather than some other concepts (learning, justice, etc)?

How do we determine the morality of past events, given that some events could have significant and long-lasting consequences? For example, was the killing of Caesar moral?

My impression of happiness research is that, aside from death and the imminent threat of death, very few events in life have a significant effect on your happiness (which I suppose is some aggregate of pleasure and pain). How would consequentialist morality deal with the possibility that levels of pleasure and pain are largely genetically determined?

But you see the distinction I’m drawing, right?

I’m not saying “Given any particular choice you should try to predict the consequences of your actions and choose the action with the best consequences.”

I’m saying that you should have at least some moral rules that you always follow, even if the specific situation seems to justify violating them.

The reason is that you can’t completely trust yourself to make an unbiased estimate of whether the consequences justify a particular action. Doing the morally wrong thing is often very tempting, and this can skew your judgement.

Using the example of torture:
If it’s unlikely that you’ll face a situation where torture is justified, but likely that you’ll face a situation where you could talk yourself into thinking it’s justified, then it’s better to decide “I will never torture even if I feel the situation justifies it.”

So, I’m essentially saying the best system of making moral decisions is one most likely to produce the best consequences – but that system may include inviolable rules as well as evaluation of the specific consequences of one’s actions.

Is this the sort of consequentialism you’re advocating? Because it seems different than saying

Unless by “in a theoretical setting”, you mean “in a world where people could be trusted to predict the consequences of their actions reliably on a case by case basis.”

Of course, everything I’ve said above presupposes we could agree on how to define the “best consequences”, which is obviously a very non-trivial thing in and of itself.

In response to Dr. Love, I might speculate that it’s less about maximizing happiness than about satisfying people’s desires. Suppose one found that slaves are on the whole as happy as free people. (I don’t remotely believe this – I’m just speaking hypothetically.) Nevertheless, the fact that most people want to be free would mean that we should allow people to be free. What makes us judge slavery as immoral is our ability to empathize and say “I wouldn’t want to be a slave,” not any knowledge of what our level of happiness would actually be as slaves.

I see our definitions of “moral” as evolving to satisfy people’s desires, whether or not these desires are rational from the standpoint of maximizing happiness.

Well, it’s only a problem for someone who doesn’t believe in an omniscient judge. So I’d say it’s more a problem with the idea that we can judge the morality of an action.

Actually, I believe this is so, and in fact I would say that the entire human race behaves this way; they do things in order to gain what they perceive will be the best set of consequences. I think you’ve overlooked the fact that people do in fact value justice, human rights and so on; the people who walk away from Omelas may value their ability to live without condemning a child more than that fantastic society. Pure consequentialism is how we all live, and certainly doesn’t mean anarchy.

Again, I would say that you’re missing out on the idea that those things could be valued and indeed be taken into account when choosing to do or not do something. You or I don’t lie precisely because we take all those things into account; what makes you think consequentialism takes into account only (to simplify things) practical concerns? If more intangible things are valued, then certainly those intangible things will play a part in a consequentialist’s mind.

Tim314’s list of clarifications is a nice and tidy summary:

>I’m not saying “Given any particular choice you should try to predict the consequences of your actions and choose the action with the best consequences.”

This is very clear. I disagree with it, because I think we are always obligated to try to achieve the best consequences. But it’s easy to be sympathetic to this point, as the next point helps explain…

>I’m saying that you should have at least some moral rules that you always follow, even if the specific situation seems to justify violating them.

This is debatable on the basis of how well you can trust your conclusions about whether violating your prior rules is justified.

>The reason is that you can’t completely trust yourself to make an unbiased estimate of whether the consequences justify a particular action. Doing the morally wrong thing is often very tempting, and this can skew your judgement.

And here’s the reason to be mistrustful, or at least doubtful or suspicious, about it. It’s a slippery slope argument, right? And what pulls you down that slope is the temptation of some conflicting interest.

Isn’t it better to fear your inability to tell whether better or worse consequences will follow, than to think you should sometimes prefer rules over better consequences? That is, if you could know with certainty whether better consequences come from one choice or another, you would not be making this argument at all, right?

And it’s interesting to use “justify”, as in “the specific situation seems to justify violating them”. To justify can often mean to come up with an excuse, a rationalization, for something actually done for other reasons.

A clean example: you happen to learn about somebody doing some sort of wrong. Suppose you feel some kind of cheap temptation to rat them out, to gossip or otherwise break the secrecy. And suppose you further have a clear rational understanding that by doing so you would prevent some real harm, you would create better consequences. Does the fact that you are tempted to blab for the wrong reasons change the obligation you have to do so for the other less exciting reasons? It shouldn’t, though of course you should be on high alert, trying to see if you are actually fooling yourself about the choice.

In the real world there is such a thing as intent, and people’s intentions can be immoral.

Two (mostly rhetorical) questions to illustrate this-

  1. Is attempted murder immoral?

and, since that one can so easily be dodged by citing physical injury/duress or psychological trauma on part of the prospective ‘victim’, along with a host of backpedalling rationalizations regarding motive-

  2. A man is walking down the road; call him Walker. A second man is in a field with a rifle; call him Shooter. These two men have never met; neither has cause to harm the other. Shooter lines up Walker in the scope of his rifle, intending to kill him, for the thrill; to effect action at a distance; to be God. He squeezes the trigger, and the rifle misfires.

Walker is completely unaware even of Shooter’s existence; he’s been influenced in no way.

Shooter acted with malice, yet harmed no one. Has he committed an immoral act?
If you say ‘yes’, then you’ve admitted an action can be immoral without effecting harmful consequences; if this scenario breaks your ‘rule’, then certainly there are others.

But the intent was one of inflicting bad consequences. And intent obviously matters, otherwise you can’t distinguish between accidents, immoral acts and acts of nature.

Like all moral systems, it comes down to opinion. I can’t force anyone to agree that pleasure is good. It’s just that I’ve never met anyone who doesn’t. Everyone likes pleasure. Everyone dislikes pain. Almost by definition. If someone were to break this pattern, then they would have no reason to be a consequentialist - I doubt they could see why anyone would. Pleasure and pain are identifiable. Justice, to use your example, is extremely vague. You and I don’t agree what justice is, but I’ll bet we agree what pleasure is, even if we derive pleasure from different things.

Interesting. Cite?

I would have to think about that, but I’ll wait until it seems to be the case.

I can’t completely trust myself to form strict moral rules either. Both case-to-case reasoning and strict rules have the same source: me. Why trust one more than the other?

It is not likely that I’ll manage to talk myself into using torture. It’s extremely unlikely that I’ll end up in a situation where I’d even have to consider using torture.

If it’s the case of slaves being as happy as free people, I agree. If it’s the case of slaves being more happy than free people, I don’t.

As it turns out, the act was morally neutral (or damn close; I’m not up on the consequences of a rifle misfiring). Shooter still shouldn’t have done it. Why? Because N times out of N+1, where N is a big number, the act would have turned out to be morally bad. While deciding whether to commit the act, a consequentialist would reach the conclusion that he shouldn’t.

This is what I was driving at, in response to this statement:

Which, coincidentally, doesn’t jibe with this statement:

consequence |ˈkänsikwəns; -ˌkwens|
noun
1 a result or effect of an action or condition

Considering results or effects of an action requires that that action be in the past. It appears that Priceguy is talking about considering likely or probable consequences as well as actual consequences when judging the morality of an act, and that I have wrongheadedly taken him literally. The first quote of his in this post needs to be edited to more accurately reflect his views, and I apologize if I’ve derailed the thread on a pedant’s objection.

I don’t have time to respond to such an altered statement, and I have my doubts as to whether I’d ‘need’ to or not.

No. Like I said, as it turned out, Shooter’s act was (for the purpose of this discussion) morally neutral. When I’m judging the morality of the act afterwards, I’m looking at actual consequences, exactly as I said.

When making a decision whether to commit an act, you look at likely consequences, but the actual consequences aren’t completely known until afterwards, and usually not even then. So there’s no contradiction.

In the case of Shooter, negative consequences are so likely that it’s a no-brainer. Pure dumb luck turned his act neutral instead of bad.

I don’t care about intent. If I save a child’s life that is generally speaking a good act, whether I do it to prevent parental grief or because I believe saving the child will earn me brownie points with the Flying Spaghetti Monster.

The contradiction seems to be that one can act immorally without committing an immoral act.

Also I submit that prescribing ‘should/should not’, in the context of most human interactions, represents a value judgement of ‘moral/immoral’.

IME, by stating that Shooter shouldn’t have acted as he did, you’re pronouncing the act immoral beforehand, then reversing this judgement when his act fails through dumb luck.

So, in your estimation, if a neutral third party witnessed the hypothetical situation, should he report Shooter to the authorities? (Authorities in this case defined as completely neutral personages who investigate and prosecute wrongdoings, i.e. immoral acts.)

One can, but it’s a matter of semantics, something I find quite uninteresting.

Of course. Morality is a value judgment.

I’m saying the act will probably be morally bad and therefore shouldn’t be committed, but as it happened it was morally neutral. Again, semantics.

Yes. He’s fucking dangerous, and he committed a crime.

Well, I would argue that it’s often easier to make correct moral choices in the abstract than when faced with a situation where we’re tempted to do wrong.

For instance, it’s not hard to say “I should be an honest person and tell the truth.” But when I make a mistake at work that I don’t want my boss to find out about, then suddenly I have a strong incentive not to tell the truth. In that case, it’s much easier to grasp at any justification for lying and convince myself that lying is the right thing to do, when really I’m just doing it to keep myself out of trouble. (E.g. “Oh, this mistake probably will resolve itself, and telling my boss now would just stress him out needlessly”, etc.) But if in advance I’ve committed to the rule that I will be honest and tell the truth, then it’s harder for me to weasel out of it with faulty justifications.

Of course, that’s not a perfect example, because I think there are some situations where lying is the right thing to do. But I think you can work those into your rules to some degree, like “I won’t lie unless it’s about a trivial matter and I’m trying to spare someone’s feelings” (a white lie), or “I won’t lie unless there’s some substantial greater good at stake like sparing someone from physical harm” (e.g., seeking help for a suicidal friend, even if you promised not to tell).

I believe you when you say it’s unlikely you would talk yourself into torture, but I don’t personally think this is true of everyone in a position to torture people. In my opinion some people of authority in this country have talked themselves into using torture. (I hesitate to say too much on that point, because I don’t want to hijack the thread with a debate over what constitutes torture.) Of course for most people the choice never comes up, so where they stand on the morality of torture is less important (except perhaps in determining who they elect to offices with the power to authorize torture.)

I wouldn’t go so far as to call the act “morally neutral,” but I agree with the point that we have to consider how the act was likely to turn out, not just how it did turn out.

Yes, if you knew for certain what action would have better consequences, then I wouldn’t make this argument. I’d say “You know what the right thing to do is, so just do it.”

But in practice, you don’t know, and you often have a bias towards doing the “wrong” thing because that is the action that most benefits you. So you can’t even really trust yourself to make a fair assessment.

I’m saying that given human fallibility and susceptibility to temptation, following rules instead of trying to achieve the best consequence on a case-by-case basis will (I suspect) actually achieve better consequences on the whole than simply trying to do the best thing in each case. Yes, in principle this means there are some situations where you’re preferring rules over better consequences, but if the rules are sufficiently nuanced then these situations are few and far between, whereas the situations where without rules you might make the wrong decision because of your personal biases are more common.

In particular: lying. Most things that we consider “bad” are bad because they hurt someone, but I’ve often asked myself “Why is lying bad?” If a man cheats on his wife, but he can avoid her ever finding out then she never suffers any emotional harm. And so long as he doesn’t get an STD she isn’t going to suffer any physical harm. So what’s wrong with cheating? It’s bad because it’s essentially a lie, and lies are bad because they undermine people’s ability to trust each other, to the overall detriment of our society. But again, only if you’re found out. And people who lie typically do so expecting they won’t be found out.

This illustrates the problem with judging each situation on its consequences. The liar thinks he won’t be caught in the lie, and thus no one will be hurt. So morally he thinks he’s OK. But quite often he’s caught after all, and people are hurt. The problem is, people talk themselves into thinking they’re smart enough to get away with lying, when they aren’t. It would be better if they just said “I won’t lie,” (again, with reasonable exceptions like I mention in post #19), and never gave themselves the chance to talk themselves into it.