You make two points here, but one does not follow from the other. It is true that it is possible to use statistics in misleading ways to support bad conclusions. It is NOT true, however, that statistical analysis is therefore a bad way to make decisions. In fact, rigorous statistical analysis is just about the ONLY way to objectively analyse a decision. Note that this doesn’t mean you just look at an average and call it a day - rigorous statistical analysis includes looking at things like standard deviations, sample size and quality (i.e. how independent the variables actually are), and so on. There’s a lot that goes into it, but any other form of decision making is little better than guesswork.
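To make the “don’t stop at the average” point concrete, here’s a toy sketch (made-up numbers, Python standard library only; nothing here comes from a real dataset): two options with identical means that a bare average-comparison would call equivalent, but that look very different once spread and sample size enter the picture.

```python
import math
import statistics

# Hypothetical outcome data for two options. Both have a mean of exactly 10,
# but very different spreads and sample sizes.
option_a = [10, 11, 9, 10, 12, 8, 10, 11, 9, 10]   # tight spread, n = 10
option_b = [30, -5, 25, -10, 10]                   # wild spread, n = 5

for name, data in (("A", option_a), ("B", option_b)):
    mean = statistics.mean(data)
    sd = statistics.stdev(data)        # sample standard deviation
    se = sd / math.sqrt(len(data))     # standard error of the mean
    print(f"Option {name}: n={len(data)}  mean={mean:.1f}  sd={sd:.1f}  se={se:.1f}")
```

Both means print as 10.0, but option B’s standard error is an order of magnitude larger, so “they’re equally good” is a far shakier conclusion for B. And even this sketch still assumes the observations are independent, which is exactly the kind of thing a rigorous analysis has to check.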
This board has lost too many of its resident philosophers. Here’s my sophomore grade B recall:
Consequentialism is a highly influential moral framework, but it has multiple flaws. So people turn to deontological systems (e.g. Kantian ones) or to virtue ethics. They are philosophically flawed too, though, generally speaking, not as much as certain systems based on religious dogma. (Aspects of Buddhism are a possible exception.)
Act-utilitarianism was shot down during the 1800s in favor of rule-utilitarianism, most vividly because act-utilitarianism doesn’t permit you to make commitments for the future. So you use consequentialist criteria to set the rules of society, with the understanding that you’ll only go all act-utilitarian if things get really hairy. A man’s got to know his limitations. Cite: J. J. C. Smart: https://www.utilitarianism.com/utilitarianism-for-and-against.pdf See also Eastwood (1973).
One of consequentialism’s many faults is the boundless moral demand it places on the individual: whatever a middle-class person donates to the poor, utilitarianism in practice demands that he donate more. There’s a need for a more lenient framework. Rawls (1971) provides one, and even that is far more demanding than any real-life system delivers.
Another fault is examined on page 62 of the pdf:
It is not difficult to show that utilitarianism could, in certain exceptional circumstances, have some very horrible consequences. In a very lucid and concise discussion note, H. J. McCloskey has considered such a case. Suppose that the sheriff of a small town can prevent serious riots (in which hundreds of people will be killed) only by ‘framing’ and executing (as a scapegoat) an innocent man. In actual cases of this sort the utilitarian will usually be able to agree with our normal moral feelings about such matters. He will be able to point out that there would be some possibility of the sheriff’s dishonesty being found out, with consequent weakening of confidence and respect for law and order in the community, the consequences of which would be far worse even than the painful deaths of hundreds of citizens. But as McCloskey is ready to point out, the case can be presented in such a way that these objections do not apply… Someone like McCloskey can always strengthen his story to the point that we would just have to admit that if utilitarianism is correct, then the sheriff must frame the innocent man. (McCloskey also has cogently argued that similar objectionable consequences are also implied by rule-utilitarianism. That is, an unjust system of punishment might be more useful than a just one. Hence even if rule-utilitarianism can clearly be distinguished from act utilitarianism, a utilitarian will not be able to avoid offensive consequences of his theory by retreating from the ‘act’ form to the ‘rule’ form.)
The point isn’t that there’s a slam-dunk case against framing an innocent man. It’s that utilitarianism has very little to say about the tradeoffs between bad outcomes and human rights, or even personal integrity. Making tradeoffs is essential to moral practice. But limiting your moral imagination to a single criterion - happiness, in the case of utilitarianism - necessarily has a distorting effect.
So what do I argue for? Consequentialism, notwithstanding the above, with a particular fondness for negative utilitarianism, the minimization of suffering. Popper (1945) argues in a footnote that maximizing happiness is all well and good, but suffering is potentially more measurable and somehow more compelling as a metric.
Late-late response, but this is spot-on. Most conflicts are ones where both sides know perfectly well what’s at stake; it’s just that both sides want to out-muscle the other. Sure, there are times when one side denies what’s happening (climate change), but usually it’s a plain old tug-of-war and there is no point in giving anything, because giving = loss and taking = gain.
I don’t follow. If deprivation of human rights (i.e. summary execution) is suffering, your flavor of utilitarianism should require a comparison between the deprivation of one individual’s human rights and the prevention of a riot. This is a variation of the classic trolley problem - one life versus one hundred. The trade-off is direct.
What I don’t like is that the sheriff doesn’t care about abuse of his office simply because he can get away with it. I pointed this out upthread - with consequentialism, if you remove the consequence, there is no longer a distinction between right and wrong. I find that unacceptable.
I would distinguish between direct and indirect consequences. If the sheriff were caught executing an innocent man, that’s a direct consequence of his action. Nobody else has responsibility. But if the sheriff does not execute the innocent man, and a riot ensues, the riot is only his indirect responsibility; other individuals are directly responsible for rioting. People are pretty good at predicting direct consequences, but not so good at predicting indirect consequences.
Mercy killings and suicide, under dire enough circumstances, come to mind as justifiable acts under utilitarianism, but not deontology. You don’t even need to imagine a Donner party situation; in a moment of weakness, an individual can simply fail to consider how badly other people will be hurt when they find the bodies.
~Max
That’s not true, because there ARE consequences. Not to the sheriff, but if the only consequences you concern yourself with are your own, you are not moral.
If there were no consequences to ANYONE involved - say, the sheriff could appease the bloodthirsty crowd by pretending to punish someone but not actually doing so, and he had good reason to believe that rather than encouraging the crowd towards further bloodthirstiness it would cause them to pause and reflect on their actions - then there would truly be no negative consequences to the action.
Of course, in the real world, “will this encourage negative behavior in the future” is one of the consequences we have to look at, so it’s unlikely that such a situation would ever arise. But in a hypothetical world?
Not for abuse of his office, there aren’t. That’s written into the hypothetical, so it can be a direct trade-off between framing & killing an innocent man and a violent riot where hundreds die.
~Max
That sounds like a problem with deontology, though.
The innocent man’s death is a consequence. Again, consequentialism is concerned with outcomes for everyone, not just the individual making the decision.
My point is that the fact that the sheriff is abusing his office is of no consequence in the moral analysis. That should be the determinative factor, for me personally. It’s not about individual versus collective or overall consequence. The author recognized that if the sheriff were caught, it could undermine the system of justice. Then they explicitly guarded against that by saying it wouldn’t happen.
~Max
The moral analysis requires that you take all consequences into account, such as the innocent person being killed, the damage to the legal system, etc.
If abuse of power is bad, explain why. Does it erode faith in the institution? Does it make it more likely that this sheriff will abuse power in the future now that the mental barriers against doing so are weakened? Does it lead to the death of an innocent?
If you can’t come up with a reason why abuse of power is bad, maybe it isn’t. Or maybe you just need to think harder.
And the hypothetical is specifically written so that there is no possibility of damage to the legal system.
I am not a consequentialist. Abuse of office is wrong because it violates a promise. It is generally wrong to violate promises. It’s irresponsible. The power of the office of sheriff is conditioned on a promise to, among other things, not intentionally execute innocent men. If it is understood that the sheriff is free to execute innocent men when he personally thinks it is justified, there wouldn’t be a sheriff in the first place.
~Max
I disagree; violating a promise is wrong when it causes harm to do so. It can also be right, when keeping a promise would cause harm.
Example: if Private Conscriptovich decides that storming Bakhmut in Putin’s name for the umpteenth time is foolish and deserts, he has violated his oath to the Russian army. Did he do anything wrong?
Of course. He would have to decide whether it’s more wrong to break the promise than it is to keep it and perpetrate violence.
But that’s not a good analogy, is it? In the sheriff example, there’s no consequence for breaking the promise. The sheriff has to decide whether to break the promise and perpetrate violence, or to keep it and fail to prevent others from perpetrating violence.
~Max
Right, that’s why I only lean towards deontology; I don’t embrace it to the exclusion of other philosophies. I was trying to defend it from my impure layman’s perspective, using everyday speech, not to make a strict logical argument against a single, well-defined moral system the way a philosophy professor would. It’s more a defense of deontology having a place in your moral thinking than of it being the only moral system in play.
My point in that quote was that, while statistics (being a major part of science and our understanding of the world) can and should inform our decisions, in practice everyday people making decisions in the moment usually work from a set of simple rules rather than doing complex studies and calculations themselves.
Ideally those rules are derived at least in part from scientifically informed statistical analysis of reality and natural phenomena. But the messy science is usually distilled down into something closer to clear rules for practical daily usage by non-scientists. And scientists too, most likely, at least outside of the laboratory. But I’m not one so I can’t say for sure.
You seem to have a decent grasp of the general issues. But you could usefully substitute in “one individual’s suffering” above, to sharpen the issues. Williams’ problem with utilitarianism lies in its narrow scope. But he also thinks that consequentialism’s scope in general is too narrow.
I attempted above to summarize Williams’ view from memory. For a sharper take, start with the last paragraph on pp. 71-72 of the upthread PDF, which introduces two new thought experiments. Then read pp. 85-86 for what I hope is the core of his argument. Excerpt:
Nor is it just a question of the rightness or obviousness of these answers. It is also a question of what sort of considerations come into finding the answer. A feature of utilitarianism is that it cuts out a kind of consideration… a consideration involving the idea, as we might first and very simply put it, that each of us is specially responsible for what he does, rather than for what other people do.
Underline and bolding added, italics in original. Then buy the 1973 book: it’s still in print. Or to save time, click the next link for a better summary than mine.
I have struggled with Williams’ argument as well and maintain my consequentialist leanings. But the OP asked whether there was any rational basis for deontological alternatives. I was going to say that there is, if you consider acclaimed philosopher Bernard Williams to be sufficiently rational, which I do. But I see from here that Williams rejects Kantian deontology as well, in favor of what became virtue ethics. Arg. Nonetheless, I doubt whether Williams would have gone so far as to say that Kantianism lacks any rational basis. That would be a hard extreme to defend.
And how would he evaluate the “wrongness” of the two options, other than by comparing outcomes?
Then use another example: should an SS officer break his promise to serve the Third Reich and obey Hitler by saving some Jewish people?
If you promise to do something immoral, then keeping that promise is immoral. The promise itself isn’t really relevant here.
He would wing it.
If you take the pure act-utilitarian position, then keeping a promise has zero intrinsic value, and if you make an agreement it counts for nothing if circumstances change. Rule utilitarians back off of that by noting that a society that keeps agreements has higher utility than ones that don’t and furthermore encourages its citizenry to internalize that ethic. Based on post 34, I’m wondering whether DrCube is in fact a rule-utilitarian.
(20th century philosophers noted that the line between act and rule utilitarianism is blurrier than it appears at first glance.)
Utilitarianism For and Against (1973) didn’t address the trolley problem as far as I can tell. Pure consequentialism denies that there’s a problem at all: if there’s a train headed for 3 people tied to the tracks and you are on a bridge standing next to a fat guy whose weight could stop the train, give him a push. Because 1 death < 3 deaths. There’s no controversy: kill the guy. Easy peasy.
But if you admit that there’s something wrong with killing somebody else to save 3 people, in other words something that must be weighed as part of your decision, then you are not a pure consequentialist.
The problem with the trolley problem (one of many problems, but I think the central one) is it attempts to take morality out of reality. It also ignores the process by which decisions are made by encouraging people to “solve” the problem in a vacuum.
But the closer you get to injecting real world nuance into something like the trolley problem, the less of a “gotcha” it becomes for utilitarianism.
False. A consequentialist—a rule utilitarian if you prefer—can easily justify declining to kill 1 to save 3 if, for example, extrapolating that kind of moral decision making process for application to humanity as a whole would be bad for human well-being. Just because you look at the implications beyond the event in a vacuum doesn’t mean you’re not a pure consequentialist.
This is a good summation of the problem I have with the trolley problem thought experiment.
In the world of the trolley problem, in a vacuum, you probably SHOULD choose to throw the lever and kill the one guy to stop five others from dying. Choosing not to do that places your own sense of morality above five real lives, resulting in a net increase of four deaths. In other words, in the strict hypothetical of the trolley problem, I think that choosing to do nothing is equivalent to murdering four people, morally speaking.
But real life is NOT a vacuum. In the real-life situations that the trolley problem tries to model, all the certainty that makes the sacrifice easily weighed and analyzed evaporates.
Agreed. JS Mill’s writing on utilitarianism is replete with caveats that enable him to steer us back to Victorian morality. Utilitarians get an opt-out by positing (but not precisely measuring) indirect effects that work them out of certain moral quandaries. You can even do this within pure act-utilitarianism.
But there’s a whole category of considerations that is left out. Among these is, according to B. Williams, “the idea, as we might first and very simply put it, that each of us is specially responsible for what he does, rather than for what other people do.” To that, utilitarians either assign a weight of zero or take shelter in indirect effects. In other words, such ideas of personal integrity become purely extrinsic goods (all of their value being indirect) rather than intrinsic ones.
Reasonable people hold different views on this. I lean towards utilitarianism: if my actions make someone else’s bad behavior more likely, that’s partly on me, or so I believe. Others vehemently disagree with this take of mine. That’s ok. It’s just that I get nervous when zero weight is attached to certain intrinsic qualities (like keeping your agreements) that I deem not unreasonable, even if some weight (even substantial weight) is permitted due to indirect effects. And yes, JS Mill’s Utilitarianism does discuss breach of promise; he was aware of the issue. I’ll quote:
The important rank, among human evils and wrongs, of the disappointment of expectation, is shown in the fact that it constitutes the principal criminality of two such highly immoral acts as a breach of friendship and a breach of promise. Few hurts which human beings can sustain are greater, and none wound more, than when that on which they habitually and with full assurance relied, fails them in the hour of need; and few wrongs are greater than this mere withholding of good; none excite more resentment, either in the person suffering, or in a sympathizing spectator.
Mill puts a lot of stock in moral sentiments and moral intuitions. He thinks that utilitarianism properly rationalizes and codifies these sentiments, but I don’t think he goes so far as to say that utilitarianism replaces them entirely. I think that’s healthy. I’m more optimistic about the scope for moral systems than Bernard Williams was, but I do believe they should be applied ecumenically, at least until all conceptual holes are plugged.
That said, while trolley problems exist in the real world, most moral challenges don’t involve such quandaries. I recommend this discussion by David Roberts of one real-life application of utilitarianism. In practice, the remoteness and uncertainty of distant benefits justify deweighting them in favor of moral rules of thumb, which necessarily have a deontological flavor:
That’s the part I found frustrating, because to me the problem with SBF’s calculations were less moral than epistemic. It’s less about the substance of this particular equation than the general practice of trying to reason through large, complex systems over long time periods.
Two things:
- Humans have radically limited information & have consistently proven AWFUL at predicting the future.
- Humans are very, very, very, very good at bullshitting themselves, ie, “motivated reasoning” that leads to conclusions congenial to one’s priors.
It follows that the bigger & more complex the systems you’re reasoning about, and the farther out into the future your reasoning extends, the more likely you are to be wrong, & not just wrong, but wrong in ways that flatter your priors & identity.
Ergo:
when your reasoning has put you in a Bahamas villa, out of your head on drugs, scamming people out of money … you should take it as a near-certainty that you have bullshit yourself somewhere along the way. “Rationality demands I indulge my basest desires.” Probably not!
https://twitter.com/drvolts/status/1594462688096419841?s=20
I add emphasis:
In other words, thanks to our epistemic limitations, a “dumb” heuristic that just says “when in doubt, be decent” will probably generate more long-term utility than a bunch of fancy math-like expected-value calculations. We want resilient ethics, not optimized ethics.
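To put a toy number on the compounding-error point in the quoted thread (these figures are mine, not Roberts’): if a plan’s payoff depends on a chain of n predictions, each of which is right with probability p, the chance the whole chain holds is p^n, and the naive expected value shrinks accordingly.

```python
# Toy illustration (hypothetical numbers): long speculative chains of reasoning
# lose expected value fast once per-step uncertainty compounds, which is why a
# short, boring "be decent" path can come out ahead.
def chain_ev(payoff: float, p_per_step: float, n_steps: int) -> float:
    """Expected value of a payoff that only arrives if every step in the chain goes right."""
    return payoff * (p_per_step ** n_steps)

speculative = chain_ev(payoff=1_000_000, p_per_step=0.90, n_steps=20)  # about 121,600
boring = chain_ev(payoff=200_000, p_per_step=0.99, n_steps=2)          # about 196,000

print(f"long speculative chain: {speculative:>10,.0f}")
print(f"short reliable path:    {boring:>10,.0f}")
```

And the motivated-reasoning point from the thread makes it worse: the 0.90-per-step figure is itself usually an overestimate supplied by the person who wants the million.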
The trolley problem is interesting, but it’s also tangential to the bulk of our moral challenges.