Utilitarian standard

An act-utilitarian standard would allow one to, say, steal something so long as the net happiness was increased by that act. Can anyone explain how a rule-utilitarian standard might avoid licensing a theft in this way? Does anyone find the rule-utilitarian standard an acceptable guide to moral decision making?

Rule utilitarianism would hold that “well-intentioned theft” would itself lead to less utility in the long run, and is altogether a more rounded and complete formulation than simple act utilitarianism. Rule utilitarianism accepts that we can never really know whether specific illegal acts increase utility overall.

  1. We should all live by a moral code. To choose among possible rules, we should opt for those that tend to lead to greatest utility. This is the rule utilitarian position, as I understand it. Rules that forbid stealing clearly have utility: the alternative, I suppose, would permit physical force and guile to regulate the distribution of resources.

  2. If I understand Professor Smart correctly, utilitarians (who on the whole resemble rule-utilitarians more than act-utilitarians) generally allow an opt-out clause for a given rule under extreme circumstances. Nonetheless, the moral agent is supposed to feel bad about breaking the rule.

  3. But why adopt rule-utilitarianism? It is difficult to trust pure act-utilitarians to keep their promises, since keeping a promise may violate their ethics whenever the (global) gains from cheating momentarily exceed the losses of doing so.

  4. #3 is debatable and debated.

  5. I seem to recall some argument that maintained that act and rule utilitarianism collapse into one another, since one can always imagine a sufficiently detailed set of rules that would cover every conceivable action. But my memory is fuzzy and I think I’ve botched this line of thought somewhat.


(I confess that I don’t plan to spend too much time on this matter, until the American election and post-election is more or less over.)

I don’t have a whole lot to add to this right now (too tired, and my memory of consequentialist theory is a bit hazy), but there are a few points in light of Measure for Measure’s post I’d like to make.

Firstly, it is true to say that utilitarians will most definitely resemble rule-utilitarians, because of the great difficulty of having to accept what we might want to call terrible acts so long as the act’s net value exceeds its net disvalue. I just wanted to make a point about the agent feeling bad about breaking the rule. This gets you much closer to a virtue-ethical standpoint. The feelings or emotions of the agent do not really have much place in a consequentialist theory, which is meant to be much more ‘scientific’: a cost-benefit analysis of value over disvalue. If the agent had to break the rule, they shouldn’t feel bad about it; they simply realized that the rule cannot apply to this particular case, and were consequently ‘forced’ to break it, because following it would create a large amount of disvalue.
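
If it helps to see what I mean by a cost-benefit analysis, here is a toy sketch in Python. Every act, consequence, and number is invented purely for illustration; the point is only the shape of the calculation, not that anyone could actually assign these values:

```python
# Toy act-utilitarian bookkeeping. All names and numbers are invented;
# nobody claims real utilities can be measured like this.

def net_utility(consequences):
    """Sum the signed utility of each consequence of an act."""
    return sum(consequences.values())

steal_bread = {
    "hungry family fed": +10,
    "baker's loss": -3,
    "erosion of trust in property": -2,
}
do_nothing = {
    "family stays hungry": -10,
}

# The act-utilitarian simply performs whichever act scores higher.
acts = {"steal the bread": steal_bread, "do nothing": do_nothing}
best = max(acts, key=lambda a: net_utility(acts[a]))
print(best)  # "steal the bread" (net +5 vs. -10)
```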

Can you really conceive a set of rules sufficiently detailed to cover every possible action? This is actually a rather interesting topic, and gets into moral particularism - but that can get pretty complicated, and the OP sounds more like you’re studying a stage 1 philosophy course, and particularism is really not necessary to get into at that level and would almost certainly just add confusement rather than anything else.

I’ll perhaps try to add some more on this tomorrow as I am studying for my upcoming philosophy exams (not on this topic at all, but still…), and then I can justify surfing SD instead of reading whatever it is I’m supposed to be reading…

Confusement?! It really is time for me to go to bed. Obviously that should read confusion.

I disagree. Even act-utilitarians grant the necessity of using “rules of thumb”, so that they don’t have to make detailed computations for every little decision.

Feeling bad about doing something bad enhances utility to the extent that it discourages bad acts.

----- If the agent had to break the rule, they shouldn’t feel bad about it…

Well, maybe. “Feeling bad” creates disutility. But “feeling bad” may also prevent future disutility. After applying a suitable discount rate, our utilitarian will feel accordingly.
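
For the curious, here’s the kind of arithmetic I have in mind, with an entirely made-up discount rate and made-up utilities:

```python
# Does the immediate disutility of guilt pay for itself by discouraging
# future bad acts? All numbers are assumptions for illustration.

r = 0.05          # assumed annual discount rate
guilt_now = -2.0  # disutility of feeling bad today
prevented = 3.0   # disutility avoided in each future year
years = 5

# Present value of the future disutility that guilt prevents.
present_value = sum(prevented / (1 + r) ** t for t in range(1, years + 1))

# If this sum is positive, our utilitarian endorses the guilt.
print(guilt_now + present_value)  # about +10.99, so guilt earns its keep
```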

--------- Can you really conceive a set of rules sufficiently detailed to cover every possible action?

In theory? Certainly. In practice? No. Hey, I warned you that I’ve botched the argument.

Here’s another lead on what I’m getting at (or not): http://www.utilitarianism.com/ruleutil.htm

OBTW: I assume that if this relates to a homework question then the posters here will be cited appropriately, ya?

Oh, and while we’re on the subject of references, here’s a link to The Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/

I do not at all agree that a consequentialist will take feelings/emotions into account in their theory of right action. When a utilitarian considers a moral quandary, they will focus on which is the right decision, x or y, not on the feelings of the agent. I do suppose, as you are implying, that feelings may have a place in the decision insofar as they could be seen as carrying positive or negative value in the ‘equation’. This is not the same as “the agent is supposed to feel bad about breaking the rule.” Feelings and emotions have no intrinsic value in utilitarianism. They do, however, carry a lot more weight in virtue-ethical theories.

What you are getting at with the reference to rule and act utilitarianism being extensionally equivalent is interesting. It highlights the immense difficulty of creating any sort of code of ethics. There always seem to be exceptions to a rule, which then leads to adding exception clauses to the proposed rules. This is where I start to like virtue ethics, in that it brings practical wisdom into play. Again, I’m straying off utilitarianism here, but just as an example of other possible theories. Kant’s categorical imperative is also a good example of how a code could look: it states that you should act in such a way that your action could become a universal law. I personally like this way of stating a rule of action-guidance. It is sufficiently vague to cover all possible situations, yet sufficiently specific to give action-guidance. (Disclaimer: I really haven’t studied Kant very much at all, only touched on it in other areas, so can’t really say too much about this matter…)

I also want to add something about the example used in the link MfM gave on rule-utilitarianism. I get upset when non-moral issues, such as which side of the road to drive on, are brought into the moral domain as examples. It is not a moral quandary which side of the road to drive on when in the UK. These issues only make things more complicated and, in my opinion, distort the issue. It is part of the reason it becomes so difficult to argue with some people over moral theories: they will drag non-moral examples into play in order to discredit a particular theory. [End of that mini-rant.]

Getting back to the OP slightly, I realize that perhaps we haven’t actually answered your original question in a very clear way.
------ Can anyone explain how a rule-utilitarian standard might avoid licensing a theft in this way?
I’ll try to put it slightly more clearly than we’ve managed in the thread so far, which also means it will be slightly over-simplified. An act-utilitarian would say that there is nothing intrinsically wrong with lying, stealing, and so on; only the consequences of the act are important. Because of our apprehension about saying that theft should become acceptable, the rule-utilitarian would instead adopt a rule stating that theft is wrong, on the grounds that general compliance with such a rule creates more utility overall than permitting theft would. This is the point where Kant’s categorical imperative might come into play.
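
To make the contrast concrete, here is a crude Python sketch with invented utilities. Both procedures do the same maximizing; the difference is what gets scored — individual acts for the act-utilitarian, general compliance with rules for the rule-utilitarian:

```python
# Contrast of decision procedures. Utilities are invented; the point is
# only *what* gets evaluated, not the particular numbers.

def best_option(utilities):
    """Pick the option with the highest utility score."""
    return max(utilities, key=utilities.get)

# Act utilitarianism: score this particular act in isolation.
# One clever theft may well come out ahead...
print(best_option({"steal": +5, "refrain": -1}))  # "steal"

# Rule utilitarianism: score each candidate rule by the utility of
# everyone generally following it, then do what the best rule says.
# ...but a society-wide rule permitting theft scores terribly.
print(best_option({"theft permitted": -100, "theft forbidden": +50}))
```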

Also, to answer your question on whether anyone finds rule-utilitarianism an acceptable guide to moral decision making… personally, I must say that at first I really liked consequentialist theory. It seemed like a simple, straightforward, put-in-the-numbers-and-get-an-answer type of theory to me. However, I started realizing that it really isn’t very simple at all: rules are very difficult to formulate and there are just too many variables. I also don’t like the opposite extreme, where there are no rules and we must look at every situation separately. I do believe that there are rules which can be sufficiently vague, yet action-guiding. This is where I’ve started to like virtue ethics: I like the emphasis on feelings/emotions and character. So, basically, no, I don’t find rule-utilitarianism particularly appealing any more. I think that feelings/emotions are a key part of ethics, and need to be accounted for in more ways than simply as a factor in a utility function.

Finally, I’ll just second MfM’s suggestion on using the Stanford Encyclopedia of Philosophy. An indispensable resource for essay writing and general philosophy enquiries.