A while ago I got into a discussion with someone who claimed that absolute, objective morality exists. Sure seems like there’s plenty of evidence indicating that’s not true (abortion was OK to Catholics a couple centuries ago, today it’s not; slavery was acceptable to many in the southern US, and now it’s not), but he brought up a point that bugs me: if you claim that moral judgements are decided by a culture, or an individual, or in any non-objective manner, then you’re logically forced to accept anyone’s actions as ‘moral’, even if by your own morals they’re wrong, since those actions might be OK to someone else based on their different morals. It seems like he’s got a pretty good point: if I believe morals are decided by individuals, what reason do I have to get upset about someone else’s actions, as long as those actions are considered moral by someone else? Am I logically inconsistent, or delusional about morality, when someone cuts me off on the highway and I gripe about it?
I searched the web for this a bit and found some reasons to feel a little better, but am still curious about what others might think.
Morality is an abstract concept, and it varies over the population and even within a person over time.
Most morals are derivative of axiomatic morals. Like, part of the reason I don’t cheat on my girlfriend is due to morals derived from the rather axiomatic moral of “treat others how you’d like to be treated.”
So absolute, objective morality may indeed exist, but it would probably be a very small set of axioms like the above.
The thing is, most moralistic systems are derived and over-derived from that set, are incredibly complex, and are even dependent on a number of variables. It is incredibly likely - I’d say, it’s a given - that any person’s moral system is in conflict with itself somewhere.
So to answer your question, I’d say you have the right - in a sense - to reject someone else’s morals if they are internally conflicting. And this happens in practice - it’s called hypocrisy, and is obviously frowned upon.
But keep in mind that rejecting a certain morality because it conflicts with someone else’s system doesn’t mean you should reject it for yourself. As the saying goes, even a broken clock is right twice a day; just because the clock is broken doesn’t mean it is always wrong.
Just because I admit people might disagree with me, I am not somehow forced to take on their opinion. If you find something wrong, then you find something wrong. Perhaps someone who disagrees might explain to you why they think differently, but you are no more compelled to think they are right than they are to adopt your view that they are wrong.
Relativism does not mean everyone is equally right.
If by absolute morality, you mean an actual “correct” code of right and wrong that is applicable to all people, then none of your “evidence” disproves this. It can just be interpreted to mean a lot more people were wrong in the past than are wrong today. The fact that the majority may have once supported slavery doesn’t mean slavery was ever right; it just means most people used to be wrong.
I don’t think anyone is seriously denying that our beliefs about right and wrong are influenced by the society in which we live. I doubt the person you were talking to was trying to say that. He probably just meant there’s a “right answer” about morality.
Not bad. Most objectively true morality derives from the Golden Rule, which applies, without much variation, across cultures and time.
Or, as was said in the Flashman series: “It has been said that Apaches do not know right from wrong, and this is why they do what they do. To everyone who says this, I have one answer: wrong an Apache, and see whether they know it.”
To expand on erislover’s point, your friend is tricking you into thinking like an absolutist about your relativist system.
Under an absolutist system, the morality of an action is determined by a map M sending each (morally relevant) action to either moral or immoral. So you might have M(lying to make money) = immoral, while M(lying to save someone’s life) = moral. In addition, an absolutist might follow a rule like
(1) If x is an action, and M(x) = immoral, then get mad when someone performs action x. Otherwise, don’t get mad when someone performs action x.
You, as a relativist, have a map too, but your map needs two arguments. Instead of sending single actions to one of moral or immoral, you send pairs. The first element of your pair is an action, but the second element is an individual (or culture, or whatever, depending on your brand of relativism). So, if we denote your map by M’, you’d have M’(using contraception, Tyrrell McAllister) = moral, while M’(using contraception, The Pope) = immoral. This assignment reflects the fact that in my moral system, using contraception is fine, while in The Pope’s moral system, using contraception is immoral.
Finally, you might have a rule that says something like
(2) If x is an action, and M’(x, jpdoyle) = immoral, then get mad when someone performs action x. If, on the other hand, M’(x, jpdoyle) = moral, then don’t get mad when someone performs action x.
There is no contradiction in following rule (2) and getting mad at the guy who cut you off, even if M’(cutting someone off, guy who cut you off) = moral, that is, even if cutting people off is perfectly alright in that guy’s moral system.
However, your friend is tricking you into believing you should follow some kind of “absolutized” version of rule (2):
(2’) If x is an action, and if for every single person P, you have M’(x, P) = immoral, then get mad when someone performs action x. Otherwise, don’t get mad when someone performs action x.
Then, indeed, you would be violating rule (2’) if you got mad at someone who cut you off when they thought cutting you off was perfectly moral. However, all you need to do is disavow rule (2’) and explicitly state that you follow rule (2). This completely defeats your friend’s criticism.
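The contrast between rule (2) and the “absolutized” rule (2’) can be made concrete with a toy sketch. This is purely illustrative; the dictionary entries, the names, and the verdicts are invented for the example, not drawn from anyone’s actual moral system.

```python
# Toy version of the relativist map M': keys are (action, judge) pairs,
# values are verdicts. All entries here are illustrative assumptions.
M_prime = {
    ("cutting someone off", "jpdoyle"): "immoral",
    ("cutting someone off", "guy who cut you off"): "moral",
}

def rule_2(action, my_name="jpdoyle"):
    """Rule (2): get mad iff the action is immoral in MY OWN system."""
    return M_prime[(action, my_name)] == "immoral"

def rule_2_prime(action):
    """Rule (2'): get mad only if the action is immoral in EVERY system."""
    return all(verdict == "immoral"
               for (act, _), verdict in M_prime.items() if act == action)

# Under rule (2) the relativist gets mad at the other driver; under the
# "absolutized" rule (2') he would not, since the other driver's own
# system calls the action moral.
print(rule_2("cutting someone off"))        # True
print(rule_2_prime("cutting someone off"))  # False
```

The point of the sketch is just that the two rules give different answers on the same input, so following rule (2) while rejecting rule (2’) involves no contradiction.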
I don’t understand. It would seem that there is no significant difference between a relativist and an absolutist under rule (2), except for justification.
In both cases, you get mad if someone does something “immoral”. Only the absolutist attempts to base his or her notion of morality on some system that he or she thinks is objectively true.
The relativist, on the other hand, does not care. To him or her, many systems may be true, or not; but he or she gets mad if his or her system is violated, not caring whether the violator’s system has any validity at all! :eek:
How would this work in practice? It seems to me to be a formula for being awfully self-centred.
It’s not a question of convincing others. The issue is whether or not a relativistic stance on morality gives us the right to say that anything is wrong. Beyond whether or not I should try to convince you my view is superior, should I even believe that your view is wrong? Personally, I agree with jpdoyle’s friend that if things are only right or wrong for me, then the logical next step is that I have no authority to call anything wrong for anyone else, because it could be okay for them. A common or shared morality is the only basis for consistent social interaction. If all morality is relative, is there any action that is so reprehensible I should never perform it? If there is, what is your basis for believing that? It has to be something that is true for all people.
It is going to be hard to outdo Tyrrell McAllister’s great post, so instead of trying I will simply try to add to it.
Let’s look at the specific charge.
Then you would indeed have a universally applicable morality whose sole tenet was, “Whatever people think is right, is right.” If you find that view absurd, I assure you, so does every relativist I have ever met.
Rather, the important thing to note with relativism is that “right”, as in morally proper, is not a term that is universally derived: it is a universal term that is the result of a system of judgment. In order to use the word ‘right’, I must have made a judgment from some moral system. The source of “rightness” is such a system. Thus, unless your system suggests “everyone is right”, then there is no reason at all to conclude it.
I do not feel that everyone is right. I do not feel that I have access to a privileged moral system, though I do think mine is pretty decent, all things considered. Thus, I am perfectly contented with suggesting other people are wrong about some things, even if they disagree, and even if I know that they would call themselves “right”.
The source of a moral standard is then an object of some controversy, but that seems to be a problem different than the one under consideration.
I think you understand me perfectly up to this point (Though whether you consider the difference between rules (1) and (2) to be “significant” is a matter of opinion. I’d say the difference is precisely as significant as the difference between absolutism and relativism, however significant that is.)
More accurately, the relativist does not believe that any such objectively true system exists. Whether or not they care is a matter of individual psychology which, in principle, can be independent of their moral philosophy.
This sentence would only make sense to a relativist if you meant “To him or her, many systems may be adopted by various people or cultures, or not”. But, of course, none of the systems has the status of being the one true, objective, universal morality.
I think most relativists would reply that absolutists do the same thing. The absolutist will get mad if and only if the system to which he or she subscribes is violated. The absolutist just makes the additional error of thinking that his or her system is objectively the right one.
Can you give an example? By which I mean, can you give a concrete example of my rules (1) and (2) above being applied in the same situation to yield different results in such a way that the outcome of rule (2) would be “awfully self-centered” while the outcome of rule (1) would not?
Or do you mean that the rules will always have the same outcome, but in one case this outcome will be self-centered, but in the other case this same outcome will not be self-centered?
If that’s the case, then this becomes an argument over what constitutes being self-centered. I would argue that it is pretty self-centered to believe that one’s own moral system happens to be the one true objective system. But that’s really a side issue. Being self-centered is different from being logically inconsistent. The OP asked only for a defence against the charge of logical inconsistency.
You are asking relativists to defend their moral views with absolute moral principles. But the relativists do not believe that there are any absolute moral principles*. Therefore, no such defense is possible.
Relativists do not see a problem with this. Now, you argue that such an absolutist defense is necessary because “[a] common or shared morality is the only basis for consistent social interaction.” Perhaps, but this is at best an argument for cultural relativism (over individual relativism). It is simply an argument that people who interact should come to some kind of agreement, but it doesn’t argue that they should come to a particular agreement that is privileged above all alternative agreements.
*Note that the assertion that no absolute moral principles exist is not, itself, a moral claim, i.e., a claim that something is morally right or wrong. Rather, it is an ontological claim. Therefore, it is not self-contradictory for a moral relativist to make such an absolute claim.
This isn’t true. I believe in absolute morality, but I do not believe that my moral beliefs are all correct. It’s certainly possible for there to be a universal morality code without me knowing exactly what it is.
That’s true. If I may state what I meant more precisely, the absolutist believes that, among all those systems of which he or she is aware, the one to which he or she subscribes has the highest probability of being objectively the right one.
What is the difference between the universally applicable morality you describe here and one in which the sole tenet was “Each person’s concept of ‘right’ is good enough for him.”?
In other words, I understand that “Whatever people think is right is right” is an absolutist’s way of phrasing the proposition. I’m not really sure, however, that in practice it does not describe relativism pretty well. If you restate it without the reference to absolute right and wrong, for instance “Whatever people think is right, is close enough,” or some other such construction, I fail to see the distinction. It still says that any person should act however he thinks is right, and that you can only judge him within yourself.
Can you explain to me what I am missing? I mean that honestly. I think I am misunderstanding something in the way relativists think.
I have to quibble with this a bit. I suppose it depends on exactly what you think morality is. If you accept that morality is “a system of values and moral principles,” as you described in your mapping analogy above, then I think you can postulate a few such values and moral principles which are universal. Unless, of course, you are willing to drop the context of who is to value such principles.
What I mean is that in your analogy you proposed a set of mapping functions. These would map actions to moral or immoral based on the source of the function. However, the possible sources of such functions are not unlimited. They are, in fact, quite limited in that only people (as far as we know now) are prospective sources. Unless you are willing to suggest that people are actually completely blank slates (biologically as well as morally), then you have to accept that certain characteristics of people might become characteristics of all possible sources for your mapping functions.
Furthermore, if you are willing to accept that morality is a system of such functions rather than simply a set of them, then you might have to accept that there are universal characteristics of any such system as well.
What I am saying is that if you divorce morality from people (attempt to come up with moral rules for when a big rock falls on and breaks a little one, for instance), then I accept that the decision to exclude universal morals is not a moral decision. Otherwise, however, I’m not sure it really is.
This may be true. But I don’t see how this follows from believing in absolute morality. Isn’t it just as possible for an absolutist to believe that the code he follows is not the best or most right one, but that it works for him, is easier, or even that he himself is not worthy of a more correct code? People do things for all sorts of reasons. In my experience the likelihood of a particular action having been weighed against all other possibilities, and the various value judgments based on differing moral systems having been taken into account, is very low indeed.
I don’t think the situation you described is possible. If an absolutist follows a course of action because it is easier (or he is not worthy…) than what he believes to be a universal moral code, then his system is no longer a moral one. To clarify, an absolutist might say that x is always wrong, and he might go ahead and do x because it is easier than not doing x. It is not the case that the absolutist is asserting that doing x is OK; he is just committing what he believes to be a wrong act.
Quite. But then, what does it mean for a person to “subscribe” to a particular moral system? If I believe that going to church regularly is the only way to live a good life, and yet I do not go to church for years at a time, what is my actual belief?
The map M’ I referred to above is supposed to be unique. I was not defining a set of mapping functions. Perhaps it seemed that I was defining one such map M’ for every moral-system-holding entity, but this was not the case. M’ is that map from the cartesian product of {x : x is a morally relevant action} and {P : P is a moral-system-holding entity} to {moral, immoral} which sends (x, P) to moral if and only if, according to the moral system held by P, x is moral*.
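The distinction being drawn here is between a family of per-person maps and one single map on the cartesian product. A minimal sketch, with invented toy data standing in for actual moral systems (the holders and verdicts below are assumptions for illustration, reusing the contraception example from earlier in the thread):

```python
# M' as ONE map on (action, holder) pairs, not a family of per-person maps.
# Each holder's system is modeled here as the set of acts it deems immoral;
# this representation and all the entries are illustrative assumptions.
SYSTEMS = {
    "Tyrrell McAllister": set(),
    "The Pope": {"using contraception"},
}

def M_prime(action, holder):
    """The single map: (action, holder) -> 'moral' or 'immoral'."""
    return "immoral" if action in SYSTEMS[holder] else "moral"

print(M_prime("using contraception", "The Pope"))            # immoral
print(M_prime("using contraception", "Tyrrell McAllister"))  # moral
```

Note that the per-person "systems" appear only as data that the one function consults, which is exactly the sense in which M’ is unique rather than a set of functions.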
So, in light of this, I’m not sure what to make of your statement that “the possible sources of such functions are not unlimited.”
Leaving that aside, you seem to be saying that since there may be moral judgments on which everyone agrees without exception, the decision to exclude universal morals is a moral decision. Is that an accurate paraphrase of your point?
Of course a person takes many considerations into account other than moral considerations when they are deciding which course of action to take. So perhaps we should make a distinction between believing in a certain moral code and following that code. After all, people perform acts which they believe are immoral all the time. But I don’t think anyone is going to believe in code X if they assign a higher probability of correctness to code Y. That’s all I was saying.
*I’m aware that this is idealizing the situation somewhat. First, an individual may believe several mutually contradictory moral systems without knowing it. Second, a person might change his or her moral system over time. Third, most of us do not have a moral system so robust that it gives a definitive answer of moral or immoral to every morally relevant act. All these issues, and probably others, are being swept under the rug by my definition of M’ above. But as far as I can see, they are technical issues that can be addressed with minor tweaks.