Are morals entirely arbitrary...

and simply a creation of a culture?
In other words, can you meaningfully argue that one moral system is better than another? Or do you have to make a completely arbitrary choice of one moral system, from which you can only derive further laws, while it remains impossible to prove that your base is right (you can only assume it is)?

Yes, morals are arbitrary, in that they are contingent on human will, and nothing else.

In a vacuum, no, you cannot, because the criteria by which one judges a moral system are part of that moral system; there is no external, objective test.

However, one may compare moral systems by testing the outcomes each is likely to produce; though grading the worth of those outcomes, once again, requires a moral stance. For instance, I could reasonably state that a moral system based on the value and rights of all individual humans would produce a more peaceful, just outcome than a system based on the idea that only a select group are “real” humans, and all others are lesser beings. Stating that the first system is better, though, requires me to accept that peace and justice are worthwhile aims for a moral system, as opposed to the rule of a few, and that in itself is a moral position.

Yes, it is impossible to objectively prove that the base is right.

All morals designed to further a desired outcome are arbitrary, because all desired outcomes are arbitrary.

I think there’s been lots of research showing that morals, or some portion of our moral framework, are built into our psyche. I’ve read books by Steven Pinker (maybe The Blank Slate?) that delve into this subject. Also, I believe I’ve seen studies that show morals at work in chimp and bonobo societies.

So, I would say that they are not arbitrary. Also, there are some basic morals that seem to work across cultures, and this points to some built-in morality. For example: don’t kill people from your own tribe, incest is bad (mmkay?), and there are probably others I can’t think of right now.

ETA: There is some food for thought here; it doesn’t seem to be a settled issue by any means.

But does that mean that any moral discussions (for example, about abortion or the death penalty or whatever…) are in the end pretty much meaningless? In other words, is it simply a matter of choice, not reason?

The more or less universal morals probably got selected for. Clearly, a tribe where most of the members have no moral qualms about killing other tribe members is not going to last for long. And just as clearly, tribes with no moral qualms about killing others (outside their tribe) have lasted for a very long time.
That some sociopaths are born without even this most basic of morals shows that it must have some genetic component.
But lots of morals can get set in one environment and carried over to another, where they seem arbitrary.

Not usually, because most people can find a common “baseline” of morality to start from, something like “hurting people is bad, helping people is good.” If two people agree on that, then there’s a lot that can be reasonably and logically discussed.

Only if you assume that meaning must be grounded in some extra-human entity.

However, if you accept that meaning itself is a human construct, then moral discussions are meaningful entirely to the extent that we make them so.

It’s like how money works. There is no inherent worth to a dollar bill. It only has worth because we choose to treat it so.

Let’s compare a moral system to a set of axioms. In mathematics you can discuss (analyze) them and formulate laws; that’s what I mean by ‘meaningful’.
But if two people have different moral systems (different sets of axioms), they can’t arrive at the same laws, and discussion between them is basically senseless…
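
To make the axiom analogy concrete, here is a standard example from geometry (my illustration, not anything claimed in the thread): two axiom sets that differ in a single axiom prove mutually incompatible theorems, so an argument conducted across them can never settle anything.

```latex
% Absolute geometry plus the parallel postulate (P), versus absolute
% geometry plus the hyperbolic alternative (H): each base proves a
% theorem that contradicts the other's.
\mathrm{Absolute} \cup \{P\} \vdash \text{``the angles of a triangle sum to } 180^{\circ}\text{''}
\mathrm{Absolute} \cup \{H\} \vdash \text{``the angles of a triangle sum to less than } 180^{\circ}\text{''}
```

Neither side can refute the other from within its own system; the disagreement lives entirely in the choice of axioms.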

Only if they are truly incompatible systems, which is possible but unlikely. More often, you’ll have basically a Venn diagram, with considerable overlap. Then you get into what iiandyiiii described, working from the baseline to the nuanced, practical applications.


“It’s wrong to hurt people.” “Agreed!”
“Except for self-defense.” “Agreed!”
“But the threat must be immediate, and the response proportional to it.” “No, once you’ve threatened someone, your rights are negated, all bets are off.”

And so on.

As Human Action says, if two people disagree about even the most fundamental moral questions, then, yes, discussion is senseless:

“I think babies should be protected!”
“I think babies are an excellent source of protein!”

But most disagreements about morality are not so extreme. Most of the time people agree on a few basic principles – “suffering is bad” for example. In which case you can reasonably argue about which rules do the best job of accommodating those principles.

You can also go the other way; I’ve often agreed with a person’s conclusions on matters of morality, despite strongly disagreeing with the fundamental reasoning that got them there.

Entirely arbitrary, no. As said, there appear to be certain built-in moral impulses in humanity. Also, we aren’t bodiless blank slates; we have a biology and built-in psychology that we all share. That serves as a non-arbitrary basis for moral reasoning; it’s not “objective” the way that math or physics is, but it’s not something we made up and can change at whim, either. These can all serve as the axioms for our moral systems.

And added to that is cause and effect. If we want a particular result, we need to take the actions that will cause that result. If our innate nature means that we don’t want to be robbed or enslaved or raped or murdered, then some moral codes are going to be much better at achieving those goals than others. That isn’t arbitrary either.

One of the two examples used by the poster you’re responding to, namely abortion, is that extreme, though: people don’t even agree about what an embryo/fetus is. If you think it’s akin to a tumor, there can’t be any meaningful debate with someone who thinks it’s akin to a human.

Another similar and common example would be animal rights.

Well, there are basic moral traits that have evolved along with the social animal that is Homo sapiens, and that might imply that certain core moralities are helpful to the survival of a social species, but that’s about all they would imply.

That implies that there are, at a minimum, moral axioms that psychologically healthy people have. And, societal norms basically stem from those.

I’ve wondered, if a sentient species evolved from some other type of animal, say a lion or a praying mantis, whether it would be considered moral for a new husband to kill off the remaining offspring from a previous marriage, or deviant to mate with someone and then not kill them.

Even the concept of “psychologically healthy” is itself a value judgement. Nothing in nature can validate any moral system, since nature has no goal or inherent value system. Therefore, no fact of psychology or biology can ever directly affect a moral discussion, although a specific fact may be relevant to it.

Sure they can. They can demonstrate that a particular moral principle does or does not work as advertised. That does directly affect any moral system or discussion that has any concern for consequences. They can also undercut any moral system that’s based on false claims.


Consequential morality is based on judging actions by their effects. It is not purely arbitrary, even if the axioms we use to define desirable effects may seem so.

As for what society considers desirable effects, there are constructions of moral priorities that are more sustainable and tend to survive, and ones that are less sustainable and tend to fall down.

The pretense at least of monogamy, respect for lawful authority, education of children as a virtue–these are more sustainable positive morals for the most part.
Avoiding arbitrary murder, for the most obvious example, is a mostly sustainable negative moral.

Socially transmitted Morality tends to follow that which reinforces the society’s growth and existence.

Then of course there’s Objective Good in a philosophical sense, which is less about what promotes your society than what ought to be. Just because something is a workable dominant moral, and thus Moral in a cultural-relativist sense, doesn’t mean it’s objectively good.

Slavery was Moral, and reinforced its society’s success. It came to be seen as socially evil, and became Immoral. Is it objectively evil? Arguably. And that argument is made from something other than the received standards of social tradition.

I think it’s fair to say that the Good is perceivable, with difficulty.

All morals are arbitrary, yes.