Are morals entirely arbitrary...

For one thing, it’s questionable that slavery ever was good for society, as opposed to being good for the slaveowning minority; “society” includes the slaves and non-slaveholders.

And for another, IMHO slavery being acceptable was less a matter of differing moral standards than it was about rampant hypocrisy by those who practiced it. How many people had no moral problems with slavery when they were the slaves? A moral principle that only applies when it favors you isn’t a moral principle at all; it’s an excuse.

In my opinion a great deal of the moral progress of the last few centuries hasn’t been so much a matter of people becoming more enlightened, as it has been the gradual elimination of that kind of hypocrisy.

I strongly disagree with this, for a couple of reasons. First, while it is true that nature has no goals, since nature is just all the things in the world as a set, that does not mean that individual things within nature do not have goals.

If you do not believe me, go find a hungry tiger and present yourself to him. The tiger will certainly have a goal, relieving his hunger, and will act upon that goal.

If you use the definition of moral as:

Then the tiger eating a meal when he is hungry is certainly moral, from the tiger’s standpoint anyway. This is because relieving the hunger is right on a physical level. And running away from the tiger is moral from the human’s perspective.

For humans, things get a bit more complicated as we obviously reason at a much higher level than animals*. Unlike animals we can, and often do, put our physical needs below our moral values.

Now, the moral values we choose are, or at least should be, based upon the way the world works. However, most moral systems are based upon a mystical foundation.

A rational reason not to run around killing people is pretty easy to get to, if you look at the world and start with the assumption that prolonging one’s own life and procreating is a desired goal. Same with a whole lot of our accepted morals these days.

Certain other moral standards throughout the years have been injected for non-rational reasons. For example, ‘thou shalt not boil a kid in its mother’s milk’ was probably a reaction to another religion which did use that practice. Someone didn’t like it (Ewww, Mom. Not kid boiled in its mother’s milk AGAIN!) and tossed it in.

The fact that some morals have unsound reasons for coming into existence does not mean that all morals are meaningless.

Additionally, the results of a moral rule are important. For example, if we decided that it was moral to kill all children at the age of 6, well, in a short time the results of our choice would be apparent.

Slee

*Some of us, anyway

But much of the slavery was making slaves of outsiders. When this is compared against just killing them, wouldn’t you agree that it is more moral? I’d say that the reduction of slavery has come from no longer considering those in other tribes, or those who look different from you, to be “others”.
The others might be of a different sex also, or sometimes of a different social class.

We can judge moral systems based on their internal consistency. I may not know exactly what a moral system should look like, but I would say it’s not a system at all if it doesn’t demonstrate reasonable consistency.

Of course, any moral system in use is imperfect and will have inconsistent parts, so this is not a binary metric. Just as obviously, it is possible to build a rather unpleasant moral system which is nevertheless consistent, so consistency should not be the only criterion.

As far as I’m concerned, morality is much like mathematics (like others have said). You start with a set of axioms that are arbitrary, but perhaps map to common-sense notions. For instance, I consider the golden rule a necessary axiom, since I think any moral system must be symmetric. We can then develop moral theorems based on those axioms.

That said, day-to-day moral decisions usually happen too quickly to apply this sort of reasoning. So any useful system must be simple enough that we can make robust snap judgments that are reasonable, if not optimal.

Not really, no. As a slave the suffering goes on and on and on. Kinder to just cut their throats.

This does not logically follow.

A moral system says that something is true or right, not that it is necessarily useful, efficient, or effective. If you point out that a moral principle has consequences the other person may not like, you have not in the least shown that the moral principle is wrong. You have only shown that we may not like carrying it out.

That is an inherently illogical statement. You may value consistency, but that only rationally applies if the moral system itself states that internal consistency is a value. Indeed, many moral systems accept and even state outright that a certain level of logical inconsistency is necessary, as the full complexities of morals cannot be grasped by limited, fallible human beings. Aside from which, the consistency of a moral system may not be specifically comprehensible if you do not understand or accept its moral principles, so your external judgment may not even be logically valid.

Incorrect. You are confusing the desire to do something with the cognitive assent and approval to do something. They have no necessary connection, and often directly conflict. The tiger, in any case, does not view his actions as right. In as much as we can determine, he does not view his actions at all. He simply does them. He is hungry, so he hunts.

You are confusing efficient action with moral value. A moral man can find many rational reasons to act as he does. An immoral man can find just as many.

For example, let us assume that the supposed goal of biological life is successful procreation. There are many ways to accomplish this, but your own statements have, as their consequence, that a moral man could destroy the lives of billions, provided that he was able to successfully spread his genes far and wide (though presumably not to the level of inbreeding). He might, for example, develop a lethal plague and exempt specific women and just enough men to create a stable genetic population with exactly the traits he happens to like - why not?

Your felt emotional reaction may be, “That’s monstrous!” Indeed it might be - I would certainly say so. But there is nothing biologically irrational about doing so.

The problem is that you view consequences through a specific moral lens; they are otherwise completely meaningless.

Except that’s exactly how many moral systems justify their principles, on being factually true or on their alleged results. Just because not all moral systems do so doesn’t mean that those that do don’t exist.

Yes, there are certain universal morals that seem to have deep evolutionary roots, and some that we can even see in our other primate relatives. Others, such as just how much leg a woman can show in public, do seem to be largely arbitrary although the idea that there is some limit does seem to be universal.

But basic morals such as: don’t kill, don’t steal, don’t be greedy, don’t boast, respect the elders, do seem to be universal.

Sure, among humans. Science fiction has done some interesting things changing those axioms and trying to figure out where a society may end up. But, closer to home, an advanced society evolved from lions instead of primates may not have the “respect your elders” piece, and one based on ants would probably have no problem with slavery (for the worker ants, anyway).

The “monstrous” example above, to me, is more evidence of some built in morality – I think to myself, even if that’s the most logical or most efficient way to pass along my genes or build a stable society, or whatever, it’s just awful to contemplate – why? Where does that come from?

Those thought experiments where you pull a lever to move a train from one track to another and save many people while killing one person also bring out some fundamental in-built morality.

Anyway, as I said, I don’t think this is in any way settled science yet, but my opinion is that we’ll find that lots of what we consider to be moral is in our genes.

I kinda fail to see how it matters what the specific prescription is on that moral lens, e’en bifocal; if a society kills all its children at age 6, that society will render itself extinct. That’s not a moral issue but a mathematical one.

Ayn Rand’s “Objectivism” claims that the basis for judging right and wrong is objective. I think she’s nuts, though she does make some good arguments.

But at the bottom, a moral code is generally based on a set of values, and values are simply not objective.

Some moral codes are fundamental: do this, don’t do that. Others are outcome-driven (utilitarianism is the ultimate in this regard). Still others are principle-based. The latter two are utterly dependent on values. The first is generally based on religion.

Existentialism argues that even if there were an omnipotent creator, we still must choose our own morals. We can choose to adopt God’s, but that’s a choice. Or, we can reject God’s. According to Existentialism, we’re not acting in good faith if we fail to think things out for ourselves.

It’s important to distinguish between morals (what we decide is right versus wrong) and our “moral sense”, which evolved. Our moral sense may tell us that it’s OK to do something that our chosen moral system does not allow. Pinker makes this very clear.

Right. Some kinds of moral systems can’t be totally arbitrary.

He said “we CAN judge”, not “we MUST judge …”. If we judge a system as inconsistent, we can call it inconsistent. That is an objective characteristic (of a sufficiently clearly stated system). It doesn’t make the ethical system right or wrong, but it does make it inconsistent (which makes it impossible to apply, which leads to serious problems).

But I agree with the careful distinctions in your post. Without clear distinctions like this, we end up arguing in circles.

On the other hand, with a few assumptions we can arrive at some fairly “universal” outcomes. I put universal in quotes, because they only apply to those who agree with the assumptions.

First, we agree that we’re being Humanists: what matters is human values.
Second, we agree that human life is precious, and that human suffering is to be avoided or minimized.

After that, things can go pretty wildly in different directions. For example, libertarians would add that the freedom to make choices for oneself trumps or at least counterbalances the second point above. It’s a reasonable stance, too: should we sacrifice ourselves so that others (who have done nothing to help themselves) don’t suffer? It’s possible that this point could be argued as a consequence of assumption two, though. But social liberals and especially communists reject this.

Even with fundamental disagreements like that, we can still come up with a lot of common ground, such as rape is bad, murder is bad, etc.

However, those of us who accept these principles find ourselves in disagreement with those who would cause suffering to others in the name of religion or culture (as they value the religious or cultural principle more than “human” values). And so we disagree over things like ritual female circumcision.

Neither side can claim to be objectively correct. The Humanist viewpoint requires fewer and more commonly held assumptions, but that does not make it “right”.

This is called “moral relativism”. There are many who strongly argue against moral relativism (since they feel it can be used to rationalize any kind of atrocity). They have good points, but they’re making assumptions, with which we may or may not agree. If you can’t agree on the assumptions, you can’t really argue the merits.

Right, but we’re talking about moral issues.

Ayn Rand agrees with you, though. Objectivism claims that survival is the fundamental value. Admittedly, without survival, there are no values left, so it’s not a bad argument.

But it makes the implicit assumption that survival is best under ANY circumstances, and that no other values matter in this regard.

But her moral code was based on a set of axioms that simply don’t mesh with what science tells us about ourselves, as a species. One might argue that she formulated her philosophy before we had a good understanding of human behavior from an evolutionary standpoint, but I didn’t see any attempt by her to modify that philosophy as new knowledge came to light in the 60s and 70s.

Yes and no. They are arbitrary in the sense that they arise out of randomness, but they are not in the sense that there is selective pressure that has created the sets of morals that exist in different cultures. All in all, morality is a form of meme. The difference is, while neither has predefined goals, they do converge onto certain concepts like survival. That’s where we got a lot of the more universal morals, like murder being bad. Societies simply cannot exist and spread for an extended period of time if they’re busy fighting and killing themselves.

Beyond that, though, societies are also able to actually define explicit goals for their society and then, in furthering those goals, they will converge on morals that differentiate them from other cultures. Consider the difference in morals between a society that values valor in combat vs. those that may value intellectual pursuits.

I like to use the analogy of comparing morality to a game like chess or whatever. The rules of the game are fairly simple, but the consequences of those rules create a state-space, defined by the choices of the players, that is large enough to make it computationally impractical to calculate the optimal move from a given state. As such, we devise basic strategies to determine generally good moves: protecting and capturing valuable pieces, guarding the center of the board, forcing forks and pins, etc. All of these strategies are ultimately intended, without being guaranteed because of the computational complexity, to bring us closer to the goal state of checkmate. Thus, moral rules are equivalent to these sorts of general strategies, where the goals are those implicitly or explicitly defined by the society.

The interesting aspect, though, is that in a game of chess, it is theoretically possible to calculate all possible board states and thus convert these general strategies into actual hard rules for choices to achieve victory. Is it possible that we could extend this concept to morality itself? That is, we may find that some of our differentiating goals, like valuing strength, intellectual achievement, freedom and equality, purity, etc., aren’t all that unlike intermediate goals in chess like capturing the queen, and that those were, in fact, perceived intermediate goals toward a more universal concept of success for our individual culture, or more generally humanity. If that is the case, then it may theoretically be that, if we could compute or simulate to a sufficient degree of accuracy, we could in fact unveil a more universal goal and derive a universal morality defined directly from the nature of existence itself.
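The intractability claim in the chess analogy can be sketched with a quick back-of-the-envelope calculation. The branching factor and game length below are commonly cited ballpark estimates, used here purely as assumptions:

```python
# Rough estimate of the chess game-tree size, illustrating why optimal
# play gets replaced by general strategies (heuristics).
# Both constants are assumed ballpark figures, not exact values.

BRANCHING_FACTOR = 35  # assumed average number of legal moves per position
GAME_LENGTH = 80       # assumed typical game length in plies (half-moves)

game_tree_size = BRANCHING_FACTOR ** GAME_LENGTH
print(f"game tree on the order of 10^{len(str(game_tree_size)) - 1}")

# Even checking a billion positions per second, exhaustive search would
# take unimaginably longer than the age of the universe, which is why
# players rely on rules of thumb ("guard the center", "protect valuable
# pieces") rather than calculating the truly optimal move.
```

Whether morality admits the same kind of exhaustive “solving” is, of course, exactly the open question here.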

I’m not a fan; I just said that she made some good arguments. IMHO the counterarguments outweigh the arguments.

I don’t agree. “Don’t kill people from your own tribe” is very far from being a universal moral principle across all cultures. There have been many cultures in which killing members of one’s own family was permitted or actively encouraged. For instance, in ancient Greece, Rome, and many other ancient civilizations, it was viewed as just fine to kill one’s own children if they were sickly or weak, not the desired gender, or merely arrived at an inconvenient time.

Incest also has been considered permissible in a few places, though not many.

I know you’re not a fan of evolutionary psychology. However, there really does seem to be a built-in moral sense (thanks, Learjeff!) and that would lead to some general moral principles (IMO) which would further lead to some pretty common morals across civilizations and time. I would be surprised if it is as simple as you lay out (“go ahead and kill any family members who are weak”), but I’m not about to do the research.

I think an interesting thought experiment would be, what would happen if we evolved from something closer to gorillas (where there is a clear inequality between the sexes) or from something like bonobos (where I believe there is less difference between the sexes than in humans). In human society, which, biologically, is somewhere between the two (men are somewhat bigger and stronger than women on average, somewhat more aggressive on average, for example), there is a mix of polygamous and monogamous societies, more or less gender equality, and so on.

However, if we had evolved in a line from gorillas, would nearly all societies be polygamous?

Anyway, it’s something to ponder. In my view, this implies that morals are entirely arbitrary in some universal sense, but are not arbitrary in a human context.

If we evolved from bonobos, we’d spend all our free time having sex with as many partners as possible.

… wait … hmm …

I don’t see why it has to come to a matter of survival. If you’re in a situation you find morally offensive or oppressive, sure, you can declare that you’d rather die than continue, but isn’t it better to live and work to modify the situation to better suit you, taking up slings and arrows against an outrageously armed sea of troubled heirs, as it were?

And if slings and arrows don’t work, you could, y’know… move away or something.