What "kind" of morality do you believe in and why?

I’m putting this thread here, since I assume a debate will happen.

Here’s my take: I believe in a subjective morality because I only believe in subjectively held values, and my take on mind-dependent morality maps nicely onto my view of mind-dependent values. Though, I hasten to add, I’m sympathetic to consequentialist views (especially classical utilitarianism), because I find myself often making moral judgments by thinking of the consequences my acts will/might have on others, and I suspect others do the same. That having been said, I still maintain that even my concern for the consequences of my acts and for the effects those consequences have on other people is contingent upon mind-dependent values (in this case, mine), so I’m still a moral subjectivist, albeit one with an egalitarian consequentialist/utilitarian inclination.

And why subjective value, you ask? Because I can’t conceive of moral obligations existing for a person apart from a reason for that person to care about/act in accord with those obligations, and I think only personal value provides such a reason. (Think of it as the old Socratic problem of moral motivation.)

That’s my take. What’s yours? And why?

Anyone believe in objective morality? Deontology perhaps? Consequentialism? Divine Command Theory? Or a different stripe of moral anti-realism than my preferred flavor, like, say, a branch of constructivism along the lines of contractarianism or moral relativism? I’m curious as to where we all stand and why. Feel free to get nuanced.

I believe that a broad, basic, low-level morality is our heritage as a species of “social animals.” We evolved behaviors that promote group survival, and many of these are the foundations of our higher “good.”

Cooperation. Mercy. Empathy. Pity. Sharing of resources. Caring for the aged and the ill. Protecting the young. Play.

Some of our evolved behaviors are…not so nice. Xenophobia and racism.

But that leads to the second great source of our morals: our intelligence. We’ve figured out that racism is not good, and we are fighting against it, even though it may have an innate evolutionary role in our animal make-up. We’ve learned that polluting is bad, even though we evolved in an environment that was so big and so broad, we could dump our trash without concern.

We’ve transcended our evolution and employed our brains.

Speaking personally, I follow “negative utilitarian” ethics. By and large, my idea of “good” is to reduce the world’s suffering as much as possible. “Bad things shouldn’t happen.” (Standard utilitarianism seeks to maximize the world’s pleasure, and this is also a good idea. But for me, pleasure is hard to value when I know how much suffering there is.)

Enlightened Epicureanism is also admirable.

If you’re a noncognitivist, moral skeptic, or an error theorist, feel free to identify yourself and explain why you don’t believe (or disbelieve) in morality.

Is this an objective claim or one that’s relative to you and people similarly situated? Note: I’m only asking, not arguing. :slight_smile:

It’s interesting you make pleasure something other than the inevitable consequence of the absence of suffering. I’ve always thought of pleasure and pain as mutually exclusive, each appearing where there’s a vacuum of the other. As in, it pains me not to have the pleasure of X in my life, and it pleases me not to have the pain of Y in my life. By my moral lights, reducing one increases the other. Just my two cents.

I agree with some Epicurean thought, so I sympathize.

I believe in ethics more than morals, since morals usually seem to be linked to a non-rational justification. Ethics has to accept that some of our morals seem to be baked in by evolution, and to evaluate which of these make sense in today’s world and which do not.
In ethics the argument is always continuing, and I don’t even try to propose that I or anyone else has the final answer.

I think you’re making a false distinction there, since as far as I can see “ethics” and “morals” are more or less interchangeable - one has a Greek root, the other a Latin root, and that’s probably the most significant distinction between the words.

And I’m not sure that I grasp your point about a “non-rational justification”. All rational argument proceeds from premises which may themselves be rationally deduced from other, preceding premises. But when we are making an ethical/moral argument, don’t we always ultimately drill our way down to a fundamental premise which cannot be rationally demonstrated to be true, but which we nevertheless accept (and, frequently, invite others to accept as well)?

I’m like Harvey Dent - I base my moral choices on coin flips.

Overall, with the emphasis on social evolution, this sounds like some variety of pragmatic ethics.

Personally, I’m definitely not a deontologist or a virtue ethicist. While I can see the value of a consequentialist stance, such as some flavours of utilitarianism (preference utilitarianism, for instance), I’ve actually lately come to strongly favour the ethics of care. And essentially, pacifism underpins all my other moral stances.

Why do I have to believe in just one kind of morality?

I’m sure the OP’s cool with you listing the various moralities that you feel apply.

All of them, at one time or another.

I believe in Kantian ethics: moral rightness comes from duty, and right or wrong is judged primarily through intent rather than consequence.

Not sure how I would fit into this, but I believe in ‘universal love’. Love is the guide in one’s life to the degree that one understands how to love others, but we are also all being instructed by ‘parental beings’, usually called angels, who share this belief in universal love and who are themselves the children of Archangels, and so on.

As such it is the intent of the heart that matters, not the observed action or assumed intent, or even the intent of one’s mind. Love as best as you know how, and the higher beings will make up our shortfalls as well as instruct us.

So on an observed level, morality would be subjective, and would not follow rules that can be codified. But on the level of the heart, morality would be loving as best as you can, and so universal as well.

That’s interesting. I’ve heard of ethics of care, though I’ve never studied it. Perhaps I’m mistaken, but I thought ethics of care was the development of an attitudinal disposition very similar to virtue ethics, but you seem to see great difference between the two. Could you explain how they’re different approaches to ethics? Also, if you don’t mind my nosing around a bit, it sounds as if you have two separate sets of ethical stances: those that fall under the category of the ethics of care and those that fall under the category of pacifism. Is there an underlying principle of unity that makes both ethics of care and pacifism appropriate, and, if so, could that be your actual approach to ethics?

What’s the experiential or epistemic difference between the intent of one’s heart and the intent of one’s mind?

Sounds Kantian alright. Though, I’m curious, since you explain the origin of moral rightness (in the above assumption, that origin is duty), do you also believe that there’s a need to explain the origin of duty? If so, do you have any meta-ethical candidates for that origin?

To me the heart and mind are two separate ‘entities’ that allow us a degree of free will. The heart (equal in Hebrew to the word soul) is the eternal storage of eternal truth, and the mind is the temporal storage of knowledge.

The difference, as I understand it, is what you place first: with your mind you can rationalize why or why not to do something, while with your heart there is an inner knowing of why or why not to do something.

So using your mind you may deduce that someone is genuinely in need, while in your heart you may feel that this person is not one you should help; this applies both ways, and in every combination.

Following your heart beyond your intellect (which may mean following what you know beyond what you have been taught, even into trouble) is, I do believe, the key.


Nope. Unlike Kant, I reject the idea of God being necessary for morality to exist. Other people being humans like myself, with intellect, emotions, hopes, and the capacity to suffer or prosper just as I do, is sufficient for me. The duty I owe others is nothing more or less than the duty I’d have them owe to me.

I arrive at my belief in human worth and equality through observation alone, no soul or God or utopian ideology required.

This is something I’ve given a great deal of thought to in recent months. I’m not really sure how it lines up with other perspectives, but here’s how I’ve broken it down.

We can theoretically model our decisions as a tree, with nodes depicting a given state in which one is faced with a moral decision, and edges representing the choices made, each leading inevitably to the next moral decision. It’s not a perfect model, but it’s good enough that we can apply the concept of min-maxing from state-based games. Likening it to a game like chess, where the best move is the one that best min-maxes one’s chances of winning, the most moral choice is the one that does the same for a particular moral goal.

Unlike a state-based game, however, there is no single objective goal state, like checkmate in chess. Fortunately, that’s not really of much concern, because even in a game like chess with a definitive goal state, it’s usually not possible, at least in interesting positions during the mid-game, to calculate a path from the current state to all the possible leaves and then determine the best path. Instead, we’re stuck looking a certain distance ahead with heuristics, estimating which states look good and calculating those paths instead. These estimations and heuristics correspond to the moral guidelines we put in place for ourselves, guidelines ultimately derived from that end goal.

As such, our moral choices can be likened to the sorts of skill levels we see in chess masters. A novice player understands checkmate, but probably just assigns various values to pieces and doesn’t look very far ahead. A more advanced player will have more complex ways of evaluating states and look farther ahead. And a theoretical machine with sufficient processing time and memory can calculate an optimal path, an objective morality for a particular goal state.
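For what it’s worth, here’s a minimal sketch (in Python) of the depth-limited lookahead I’m describing. It’s purely illustrative: `State`, `choices()`, `apply()`, `heuristic`, and `best_choice` are made-up names for the sake of the analogy, and the “moral goal” is just whatever scoring function you plug in as the heuristic.

```python
# Illustrative sketch only: State/choices/apply/heuristic are hypothetical
# names for the decision-tree analogy above, not a real library or API.
from typing import Any, Callable, Iterable, Protocol


class State(Protocol):
    """A decision state: what options it offers, and what each leads to."""
    def choices(self) -> Iterable[Any]: ...
    def apply(self, choice: Any) -> "State": ...


def best_choice(state: State,
                heuristic: Callable[[State], float],
                depth: int = 3) -> Any:
    """Pick the choice whose depth-limited lookahead value is highest.

    `heuristic` stands in for whatever moral "goal" estimate one adopts;
    `depth` is how far ahead we can afford to look.
    """

    def value(s: State, d: int) -> float:
        options = list(s.choices())
        if d == 0 or not options:
            return heuristic(s)  # estimate states at the horizon or leaves
        # Unlike adversarial minimax, every node here is "ours" to choose,
        # so we simply take the maximum at each level of the tree.
        return max(value(s.apply(c), d - 1) for c in options)

    options = list(state.choices())
    if not options:
        return None  # a leaf: no moral choice left to make
    return max(options, key=lambda c: value(state.apply(c), depth - 1))
```

In this picture the novice is a crude heuristic with a shallow depth, the master is a richer heuristic with a deeper search, and the theoretical machine is the same routine with no depth limit at all.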

So, really, I don’t think it’s all that interesting to discuss the rules themselves: they necessarily derive from a chosen goal, from our ability to understand the consequences of our choices toward approximating that goal in future states, and thus from our skill at coming up with effective estimations (rules) for achieving it. It’s tempting to posit a moral-sounding goal state like maximizing happiness or minimizing suffering, but those goals are themselves derived from our innate morality, which evolved as a way of preserving our species and our society.

And the conclusion I’ve reached is that morality, like evolution, has no “goal”, per se, but we end up with seeming rules nonetheless, and the best candidate goals I can come up with are along the lines of either survival or, more interestingly, freedom. Fewer restrictions on the states, and more options when faced with a moral choice, mean that regardless of what the “goal” is, or even whether there is one, you will typically end up with a better approximation of it.

So, our morality evolves not unlike we do. And this explains why morals and ethics continually change, why some things may have seemed fine at a given time in the past but are morally reprehensible today, and also how we can break new ground in morality as technology and culture introduce us to new situations we’ve never seen before. In the long run, it always seems to move toward greater freedoms, both in our choices and in our lives.