If Moral Presumptions Are the Problem...

…then what is the solution?

In case you don’t know, I am referring to moral nihilism. Here is the Wikipedia article on the subject.

As you can see, some philosophers consider ethical statements neither true nor false (I have a slight problem with that, however…). And some philosophers consider ethical statements to be categorically false. In other words, they apparently have a problem with ethics itself.

When people have a problem with a theory, they usually offer up a solution. People who are against global warming denial have a solution, namely the theory of climate change. My question to moral nihilists (and I suspect we may have one or two on the boards–please chime in:)): if morality is the problem, what is the solution? I mean, if humans shouldn’t live morally, how should they live? In anarchy, hunting and killing each other?

I may be jumping to the wrong conclusion. But as you can see even from the Wikipedia article, some philosophers (like Mackie) find every ethical statement false. In other words, they are anti-ethics, it would seem.

Also, I may have asked a similar question on these boards. But either I never asked this one specifically, or my question wasn’t answered to my satisfaction.

Keep in mind too, as I have said many times before, I do have a very open mind. I just don’t understand the ethical debate.

:):):slight_smile:

You don’t see the contradiction here? Moral nihilism = there are no "should"s.

I think there can still be “shoulds”. Take the statement, “You shouldn’t stick your finger in that light socket.” It isn’t really an ethical statement, you are simply going to get zapped if you do that, and I’m assuming you don’t like sudden jolts of pain. There’s a reason you shouldn’t.

A masochist might enjoy the pain of being zapped. You’re just imposing your reasons and values on others. Which means ‘People should do what I think is right’. I detest the horrible ethanol burn I get from drinking strong spirits. I assume that no one else likes ethanol burn. Therefore, there’s a reason people shouldn’t drink spirits.

Morals (as in ‘what is the right thing to do in our dealings with others’) have to have a subjective justification. The justification depends on people’s preferences (and at last count there were 7+ billion people on this planet). You work out what the most common bases are - a fairly simple starting one is ‘Anyone can do what they want, provided it doesn’t negatively impact someone else (and the person who is impacted is the one who decides whether it is negative or not)’.

So - let’s talk about this. Let’s compromise. Once we have some bases set and agreed by everyone potentially impacted, we can then have ‘shoulds’ - because they relate to the agreed bases for having morals.

Humans are social animals - across most cultures we have come to some commonly agreed justifications for living in a society, justifications that make our lives better. Without those, our societies would be greatly different. You can be a member of many different societies, and the moral bases can vary between them. But they need to be agreed upon and accepted. And, yes, they change over time.

Without an agreement on WHY we want to have ‘Right or Wrong’ - which is usually some form of ‘Let’s not hurt or upset the other people in our society too much’ - we can’t have any objective ‘shoulds’.

It’s very much an ethical statement. Whether you value your life or consider it disposable is central to ethics.

Okay, to a moral nihilist, there are conditional "should"s, like “If you want to get to the post office, you should make a left turn at the next traffic light,” or even “If you don’t want to get electrocuted, you shouldn’t stick your finger in the light socket.”

These shoulds tell you how to get what you want, but they don’t tell you what you should want, or why.

Well, the Bible is nihilistic too. Here

But in terms of what you should want, or why, it is a reason to worry about what God wants.

The natural philosophy nihilist (maybe) won’t include God in the calculus. Still, if you think everything is “chasing after wind”, one could argue that being “good” is as close as one can get to having any meaning in life. It is the escape from nihilism. See? Nihilism is the reason.

Nihilist: But you don’t get it. There is no “good”, objectively, so it is meaningless when you say to be “good” as opposed to “bad”. Those words don’t refer to anything real.

Try2B: Not a priori, only after the fact. Do something justifiably good, and when you get to the end of it, there won’t be a nihilistic universe, at least not for you. For instance, Tom Brady has a considerable disposable income. Should he buy a pound of coke and make a game of seeing how many hookers he can score? Isn’t everyone “better” off if he doesn’t do that?

Nihilist: No. There is no “good” or “bad”.

Try2B: Maybe you should read the Tao Te Ching then. Everything is meaningless and chasing after the wind? Can’t decide what you should do? Ok, why don’t you practice not-doing, then?

Nihilist: Stop telling me what I “should” do.

Try2B: Fine.

The good vs bad morality just adds an unnecessary and confusing layer to everything.

Most humans want to get on with other people, be safe, be happy, enjoy life. They are perfectly willing to make agreements about not killing each other, working together etc. to get what they want.

Looks like a terrible example. Many people including me would dispute the idea that everyone is better off if he doesn’t do that. I see nothing immoral in snorting coke and scoring hookers, and since all people involved (Tom Brady, cocaine seller and prostitutes) are entering the transactions willingly, presumably they all think they’re better off with him doing so. Which leaves us with people not involved, and I don’t see how they’d be better off if he didn’t do these things (apart from the satisfaction of knowing that someone doesn’t do something they disapprove of, but then again I disapprove of people cultivating coriander and insisting on putting this vile thing in perfectly good food).

Well, then a better example might be “should Tom Brady use his wealth and the power it brings to reduce someone else to sexual slavery for his own gratification”? Obviously the people involved there are not entering into the transactions willingly and there can be no presumption that they think they are better off.

Presumably the moral nihilist would have to say that there is nothing wrong with using power you may have to enslave others, seize the things you want, etc etc. And presumably most of us would be uncomfortable about agreeing with that proposition.

It’s not a matter of being comfortable with a proposition or not. The problem is: can you demonstrate objectively that something is “morally wrong” without relying in turn on other assumptions about morality? And the fact is that you can’t. Even the seemingly extreme example you provided would have been considered perfectly fine morally in many societies (“Enslaving the women of the other tribe across the river and plundering their goods? Of course we should!”). To oppose it on moral grounds, you have to take for granted that the freedom/life/property of the people of the other tribe has as much value as that of the members of your tribe.

Or, from the opposite point of view, “anti-speciesists” argue that the life of an animal has exactly as much value as that of a human, and that by breeding them for meat we’re committing a horrendous and barbarous crime. If, one century down the road, they win out, your worry about enslavement might very well be seen as hypocritical and absurd: how could you be so bothered with enslavement when slaves, at least, aren’t slaughtered and eaten, making enslavement a very minor issue by comparison with the horrors of meat eating? How do you demonstrate that they’re wrong and shouldn’t give the same moral value to a cow and a human? And even this assumption isn’t an absolute. We still give priority to ourselves, our family, our nation, and few people object to that on moral grounds.

Your basis of operation, “humans are all equally important but animals aren’t”, can’t be demonstrated. It’s cultural and rather arbitrary; assumptions have been different in other places and times, and might become different again in the future. And of course, even with this basis of operation, we follow far more arbitrary moral rules. For instance, there was a debate in the Pit some months ago about the following hypothetical: someone secretly spies on a woman taking a shower from a nearby house. For magical reasons, neither the woman nor anybody else will ever know about it. Is that morally wrong? Most people would say yes, even though absolutely no harm is done.

Moral rules are arbitrary, indemonstrable and subject to change. They don’t have an objective existence outside of a specific human culture. They appear obvious to you because you were born in this culture and have been conditioned to see things this way all your life. Even when you try to analyze them, you do so on the basis of other moral rules, equally indemonstrable. As opposed to, for instance, the law of gravitation, which isn’t up for debate.

There might be some biological basis for some “moral” behaviors. For instance, it seems that chimpanzees have an issue with freeloaders. So, we might have some instincts that direct us towards certain behaviors. Or at least to dislike some “immoral” behaviors in others. But even then, their scope is going to be very limited by comparison with all we consider moral (chimpanzees won’t have an issue with killing an ape from another tribe), and even then, the existence of such an instinctive behavior doesn’t mean that there’s something “wrong” about not acting in accordance with it. For instance, most people have empathy that tends to prevent them from doing a number of “bad” things (but then again, it often doesn’t). But on what objective basis should we expect a psychopath, born without empathy, to follow the same behavior?

So, indeed, in my opinion, morals have no objective existence. Unless you assume they’re transcendent, a point of view many believers would adopt, but one that is hardly provable.

Indeed. All logical demonstrations have to proceed from a premise or premises. And if you seek to have the premises demonstrated, that demonstration in turn has to proceed from its own premises. Sooner or later you get down to axiomatic premises; premises which cannot be demonstrated but whose truth/validity is assumed.

But this isn’t a problem unique to ethics; it’s true of every epistemology which seeks to be rational. Even the scientific method ultimately depends on unproven and unprovable axioms.

What do we do with people who refuse to proceed from the axioms we wish them to proceed from? We usually just deride them, but that may not be an adequate response when their axioms lead to conclusions which allow them to perpetrate slavery, genocide, etc, possibly against us, or when their axioms allow them to deny the reality of climate change, with consequences which will disadvantage us in material ways. In such situations we generally allow ourselves to defend our interests by force, but of course this too we justify by an appeal to unproven and unprovable axioms.

I think it’s pretty different. The universal validity of the laws of nature is quite demonstrable: the sun will reliably rise in the east. Morality, rather, asks “is it a good thing that the sun rises in the east rather than in the west?” Similarly, we could in theory, with enough knowledge, deduce all the consequences of an action. If I stab you in the heart with a bayonet, for instance, we can deduce that you’re going to die. But whether or not that was a moral action on my part is open to debate, and depending on circumstances it can be seen as a moral or an immoral action.

I would say that historically we have tended to do much worse than that, in general.

And even here, you’re arguing mostly from the point of view of self-interest: genocide against us (same with climate change). But maybe genocide against others is a good idea? Who knows when they might come to kill us if we let them live?

But anyway, yes, it’s based on unproven and unprovable axioms. That’s the whole point. And few people will admit that these axioms are arbitrary, or that the reasoning leading from the basic axioms to the moral conclusions could be entirely wrong. They will admit that, in theory, their moral beliefs might be no better than others’, but in the same way that they’ll admit there might, in theory, be a teapot orbiting Jupiter. They’re fully convinced that, in fact, their moral views are objectively superior, while there’s no such thing as objective morals. And even granting an axiom, at pretty much every turn in the chain of deduction new arbitrary moral choices are made to reach a conclusion, because in most cases we don’t have enough knowledge to be sure that a given action will lead to a given outcome. (And I’m talking about basic moral beliefs shared by pretty much everybody in a given society, like “slavery is wrong”, rather than “smoking pot is wrong”.)

If I understand correctly (and I’m not at all sure that I do, but I’m basing this on the Wikipedia article the OP linked to) moral nihilism goes beyond this. To a moral nihilist, there are no moral rules or judgments, not just no objective ones.

It relies on the axioms that empirical evidence is both ‘real’ and ‘reliable.’ Empiricism, though easily acceptable, is not without criticism, and ultimately rests more on axiom than on some sort of unshakeable, knowable bedrock.

Yes, but this is a further step that applies to actions too. For instance, it only appears that UDS dies when I plunge my bayonet into his heart. Maybe he doesn’t. Maybe he doesn’t even exist, and is a figment of my imagination. If we can’t rely on empirical evidence, we can’t assume that the sun actually rises in the east; but we also can’t assume, not just that there isn’t an objective morality, but simply that our actions will have the consequences we expect, which makes it pointless to try to determine whether they’re moral or not.

So we need empiricism to be valid to know what the consequences of our actions will be, and hence to even begin to discuss morals. And with this assumption, the sun reliably rises in the east, and UDS dies when I bayonet him, but we still have no objective way to answer questions like “Is it a good thing or not that the sun rises in the east?” or “Is it a good thing or not to kill UDS?”

Well, I’m not knowledgeable about philosophy, so I have difficulty expressing it clearly, but it seems to me that if moral rules are totally arbitrary, it means that “morality” doesn’t really exist.

To begin with, what is “morality”? What does it mean when we say that something is “right” or “wrong”? Typically it means something like: “it goes against some other, more fundamental, moral rule, whose validity I take as granted and expect you to take as granted as well”. And if we then consider this more fundamental rule, and the rule it is itself based upon, and so on, we finally end up with something like: I arbitrarily decided to call this “wrong” and that “right”, and things I call “wrong” I don’t want you to do. “Wrong” and “right” don’t have any meaning without an existing moral system, even though most people tend to view their meaning as self-evident, as if actions had some inherent “rightness” or “wrongness” quality to them. Or, on the contrary, as if “wrong” and “right” were transcendent and existed outside of a human construct.

If “morals” just consists in arbitrarily, randomly, putting a “wrong” or “right” label on actions, and proceeding from there, it seems to me that there isn’t really such a thing as “morals”. I’m not sure I’m being clear.

Related to this debate: just today, I read here someone (can’t remember whom, or even in which thread) who expressed the belief that we’re making progress in our understanding of morality; that one century ago, for instance, we still didn’t grasp morality fully, but we’re becoming better at it. As if morals were something objective that we can discover in the same way we’re discovering physical laws. But it couldn’t be further from the truth. It only appears so because we’re conditioned to find our current moral values correct, to such an extent that what is right and what is wrong seems obvious and we hardly ever reconsider it. So, obviously, the closer we are to our current time and our current values, the more “correct” it seems, and it looks like “progress” towards some ideal moral system.

But tomorrow, this “progress” could lead to a set of values that pretty much everybody would reject today, where, say, pedophilia is good (“how ignorant they were, believing that it was harmful for children when it’s so obviously necessary for healthy development! And you wouldn’t believe how violently they persecuted people for their sexual preferences”), eating meat is evil (“how they could gorge on the rotting flesh of innocent creatures they deliberately killed for this purpose is beyond me. How could they be so evil?”), killing babies at birth is good (“why would anybody be morally obligated to care for someone else for 18 years just because they had sex? Doesn’t make any sense!”), individual freedom is evil (“They were incredibly egotistical and self-centered. They, simply put, thought that their petty individual preferences were more important than the common good and the well-being of millions”), etc… And what would these people think? Obviously that they’ve been making progress in their understanding of morality, and that their views are vastly superior to ours. And on top of it, I picked a set of values that, despite running against our own, is still based on more or less the same assumptions we have, just changing the conclusions reached.

Your last point assumes that the morality of an action depends on its consequences, which is a contested claim in ethical discourse and, more to the point, is itself an unproven and unprovable assumption.

It’s obviously the case that the axioms on which the scientific method depends are different from those on which ethical systems depend, but this doesn’t mean that accepting them is any more reasonable. Ultimately, we accept the axioms that underpin the scientific method because they work well to make sense of our perceptions of the universe, but this doesn’t necessarily mean that they are objectively true. It could equally be that our perceptions do not correspond to any objective reality but are, say, generated by algorithms, and these axioms - or something very like them - are embedded in the algorithms.

And it could be that we accept fundamental moral axioms for similar reasons - they work well to make sense of our experience of ourselves as voluntary agents, and as a framework for answering questions about meaning, value, etc, in life, which are needs we experience. Humans are social animals; we need functional social relationships in order to flourish; ethical systems help us to develop and foster these.

We study the physical universe (and rely on the scientific method to do so) because it’s dangerous to us not to understand it and potentially useful to understand it; we develop ethical systems because, in general, we get by better with ethical systems than without them. Our scientific understanding and our ethical systems may be more or less flawed, of course, but even to say that assumes an ideal unflawed scientific understanding/ethical system. An unflawed scientific understanding would correspond exactly with the reality of the external universe that we presume to exist; an unflawed ethical system would be the one that best reflects our nature as moral agents with a need, and a capacity, for interpersonal relationships. And we have at least as much reason to accept the reality of that nature as we do to accept the reality of the external universe.

I don’t think we’re talking about the same thing, with the word “consequences”. I think you mean for instance the difference between intent and actual consequences. But intent relies too on an expectation that a certain action will reliably result in a certain outcome. Lacking that assumption, I can’t see how you could form any moral system. If there’s no way to predict what the situation could be after this action, you can’t pass a moral judgement on it, regardless of your moral system. Any moral system has to rely on the assumption that empiricism provides valid results and that our perception of the world is accurate.

It might be. But if our perceptions do not correspond to any objective reality, any moral axiom is baseless too.

The difference is that, if we accept that our perception of reality is correct, scientific knowledge produces reliable and verifiable conclusions, while morals don’t. That would be because they’re situated at an entirely different level. As I already wrote, the equivalent of a moral axiom isn’t a physical law, whose correctness can be objectively verified, for instance by noting that the sun rises in the east according to prediction. The equivalent is a value judgement about whether or not it’s a good thing that it rises in the east, which can’t be verified. And the other way around: the equivalent of a scientific law is the knowledge that if I plunge my bayonet into your heart, you’re likely to die, which we can verify empirically, not a moral axiom from which we could deduce whether my action was right or wrong, which can’t be empirically determined.

What you say is true. We need frameworks to function, and not only in matters of morals. I need to know whether extending your hand in my direction when we meet is a gesture of aggression or friendship, for instance. I do not deny the usefulness of a moral system. But many systems will provide such a framework, and there’s no way to determine which one is “better”, because any way of measuring it relies on us arbitrarily deciding what outcome is more desirable. For instance, you mention that a moral system allows us to flourish. But there are people, currently, who seriously support the idea that the moral thing to do at this point would be to let the human species go extinct. Within this framework, obviously, “flourishing” isn’t a desirable outcome while “destroying all human societies” is.

And of course, we need much more than a single axiom. “Let’s flourish” isn’t going to be sufficient as the basis for a moral system, for instance. And the more principles we add, the more arbitrary the moral system becomes, and the more we have to weigh them (subjectively) against each other. Two societies both claiming to follow the same basic moral axioms will easily end up being so different from each other that each will judge the other morally abhorrent.

I’m not arguing that we shouldn’t have a moral system. I’m arguing that moral systems are human constructs devoid of any intrinsic reality, let alone objective validity.

Once again, I insist on this difference: the scientific system only requires that we perceive reality correctly. The moral system requires that and another, huge step: passing a judgement of value, which is entirely different. I argue that you can’t compare the laws of physics, which don’t require any subjective assessment, with moral laws, which do. Once again, the equivalent of a moral law isn’t a physical law; it’s a judgement of value about the physical universe.

Assuming a perfect knowledge of human reality, you can deduce from it, using the scientific method and without any need for subjective judgement, how to cure cancer. You can’t, however, determine whether it’s a “good” thing to do so, according to moral laws, because that assessment is necessarily subjective. There is no way to measure “goodness” objectively. “Best reflects our nature as moral agents with a need for interpersonal relationships” can’t be objectively defined either.

And even the claim that a moral system should reflect our nature isn’t obvious to me. The meaning of that sentence isn’t clear at all, in fact; it could be interpreted in many different ways. What “best” reflects this nature might very well be passing on our genes at any cost. Or “vanquish our enemies, chase them before us, rob them of their wealth, see those dear to them bathed in tears, clasp to our bosom their wives and daughters” - that is, kill, oppress, plunder and rape - if you want more “social interactions” than passing on genes. To determine what’s “best” you need a metric. And you won’t be able to demonstrate that your metric of choice is objectively valid.

This entire argument reminds me of Socrates’ defense of justice in Plato’s Republic.

Socrates more or less breaks it all down to it always being in one’s own interest to act justly, despite fear of punishment or rewards from others.

His argument is somewhat incomplete, though, as he relies on just actions being mentally healthy and leading to one’s own happiness but fails to make the full connection.

I’d propose that Singpurwalla’s approach here is probably the most empirically correct:
https://www.iep.utm.edu/republic/#H4

We can show that our own well-being relies on socialization and, to some degree, unification.

Morals are simply guidelines to that end. While they may be imperfect and somewhat fluid they certainly have value. The elements that provide fluidity are only in the level of value placed upon each by the culture you are in contact with.

Therefore you certainly could ascribe an algorithm to an individual, based on that person’s social contacts and the degree of interaction with each group or individual. That algorithm could determine what the best set of morals is for them and the weight of consideration that should be given to each. In essence, you could objectively determine the morals that would be most beneficial to you in the here and now.

You could even determine the best morals overall, irrespective of culture, at least with respect to other humans, by studying how majorities of different groups prefer to be treated, weighed against how you yourself need to act in order to achieve your “balanced soul”.
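To make the idea concrete, here is a minimal sketch of what such a weighting algorithm might look like. Everything in it is a hypothetical illustration - the group names, the norms, the preference scores, and the interaction weights are all made up for the example, not drawn from any real data or study.

```python
# Hypothetical scores (0-1) for how much each norm is valued
# by each group the person interacts with.
group_values = {
    "family":    {"honesty": 0.9, "loyalty": 0.95, "charity": 0.6},
    "workplace": {"honesty": 0.8, "loyalty": 0.5,  "charity": 0.3},
    "online":    {"honesty": 0.4, "loyalty": 0.2,  "charity": 0.5},
}

# Hypothetical degree of interaction with each group (weights sum to 1).
interaction = {"family": 0.5, "workplace": 0.35, "online": 0.15}

def weighted_morals(values, weights):
    """Return each norm's overall importance, weighted by interaction."""
    norms = {}
    for group, prefs in values.items():
        w = weights[group]
        for norm, score in prefs.items():
            norms[norm] = norms.get(norm, 0.0) + w * score
    return norms

# Rank norms from most to least important for this individual.
ranked = sorted(weighted_morals(group_values, interaction).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)
```

With these made-up numbers, honesty comes out on top simply because the most heavily weighted groups value it most; change the weights or the scores and the ranking changes with them, which is exactly the "fluidity" described above.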

I suspect this is basically what most people do subconsciously anyhow, but with limited perception. How others truly wish to be treated is not always clear and societal constructs can create situations that allow more selfish behavior than is really morally best. Rewarding or excusing selfish behaviors leads to imbalance in how one needs to act to satisfy themselves and be most happy.

So yes, we can, in theory, rate our morals, at least indirectly - assuming we would each like to be as happy as possible, even if some are happy being sad.