Is morality ambiguous?

Case study: What’s wrong with defining a reasonable construct like self-actualization and then acting, starting with the least self-actualized, to increase it for everyone? We will continue doing this until we reach the Pareto frontier of self-actualization for society.

Difficulty: Sometimes you might have to hurt someone for a little while in order to make everyone, including the person you hurt, better off in the long run, for the long run.

Submit and discuss the tradeoffs involved in this and other proposals aimed at making everyone better off, in the context of right/wrong and good/evil.

I’d call it arbitrary rather than ambiguous, myself, and its arbitrariness isn’t complete. There are certain unambiguous differences between, for example, a system of morality that says it’s okay to kill children who fall ill and one that frowns on such: namely, the population that treats childhood disease instead of making it a death sentence will tend to have a larger, more stable population total, for what that’s worth.

no

What does “self-actualization” even mean? The various dictionaries say it means something like “the achievement of one’s potential.” How can we know what our potential is? That’s like saying that we can predict all our possible futures. How can we do that, and how can we know which will be the best one for us? What does this have to do with morality?

For starters, morality is subjective to the society or group which sets out such societal guidelines. So, no, morality is not necessarily ambiguous. It is simply dependent upon what people want it to be. E.g., Christians have Christian morals, Hindus have different ones, the San bush people of the Kalahari have different ones, the Pintupi of Australia’s Northern Territory have different ones, the Changos of Peru have different morals, etc.
As for forcing others to become “self-actualized”, it seems to me to be like trying to push the river, as I think that our species is constantly evolving for the better anyways. It is simply a slow process for beings such as us.

If someone who doesn’t have my consent tries to hurt me, I often respond by trying to hit them in the face. Because, well, that’s why we invented ‘hitting in the face’.

Likewise.

In the context of right/wrong and good/evil: if you don’t have my consent, I usually don’t advise you trying to hurt me. (Oh, and in other contexts: if you don’t have my consent, I usually don’t advise you trying to hurt me.)

It’s my impression that anyone’s potential is reached by climbing to a high place. It is actualized by jumping. The Pareto Effect is reached when 80% of people survive the landing 20% of the time.

The OP’s proposal sounds like a variant of Utilitarianism in which the “greatest good” is defined as “self-actualization,” whatever the hell that is.

Don’t forget that “self-actualization” is not always a good thing. If someone has the potential to be a dictator, he’s not self-actualized until he actually becomes one. Should society help him realize his potential?

Well, as long as we’re discussing potential actualization and kinetic actualization, I feel it’s worth mentioning the importance of the omega-value of rotational actualization.

I would call this example not even arbitrary as much as ill-defined. Self-actualization is something that makes sense on a personal level, but I’m not sure how it can reasonably be applied to a society. That is, I can achieve my own self-actualization, whatever that might mean for me, but that quite likely means that I would need certain resources that might be helpful to others. That is, I think it’s likely that if enough people achieve self-actualization, it would be impossible for anyone else to. Technically, per the OP, that would be a Pareto Frontier, but that doesn’t necessarily mean that it is optimized.

But how do we optimize the allocation of resources without assigning values to different people’s self-actualization? For the sake of simplification, let’s say we have just two people and there are enough resources for only one to be self-actualized, or to adjust so that both fall short of it by some amount. We have a continuum of maxima here, but without assigning a value to each individual’s self-actualization, we have no way of determining what the true global maximum is. Worse, any value we assign to one self-actualization compared to the other is arbitrary and not objective. And this is just a two-variable optimization problem; it’s nowhere near the complexity of reality.
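To make that concrete, here is a minimal Python sketch of the two-person case. The utility curves, the budget, and the weights are all made up for illustration; the only point is that every split of the budget sits on the Pareto frontier, so which split counts as “best” depends entirely on the arbitrary weight you choose.

```python
# Minimal sketch of the two-person allocation problem described above.
# The utility functions and budget are hypothetical; the point is only that
# every split of the budget is Pareto-optimal, and ranking the splits
# requires an arbitrary choice of weights.

def actualization_a(resources):
    # Hypothetical: person A needs 1.0 units of resources to be fully self-actualized.
    return min(resources / 1.0, 1.0)

def actualization_b(resources):
    # Hypothetical: person B needs the same, so only one of them can reach 1.0.
    return min(resources / 1.0, 1.0)

BUDGET = 1.0  # enough for exactly one person's full self-actualization

def global_score(split, weight_a):
    """Weighted sum of both actualizations; weight_a is the arbitrary part."""
    a = actualization_a(split)
    b = actualization_b(BUDGET - split)
    return weight_a * a + (1 - weight_a) * b

# Raising one person's share necessarily lowers the other's, so every split
# in [0, BUDGET] is Pareto-optimal.  Which split "wins" depends entirely on
# the weight we pick:
for weight_a in (0.3, 0.5, 0.7):
    best = max((global_score(s / 10, weight_a), s / 10) for s in range(11))
    print(f"weight_a={weight_a}: best split gives A {best[1]:.1f} of the budget")
```

With unequal weights the “best” allocation flips from one person to the other; with equal weights every split scores essentially the same, which is exactly the continuum-of-maxima problem.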

In reality, we might be able to make some shorthand valuations, but even then, it’s going to be arbitrary. Do we value the self-actualization of scientists more than that of artists? Of leaders more than teachers or labor? There’s no objective means, nor even a subjective one that enough of society could agree on, for rationally assigning resources to achieve this.

Rather, what I would recommend is a more objective criterion: favoring equality of opportunity, which, broken down to its simplest form, ends up being a global maximization of options, or essentially a maximization of free will. More options, and thus making choices that lead to the most options in subsequent, or arbitrarily distant, potential future states. This is something that can be truly globally maximized in an objective sense, without subjective valuation.

The only subjectivity would come in if two states offer identical numbers of options at the chosen look-ahead. But a simple heuristic of favoring fewer options for more people over more options for fewer people not only fits our intuitive moral understanding and likely subjective preferences (setting aside self-interest, if a particular state favors just oneself), it can also be shown to be the emergent general principle, so it makes sense to apply it heuristically at that arbitrary future state.

That is, the moral argument for making everyone better off is, within the OP’s framework, a subjective decision, but it is an emergent principle from an objective moralist perspective, and it doesn’t require one to resort to appeals to right/wrong or good/evil - just a simple maximization of moral branching factors.
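For what it’s worth, the “maximize options” rule can be stated mechanically. Below is a minimal sketch over a toy, entirely hypothetical state graph: count the distinct states reachable within some look-ahead and pick the successor that keeps the most of them open. It only illustrates the branching-factor idea, not a worked-out moral calculus, and it ignores the fewer-options-for-more-people tiebreaker.

```python
# Toy illustration of "maximize options" as a decision rule.  The state graph
# below is entirely made up; each state maps to the states reachable from it.

TOY_GRAPH = {
    "start": ["narrow", "open"],
    "narrow": ["dead_end"],
    "open": ["a", "b", "c"],
    "dead_end": [],
    "a": [],
    "b": [],
    "c": [],
}

def reachable(state, depth, graph=TOY_GRAPH):
    """Count the distinct states reachable from `state` within `depth` steps."""
    seen, frontier = set(), {state}
    for _ in range(depth):
        frontier = {nxt for s in frontier for nxt in graph.get(s, [])} - seen
        seen |= frontier
    return len(seen)

def most_option_preserving(state, depth=2, graph=TOY_GRAPH):
    """Choose the successor that keeps the most future states open."""
    return max(graph[state], key=lambda s: reachable(s, depth, graph))

print(most_option_preserving("start"))  # "open" (3 reachable states vs. 1)
```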

I have to say your question was malformed, in the sense that it was itself very ambiguous and outright confusing. Despite this, I think (if I’m not mistaken, and please correct me if so) that your question can be rephrased in the following way, as two questions:

 **"Is morality subjective? And what are the pros and cons of applying this moral system?"**

You ended with saying “in the context of right/wrong and good/evil”. What other context is there to morality?

You have also inadvertently given a subjective definition to morality, i.e. that it consists only of making everyone better off. This contradicts your question a little.

Words like ‘right’, ‘wrong’, ‘good’, and ‘evil’ are 1) highly subjective, and 2) religious in nature. The only purely natural words I can see available here are ‘advantageous’ or ‘disadvantageous’ to ultimate survival, especially and foremost of humans - a framing I don’t agree with, as I’ll explain.

This is how I look at it.
I am a Christian. I can only see one objective morality (moral system, foundation) - an external one. In my case, this is a moral law giver, or definer, who 1) has the right over us in order to be able to bind us to it and 2) has the oversight needed to make one that isn’t subjective.

Because if humans contain morality and define it within themselves, then it is bound to be subjective by definition. Combining ‘morality’ and ‘subjectivity’, to me, I don’t know, seems like a non-word. ‘Subjective’ and ‘morality’ together don’t make sense. How can anyone say something is ‘right’ or ‘wrong’ if the next person says it’s fine, or actually the opposite? That gives zilch meaning to ‘morality’.

Is morality simply defined by the majority - whatever they believe is right? That would be a cruel world, if we did live in a world with those so-called morals.

In summary, morals ought to be objective if they are to be actually just, and if they are to have any meaning that doesn’t get redefined as the wind blows. It seems, though, that people believe morals ought to be subjective, and therefore define their own, as long as no one else is getting hurt.

Surely no one believes morality to be simply ‘live and let live’, do they? Or ‘as long as I don’t hurt anyone, that makes it OK’? I can list countless cases where no one but the perpetrator gets hurt - and most of the time not even them - and it would still (and should!) be considered wrong.

I can clarify any point made and am happy to answer any questions. :wink:

(P.S. Notorious for typos, please forgive them :slight_smile: )

What is right or wrong is derived from axioms. Not all of which are universal. Furthermore, their interpretation and derivatives are not consistent even if shared since humans aren’t perfectly rational and many humans are incapable of difficult thought.

Ambiguous? Somewhat. Definitely not universally objective.

Sounds like homework.

Sounds like petitio principii.

Thanks for all your responses. I plan to contribute a more thoughtful reply after I’ve digested them further.

For the time being, though, I wanted to contribute what I think qualifies as a reasonable axiomatic basis for determining what is right/wrong and good/evil. Namely, that pain hurts, and so needless suffering is wrong/evil and we should act to minimize it. On the other end of this spectrum, minimizing needless suffering corresponds to maximizing self-actualization.

Another quick point in response to resource allocation: I believe that we have an artificial scarcity of resources. Practically speaking there is so much free energy available to us that we can afford to accomplish a lot in terms of increasing happiness.

Morality is deeply personal and thus highly subjective. Your moral opinion and my moral opinion on any particular subject may be completely opposite, but I don’t think either of us would have a hard time explaining them. Morality is a personal value system. It is most certainly not arbitrary. Our morals are based on everything that we’ve been taught and shaped by everything we experience. Morality is fluid.

Ethics are different. They’re a means of codifying morals for a group in a way that is not fluid. Ethics are usually unambiguous. Laws are based on ethics. Medical care standards are based on ethics. Practitioners of both of these professions are bound by a rigid code of ethics that supersedes their morals. A college fraternity may have an ethical code that puts a member at odds with their morals and the law. Violating those fraternal ethics will probably result in being ostracized by the fraternity.

Jack Kevorkian is a textbook case of morals versus ethics. His morality led him to assist patients in ending their lives. His actions resulted in the State of Michigan revoking his medical license and later finding him guilty of second-degree murder and imprisoning him for eight years. Law is the highest level of ethics. “Ethical law” is a tautology, but the morality of law can be argued. In my opinion, that’s the only level at which morality can be considered ambiguous: as it applies to ethics.

Begging the question. What is your definition of self-actualization? How does one know when they’ve achieved it? How do we place people on a self-actualization scale and what do we do to help them climb it?