What moral system appeals to you most?

Consequentialism is the least bad system offered. None are that useful for deciding big questions, since one can find inconsistencies in the simplest Mickey Mouse thought experiments used to justify their own existence. Now try scaling that up to actual real-world issues. Great, everyone’s a consequentialist. Is interventionist war good or bad? Always, never, sometimes? If it’s gray, how do you decide? You can’t predict the future. The fog of war says good friggin’ luck even understanding the present. And we can pull a Zhou Enlai and say it’s still too early to decide about the French Revolution.

I think moral frameworks are reverse-engineered to agree with our pre-existing subjective moral intuitions. Because everyone dips a toe in each one, they are practically useful tools for creatively attacking enemies (X is always wrong! or He should have known Y would happen if he did X!) or defending allies (suddenly everyone’s a relativist, constrained by their unique situation and unable to predict anything).

And there aren’t surprises, or discoveries. No one sits down and says “wow, I used to think X was immoral, but after becoming a negative utilitarian I guess it’s peaches and cream.” Maybe a robot would.

I’m not sure. I’m beginning to suspect long-term self-interest is synonymous with “good” and short-term self-interest is synonymous with “bad”.

No, concern is the response to a problem.
Not the same.

Thank you. I’ve been under snow for a couple of days and been unable to respond. I’ll catch up. I appreciate your kindness.

A possible reason, yes - but not a necessary one. I for one believe ethical obligations are based on external moral principles, and I admire Kant’s Rule of Universality.

What makes people tick fascinates me too.

Despite its denomination, consequentialism has a lot to do with intent and motives.

If there were an official group of Kantian moral absolutists and I managed to get accepted, I think I would turn out to be the most consequentialist of them all.

If I understand correctly, it is some sort of moral pragmatism, am I right?

Consequentialism is the least bad of the choices offered, but it is apparently what I actually use on a day-to-day basis.
What are the effects of my actions and choices?
How are others affected by my actions and choices?
How can I effectuate the best outcome and in what manner?
It seems like a basic modus operandi for a scientific-minded person.

But, I wanted to be Spock when I grew up so I may be a bit too analytical about this kind of thing.

1 and 2 go together and they’re my choice(s). You have malum per se regardless.

If the Greeks surmised there was a Southern Continent and they were right, I guess we can take a comparatively more educated guess and express our opinions on what the Universe is like.

Unlike my previous polls, this one seems to work fine from the very start.

Unless you are the store owner who was barely making enough to get by in the first place. Because people stole from him to feed their families, he could no longer pay the rent, lost his business, and now has no way to feed his own family. Unless he comes to your house and steals the bread back from you.

I’m not liking that system.

I would pick Consequentialism, but I think it’s kind of a cop-out. If I’m not ascribing a moral value to the consequence of an action, then why does it matter what the consequence is? This system only works if you assign a moral value to the consequence (using some other system?), and then use that conclusion to judge the right or wrong of the action.

I voted for relativism, though I think that it has a whole slew of practical problems. But, basically, I think that my opinion about what is right or wrong is correct, and anyone who disagrees is incorrect (unless they can convince me otherwise, and then we’re back to me being right!). The system I use to determine right and wrong is, as marshmallow suggests, mainly based on my feelings and intuition, bolstered by what I believe is an informed, thoughtful, and open-minded position about the different ways in which people interact with each other. I’m kind of a weird mix of absolutism and relativism, but, in life, it seems that most other people are the same way, depending on circumstance.

I voted other because I essentially believe in both an absolute and a relative morality and, while I am a theist myself, it is equally applicable from an atheist perspective as well. I’ve explained it in other threads related to morality, so I’ll try to give a cliff’s notes version.

We can model morality as a series of choices building a decision tree; this is akin to a game tree we might build for a game like tic-tac-toe, chess, or go. With games, we have a desired end state, and if the state space is small enough, as in tic-tac-toe, we can actually calculate from a given state to the end states and make a “perfect move”. As the state spaces grow larger, in games like chess or go, it’s theoretically possible to calculate a perfect move given a sufficiently powerful computer, but at least for now it’s not practical, so instead we develop heuristics to take our best guess at the optimal move.
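To make the game-tree idea concrete, here’s a minimal sketch in Python (my own illustration, not anything from the poll or the posts above) of what a “perfect move” calculation looks like for tic-tac-toe, where the state space is small enough to search exhaustively with plain minimax:

    # Exhaustive minimax over tic-tac-toe's small state space: from any position we
    # can search every future line of play and pick a provably best ("perfect") move.

    def winner(board):
        """Return 'X' or 'O' if that player has three in a row, else None."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
                 (0, 4, 8), (2, 4, 6)]              # diagonals
        for a, b, c in lines:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) from player's perspective: +1 win, 0 draw, -1 loss."""
        won = winner(board)
        if won is not None:
            return (1, None) if won == player else (-1, None)
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0, None                          # board full: draw
        opponent = 'O' if player == 'X' else 'X'
        best_score, best_move = -2, None
        for move in moves:
            child = board[:move] + player + board[move + 1:]
            score, _ = minimax(child, opponent)
            score = -score                          # opponent's best case is our worst
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

    # Usage: the perfect first move for X on an empty 3x3 board (a 9-character string).
    # The score is 0 because tic-tac-toe is a draw under perfect play by both sides.
    print(minimax(' ' * 9, 'X'))

Chess and go work the same way in principle, but their trees are far too large to search exhaustively, which is exactly where the heuristics mentioned above come in.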

I relate this to morality, except for two key differences. One, there’s no foreseeable, desirable, achievable end state (i.e., the ones we can achieve, like self-annihilation, are obviously bad), and two, the ultimate end state isn’t necessarily objective. That all said, assuming a God capable of observing all of space-time, he would have sufficient knowledge and power to calculate an ideal choice for a particular goal, and could presumably also provide reasonable heuristics approximating those goals, which we receive as divinely provided moral rules. Without assuming the existence of God, we can define our own goal states and determine better and better heuristics for obtaining them ourselves; the only difference is that it’s bottom-up rather than top-down.

And there’s necessarily a relativistic aspect to morality, just as there is to a sufficiently complex game. Sure, there are some rules that always or almost always apply, and you’ll get little or no dispute that it’s a simple and solid moral rule; not murdering is a pretty straightforward example. At the same time, though, we can all come up with morally ambiguous situations where simple rules aren’t going to work and we need a greater degree of precision. If two people disagree about what the correct moral choice is, but both are moral individuals, there’s no objective way to say who is actually correct without actually working out the whole chain of future events.

If it’s non-orientable, is it occidentable? Perhaps Western imperialist expansion is more significant than we knew.

I selected consequentialism, though I also agree with the point Blaster Master has outlined regarding omniscience.

In many respects, I think it more closely mirrors the reality we live in, and it especially comes into play with law and judgement (not just in examples of authority, but even in social contexts).

No, but if this expansion is nous-dependent, it may be occipitable, though.

How did you vote in the poll?

Note retracted.

NM, answer doesn’t really make sense now that threads have been merged