A possible reason, yes - but not a necessary one. I for one believe ethical obligations are based on external moral principles, and I admire Kant’s Rule of Universality.
What makes people tick fascinates me too.
Despite its name, consequentialism has a lot to do with intent and motives.
If there were an official group of Kantian moral absolutists and I managed to get accepted, I think I would turn out to be the most consequentialist of them all.
If I understand correctly, it is some sort of moral pragmatism, am I right?
Consequentialism is the least bad of the choices offered, but it apparently is what I actually use on a day to day basis.
What are the effects of my actions and choices?
How are others affected by my actions and choices?
How can I bring about the best outcome, and by what means?
It seems a basic definition of modal operations for a scientific-minded person.
But, I wanted to be Spock when I grew up so I may be a bit too analytical about this kind of thing.
1 and 2 go together and they’re my choice(s). You have malum per se regardless.
If the Greeks surmised there was a Southern Continent and they were right, I guess we can take a comparatively more educated guess and express our opinions on what the Universe is like.
Unlike my previous polls, this one seems to work fine from the very start.
Unless you are the store owner who was barely making enough to get by in the first place. Because people stole from him to feed their families, he could no longer pay the rent, lost his business, and now has no way to feed his own family. Unless he comes to your house and steals the bread back from you.
I’m not liking that system.
I would pick Consequentialism, but I think it’s kind of a cop-out. If I’m not ascribing a moral value to the consequence of an action, then why does it matter what the consequence is? This system only works if you assign a moral value to the consequence (using some other system?), and then use that conclusion to judge the rightness or wrongness of the action.
I voted for relativism, though I think that it has a whole slew of practical problems. But, basically, I think that my opinion about what is right or wrong is correct, and anyone who disagrees is incorrect (unless they can convince me otherwise, and then we’re back to me being right!). The system I use to determine right and wrong is, as marshmallow suggests, mainly based on my feelings and intuition, bolstered by what I believe is an informed, thoughtful, and open-minded position about the different ways in which people interact with each other. I’m kind of a weird mix of absolutism and relativism, but, in life, it seems that most other people are the same way, depending on circumstance.
I voted other because I essentially believe in both an absolute and a relative morality and, while I am a theist myself, it is equally applicable from an atheist perspective as well. I’ve explained it in other threads related to morality, so I’ll try to give a cliff’s notes version.
We can model morality as a series of choices building a decision tree; this is akin to a game tree we might build for a game like tic-tac-toe, chess, or go. With games, we have a desired end state, and if the state space is small enough, as in tic-tac-toe, we can actually calculate from a given state to the end states and make a “perfect move”. As the state spaces grow larger, in games like chess or go, it’s theoretically possible to calculate a perfect move given a sufficiently powerful computer, but at least for now it’s not practical, so instead we develop heuristics to take our best guess at the optimal move.
I relate this to morality, with two key differences. One, there’s no foreseeable, desirable, achievable end state (i.e., the ones we can achieve, like self-annihilation, are obviously bad), and two, the ultimate end state isn’t necessarily objective. That said, assuming a God capable of observing all of space-time, he would have sufficient knowledge and power to calculate an ideal choice for a particular goal, and could presumably also provide reasonable heuristics approximating those goals, which we receive as divinely provided moral rules. Without assuming the existence of God, we can define our own goal states and determine better and better heuristics for attaining them ourselves; the only difference is that it’s bottom-up rather than top-down.
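To make the “perfect move” idea concrete: here is a minimal plain-Python minimax sketch for tic-tac-toe (my own illustration, not from the post). Because the state space is tiny, we can walk the entire game tree from any position and return a provably optimal move; for chess or go, this same procedure is the one that becomes computationally impractical, forcing heuristics.

```python
# Exhaustive minimax over the tic-tac-toe game tree. A board is a 9-char
# string ('X', 'O', or ' '), indexed 0-8, read left-to-right, top-to-bottom.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
             (0,3,6), (1,4,7), (2,5,8),   # columns
             (0,4,8), (2,4,6)]            # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full, no winner: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        opp_score, _ = minimax(child, opponent)
        score = -opp_score  # zero-sum: opponent's loss is our gain
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# X has two in a row on top; minimax finds the immediate winning move.
score, move = minimax('XX OO    ', 'X')
print(score, move)  # → 1 2
```

From the empty board the same call returns a score of 0: perfect play by both sides is a draw, which is exactly the “solved game” situation the post contrasts with chess and go.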
And there’s necessarily a relativistic aspect to morality, just as there is to a sufficiently complex game. Sure, there are some rules that always or almost always apply, and you’ll get little or no dispute that it’s a simple and solid moral rule; not murdering is a pretty straightforward example. At the same time, though, we can all come up with morally ambiguous situations where simple rules aren’t going to work, and we need a greater degree of precision. If two people disagree about what the correct moral choice is, but both are moral individuals, there’s no objective way to say who is actually correct without actually working out the whole chain of future events.
If it’s non-orientable, is it occidentable? Perhaps Western imperialist expansion is more significant than we knew.
I selected consequentialism, and I also agree with the point Blaster Master has outlined regarding omniscience.
In many respects, I think it more closely mirrors the reality we live in, and it especially comes into play in law and judgment (not just in contexts of authority, but in social contexts too).
No, but if this expansion is nous-dependent, then it may be occipitable though.
How did you vote in the poll?
NM, answer doesn’t really make sense now that threads have been merged
I apologize, it’s been brought to my attention that the two polls were not about the same thing, so I have split them off again and I retract my last note in this topic.
Of course. Posts 35-38 do not belong here and they should be removed.
The majority of votes opt for consequentialism, a choice that, in practice, leaves room for a lot of unsolved problems.
If there’s a moral system without troubling thought experiments and logic bombs, I missed it.
To me it’s like trying to assemble a huge list of ways to make a good movie. But what makes a good movie is subjective and is barely related to logic. You could follow the list perfectly and make a horrible movie. Or you could flout them all and still make a good one. It’s a case-by-case thing. I guess that makes me a moral skeptic.
Moral relativism gets a bad rap, but I think it makes good points. That convo always seems to go to extreme examples like slavery or some horrible tribal practice. But individuals are constrained by the systems around them, factually speaking. What’s the moral way to act in an amoral job, like if you’re a CEO or work at the Pentagon or something? Just quit and let some other cog move up? Try to change it from within? But that’s a common justification for standing pat. It’s understandable why people wouldn’t want to buck the system and take the heat. It’s a conundrum.
Speaking of unpleasant thought experiments, I’ve always been rather fond of the human extinction argument as the best way to end human suffering. Most moral systems place more value on ending suffering than creating happiness. Of course we in the present don’t want to die, but that’s our bias. And we’d be doing a big favor to the future population, which most likely outnumbers us by a huge margin (especially if we ever colonize the galaxy, it could be trillions of humans).