Suppose you meet a purely logical being who asks you why murder is considered wrong. How would you go about explaining it to him?
I’d mumble something about the golden rule or some such. Then murder him for not leaving me alone.
Human progress moves more quickly when we’re not engaged in mere day-to-day survival. If murder were casual and common, we’d have to spend, indeed waste, a great deal of time and effort on personal defense, with all long-term goals (if any) geared toward that end. Under the social contract that has evolved in animals like humans, we can afford to concentrate on other issues.
The logical being will have to accept as axiomatic that the survival and progress of the human species are desirable, at least to humans.
Categorical imperative. Perhaps followed by the utility argument laid out above.
I couldn’t make a case for any moral code to a perfectly logical being. I can’t rigorously define “should” or isolate an ethics molecule. I’d have an easier time making a case for the existence of God, and we all know how well that goes!
We say things are ‘wrong’ as if that rested on some thorough and consistent moral code, but murder is just something we don’t want to happen to us, so we’ll call it ‘wrong’ whether there’s a moral basis or not. We made it a law; who cares if it’s right or wrong? We still don’t want to let people get away with murder, because next time it could be us or ours.
I was going to say something very close to what Bryan Ekers said. It’s counterproductive at the species (or village, or tribe, etc.) level.
Purely logical beings are horrible things. I’d avoid the problem by murdering it.
We have to establish first principles.
Does the purely logical being want to continue to exist? If not, does the purely logical being have any desires at all?
If the purely logical being has no desires, I’m stuck.
But if it does, then we’ve got the starting point for an axiom: a desire is a thing you believe should happen, with “should” meaning “a thing that makes life better.”
Do you believe desires exist in other entities?
On what logical basis do you privilege your own desires above the desires of others? The fact that you feel yours and not theirs does not suffice: a desire is, axiomatically, a thing that should happen. It’s significant that, on balance, every entity is best positioned to fulfill her own desires, but that doesn’t mean other desires may logically be ignored; at most they may be deprioritized according to how well the logical entity can fulfill them relative to how well the desirer can.
A rough calculus is made when desires conflict: I desire to fall asleep right now, but if I do, I’ll really worry my wife who’ll wonder why I’m not home, and my desire to not freak out my wife is stronger than my desire to nap. A similar calculus may logically be made when considering other people’s desires.
The desire to continue existing is, in most entities, paramount: almost all other desires depend on it (with a few rare exceptions, such as suicidal entities or entities dying to protect those they love). When weighing conflicting desires, it’s logical to weigh an entity’s desire to continue existing above almost any other desire.
Murder bad.
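If it helps to see the shape of that weighing, here’s a toy sketch in Python. Everything in it is an illustrative assumption, not something the argument fixes: the 0–10 strengths, the existence bonus, and the sample desires are all made up.

```python
# Toy sketch of the "rough calculus" above; all numbers are invented.

EXISTENCE_BONUS = 100  # encodes: continued existence underpins all other desires

def strongest(desires):
    """Pick the winning desire, boosting any desire to keep existing."""
    def score(desire):
        name, strength = desire
        return strength + (EXISTENCE_BONUS if "continue existing" in name else 0)
    return max(desires, key=score)

# Conflicting desires on a hypothetical 0-10 strength scale:
conflict = [
    ("take a nap", 6),
    ("not freak out my wife", 8),
    ("wife: continue existing", 5),
]

print(strongest(conflict))  # ('wife: continue existing', 5)
```

The bonus term is just one way to encode the claim that existence outranks ordinary desires; any scheme in which the existence desire dominates ordinary strengths would make the same point.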
I’d point my revolver at his forehead and ask whether I should pull the trigger.
Murdering people creates more murder in a society, which increases the likelihood that you yourself will be murdered. Ergo, if your continued existence is of value to you, you shouldn’t murder others.
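As a back-of-the-envelope illustration of that argument, here’s a toy model. The base rate and the per-murder bump are invented numbers, chosen only to show the direction of the effect, not its size.

```python
# Toy model (invented numbers): your chance of being murdered rises with
# the murder rate you help normalize by murdering.

BASE_RATE = 0.01   # hypothetical background chance of being murdered
CONTAGION = 0.005  # hypothetical bump per murder you commit

def p_murdered(murders_committed):
    """Chance of being murdered in this made-up model."""
    return min(1.0, BASE_RATE + CONTAGION * murders_committed)

for n in (0, 1, 10):
    print(n, round(p_murdered(n), 3))  # 0.01, then 0.015, then 0.06
```

In this model, the more you murder, the worse your own survival odds, which gives the logical being a self-interested reason to refrain.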
Well, it belatedly occurred to me that spending all our time and energy on personal defense is but one possible approach. Another evolutionary response could have been rapid breeding: if humans could multiply as quickly as rabbits, then individuals could be randomly killed (admittedly, rabbits tend to get killed by predators, not by each other) and the species could still pass on its genes.
This might be my favorite post of all time.
Simple: shared values.
As sentient beings who value our own lives, we communally agree that sentient life is highly valued and mustn’t be taken without grave cause.
Not to mention the premature loss of people who had the potential to contribute to society.
I’d stay away from that one. It too easily leads to justifying the killing of the mentally handicapped.
There are several logical arguments, but all of them rely on the base assumption that “morality” itself is a logical construct. In a randomly generated, transitory world, nothing really matters one way or the other, since there is nothing that can absolutely declare any view to be correct or incorrect. We view the wars, murder, and cannibalism among bacteria as just “nature” and don’t think anything of it, since who gives a flying whoopsie about bacteria? So even if there were some greater being that could declare right and wrong, from its standpoint, who cares?
So, to the extent that we are aware, morality is just a human construct that has no meaning beyond what humanity invests in it. And I think it’s reasonable to say that not much reasoning went into much of human morality. Without at least some base axioms, I don’t think it’s possible to make any sort of logical argument. And without a shared set of axioms, I would just be giving my opinion, not humanity’s opinion, which makes it no longer an answer to what “morality” has to say on the subject.
But if you have an opinion, and I have the same opinion, and by a remarkable coincidence the let’s-not-murder-each-other truce we’d both agree on happens to also be enshrined by our society in general and by each individual who’d rather not get murdered in particular, then the modus vivendi quickly hits “humanity’s opinion” status.
It’s less a shared starting axiom, and more a shared conclusion: I don’t particularly care why you’re willing to go all live-and-let-live with me, I just go with it – and you and I and pretty much anyone else in earshot, we’re all on standby to pick up our weapons and foil anyone who declares himself for the opposite. (Which is why, when asked, they dart their eyes left and right and declare themselves all for live-and-let-live too.)
But that’s a vote, not a logical argument.
“Because it’s popular.”
But you voted for it for what struck you as a logical reason – not just because it was popular. (I mean, I could be wrong about that, and you’re free to correct me, but I’m guessing you can do better than “Because it’s popular.”) At a minimum, I can assure you that I’m voting against institutionalized murder for (a) what strikes me as a logical reason, and for (b) something other than “Because it’s popular.”
Now, your reason might not be my reason, but we don’t need a shared reason; we just need a shared result, voting the same way even without the same axiom.