Yes: by external pressures. So again, how do I decide what the moral thing to do is? We can illustrate it by a simple situation.
There is a man in my house; I can see that he has a gun. I have an opportunity to hurt or kill him (the action will at least hurt; my finite ability to foretell the future means my action may kill him as well). Please describe how evolution will tell me, a moral agent, what to do.
While you’re working that out, ask yourself:
If I am a murderer, and I kill a bunch of people, and I go to jail for a life sentence, was I right to do so (after all, didn’t I survive?)? If I have access to a doomsday machine, and I have selected five hundred humans I cherish very much, may I kill everyone else but us?
Perhaps you feel these are terribly silly questions. I, for one, find evolutionary morals to be silly. So while I humor you, please humor me, as I am a little confused. As far as I can tell, evolution’s only criterion is “doesn’t kill everyone”. I am fairly confident that any attention to detail will indicate that morality goes at least a hair beyond that.
You are confusing murder with killing. They are not equivalent, either morally or legally. Many, perhaps most, ethical systems say it is permissible to kill in certain circumstances, such as self-defense. A police officer is trained to evaluate situations, and to act only if necessary. (Not that it always works out, of course.) A sociopath does not have an ethical compunction against killing.
It is certainly possible for there to be a consistent ethical system that forbids stealing in general but permits it to save the life of a child. That is why simplistic religious moral systems, based on a few words in an old book, are so useless.
First of all, if he poses an imminent threat to you, or is likely to pose one, it’s in your best interest to incapacitate him. This much is obvious. However, if there is NO imminent threat, it gets a bit more complicated. While you might well put yourself in more short-term danger by confronting him, there is a social contract that puts a duty upon you to confront him anyway. As a society, we NEED any act that benefits an individual at the expense of society to be heavily selected against. Everybody NEEDS to know that trying to rob a house at night is a downright risky and dangerous undertaking. Thus, it is moral to confront the attacker even at risk to your own life.
First of all, I fail to see how a life in prison without possibility of procreation counts as survival. Again, I point out that the number of years lived is immaterial; what counts is the long-term propagation of genetic information. At best, you’ve reduced your chances of further propagation to nil; at worst, you did this before having children and you’re in effect merely a walking corpse. But apart from that, murder as a whole is clearly detrimental to society, so of course it’s not a moral act.
As for the doomsday machine case, it’s an absurd and contrived question. If you contracted a rare disease which meant that the more you loved someone, the sicker they got, would it be right to love people? I mean, after all, God said love thy neighbour, right?
Apart from obvious flaws, such as the fact that 500 people probably won’t have enough genetic diversity or resources to get a decent start, why would you ever want to do such a thing?
I think you’re still holding onto the naive notion that what I’m proposing is doing whatever you can to live, which is not what I am saying at all. Whether you live or die is immaterial to whether any trace of you remains in 10,000 years’ time.
"Do not unto others what you would not others do unto you."
i’d been following this long before i ever heard of this quote, wherever its origin may be. even so, the smallest child would learn not to punch another if he is quickly reprimanded or punched back. there is no need to bring religion into this.
in peace, you would not kill another because you would not want to be killed in turn. in war, you will kill another because you have taken up the cause to protect your country; you would give your life for your country as your enemy would do the same for his. therefore, if your orders were to kill, you would do so, as you would expect to be killed in return. failure to do so would have been a betrayal of your country, a betrayal of what you yourself had committed to do, and the same vice versa.
in short, you would not betray your command as you would not expect your enemy to betray his.
this conforms to the rule as well - say you enjoy tentacle sex and assume others do too; if that were the common view in your village, then everyone would be in hentaiverse doing the slow tentacled dance. however, if you discover that your views are not popular, then you would take care not to force your tentacles on anyone, lest they force things on you too.
you don’t expect anything from the worms, but as you hope others would rescue worms in their turn you begin leading by example.
not to mention wasps fighting each other would not use their deadly sting and that most lion fights are not lethal.
i would have the murderer killed by the state, just as i would expect the same if i did something illegal that warranted such an action; the same goes for the law enforcement officer.
hope i am making sense.
But this isn’t really doing unto you now. It’s merely “Do what you think others (including yourself) should do”. Seeing as this is the base definition of morals, what you’ve really argued is that it’s moral to be moral.
You still haven’t enunciated exactly WHY you “hope” others would rescue a worm.
So raping anti-abortionist women would essentially be the most moral thing a person could do?
Society defines murder, and its scope is ambiguous. Right now, society doesn’t wholly agree with your interpretation of what to do to the man in my house. So I don’t think, with your evolutionary standard, you can use this kind of support.
These are the kinds of things that we think about in philosophy, on the off chance that very powerful men exist that control weapons of mass destruction. One might also say, “Well, natural disasters wipe out large numbers of people and we’re still around, so where’s the harm?”
Yeah, well, we’re not talking about God, we’re talking about evolution, and as far as I can tell, whether I kill the rest of the planet (apart from the chosen 500) or not we’re still surviving. I’m using that “moral reasoning” thing you were talking about, asking myself the question, “Would this kill us? No. Will it promote our survival? Sure: same amount of resources, less people. So let’s get fucking.”
So it is not the survival of the species, but survival of me that is moral? Make up your mind! Do I have a duty to enforce the social contract by placing myself at risk, or do I have the moral commandment to spread my genes far and wide?
To be clear, by “survival of me” I do mean the spread of my genes. I thought context would make it clear, but I just want to be sure we don’t go over that again.
But if ethics were truly universal, then your enemy would not kill either, and you would not be in the dilemma. In fact, you could probably prove that any meme or gene for pacifism is unstable, since any mutation (in the loose sense) allowing someone to kill would be extremely advantageous in a society of pacifists.
[stretchier]
because you hope to live in a world where others would rescue worms, you would… do so yourself first? because… you sport a seizure whenever you see drowning worms? so… you would spare others the sight as you hope others will spare you? yeah, that’s it.
[/stretchier]
ah forget it. i have no idea why anyone would want to rescue worms, nor is it my point to explain the why. just trying to illustrate how the Rule could be followed somewhat even in weird scenarios.
Voyager
society of pacifists - where people have chosen to follow the Rule not to murder because they would not have murder done unto them. that does not preclude killing a criminal in a system of justice, because the same pacifists would appreciate law and order; that is, if they were to kill, they would expect to be killed. the same goes for the mutant.
all of them are able to follow the same Rule, resulting in a society shaped by the majority.
Hmm - pacifists in favor of the death penalty. An interesting concept.
You are also confusing killing with murder. The rule against murder basically means “you will not kill one of us.” That seems like a fairly universal ethical rule, since not having it would seem to lead to unstable societies. An ethic against killing is a totally different story - and your example is of a society without a rule against killing, only against murder.
I’m not really a true atheist, but I’m allergic to all religions, and tend to eschew them. e-e-e-e-ehshew! e-e-e-e-shew! (See, even typing the word ‘religion’ makes me sneeze.)
So, where do I get my ethics? Since I am lazier than the average human, I like to follow the path of least resistance, to take ‘the easy way out.’ Thus:
It is easier NOT to lie, cheat or steal, so I don’t.
It is easier NOT to hurt or injure my fellow man, so I don’t.
It is easier NOT to commit adultery, so I don’t.
etc., etc.
For me, the easiest ways are the best ways; and so easy to follow, too!
See, the thing about that method is that you have to go an extra step. You’ve done pleasure/pain as your good/bad, but you still need to extend to “good/bad for whom?” For you, for others, for you and others, for which others, and what combination and in what balance?
Additionally, I think there is a place for rule-based consequentialism rather than merely situational ethics in this framework. After all, our knowledge is generally limited and we can’t foresee all the consequences of any given action. So in practice, following a set of ethical rules designed to maximize positive consequences while minimizing the risk of unintended negative ones might be much more efficient than trying to predict the unintended consequences of your actions. Consequentialism does not, of itself, require purely situational ethics, because it is at least possible that universal rules might meet the requirement of maximizing good.
As for the general question of the OP…well, my present thinking (and I’ve been all over the map on this one) is that since the world exists objectively, then one ought to be able to determine objectively whether an action is right or wrong. After all, if I could reason perfectly and was completely informed then I would easily be able to determine the one best action, and all the others would therefore be the wrong actions.
But my former thinking on this was that moral statements lack a truth value because they are not declaratory statements. A statement that “stealing is wrong” isn’t really a statement about the nature of stealing, rather it is a moral exhortation “hey you, don’t steal!” When you say some things are right and others wrong, what you are really saying is “do these things, don’t do those things.” In that case, it’s an imperative statement, and so has no truth value. As such, we would have moral relativism but not be burdened by inane postmodern nonsense about metaphysical relativism.
Oh, as for evolutionary ethics…well, they seem to put way too much importance on some mythical notion of “species”, as if right and wrong was done not to individuals but to some collective body. That seems an awful lot like what cultural conservatives argue when they claim certain things (gambling, pornography) have a negative effect on “society”, even though they have a hard time pointing to particular individuals who were wronged, unjustly, by these things alone.
IMHO, the ethical value of an action needs to be measured in how it affects individuals, not societies, nor cultures, nor species. It’s individuals who have hopes, fears, ambitions, aspirations, and dreams, and so it’s individuals who are harmed by having those goals frustrated, or benefitted by having them furthered. Even if you destroy every human on the planet with a Doomsday Machine, that action would be wrong (assuming it is) because of the individuals whose lives were ended, not because it ended the lives of a species. Once evolution produced a creature capable of reason, capable of making the choice not to sacrifice itself for the herd for example, evolution became irrelevant to ethics.
let’s rewind this a bit. you were replying to my claiming a soldier would not betray his command to kill as he would not expect his enemy to betray theirs:
not knowing how to tackle this directly, as i have already explained how a soldier can follow the same Golden Rule in peace and in war, i went on to the second part of your post.
here is where i went offtrack. i did not describe the soldier in my first example as a pacifist, merely as someone who could decide to kill or not under the same Rule in different circumstances. in my hurry to describe how a society with differing opinions on murder can subscribe to the same Rule, i have taken the ‘society of pacifists’ as a ‘society of people who would not murder’. my mistake.
:smack:
so allow me to jump back before my previous post and tackle what i skipped before.
a defender following the Rule would react as my soldier did up above; an aggressor following the same Rule might reason: my country needs me to attack (the defender), failing which there will be continued suffering and strife (or whatever). i would not refrain from facing my duties for the greater good of the country, as i would not expect my enemy to abandon his in the same situation; so i will kill where i normally would not.
am i making sense? or am i going around in circles? someone shoot me already.
Never said it was easy. I see no reason to just consider my own pleasure/pain, or just the pleasure/pain of a certain race/species/nationality/gender/sexual orientation/anything else, so what it comes down to is trying to ascertain how much pleasure/pain will be caused to how many individuals and hoping the bottom line ends up in the black.
Yes. I actually agree with everything you say, but you haven’t bought the premise.
The “thou shalt not murder” rule is pretty universal, but that can be easily explained by social and genetic factors. (Even chimps don’t murder, but they kill.) Say the “thou shalt not kill” meme was as ingrained as the “thou shalt not flap your arms and fly” one. If there were universal, say god-given, moral rules, this might be so. In that case all your examples don’t work, since there is no reason for me to kill the enemy soldier since I’d know he’d never kill me. We might rough it up, call each other names, etc., but no killing.
Perhaps pacifist is too weak a term. In a society that never kills, a killer, or set of killers, would have a tremendous advantage.
The point, which I suppose I was not clear enough about, was that universal ethics, not “relative” or time and culture based, would result in a very different world.
The problem with that is that you have to weight it somehow. If you don’t weight your own pleasure/pain higher, you might end up being forced to conclude that it’s morally right for you to sacrifice your own life to save two lives. That’s something you might do for family, but probably would think outrageous to do for two random strangers. I think we all end up making our own valuations and weighting somewhat naturally, but it’s difficult to formulate.
You also have to distinguish between morally obligatory actions, morally permissible actions, and morally forbidden actions. Is total self-sacrifice obligatory if it produces a greater volume of pleasure for other individuals, or is it merely permissible under that standard?
You’d have to make all sorts of valuations and predictions and balance them all constantly. I would think, therefore, that it becomes necessary to adopt a rule-based ethic for two reasons. The first is merely practical, that most people simply cannot expend the effort to constantly balance those equations at every turn, nor to gather enough data to correctly balance the sides. Secondly, if you lack a fundamental guiding principle beneath the level of the distinct decisions, then you would probably end up making arbitrary and inconsistent decisions. Differing decisions at different times/places/situations can only be reconciled to each other by a binding ideology that underlies them, otherwise your pattern would be riddled with contradiction and therefore not a correct ethical theory.