What would those social reasons be? My apologies if you already stated them; could you explain again? And remember to look at the worst case: if there are social reasons for me not to kill the people I love, those reasons aren't going to apply to my killing people I hate.
kputt’s insightful posts call to mind the guy who answers his cell phone during the symphony. I was hoping glaring at him and telling him to hush would learn him; apparently not. Next up: DNFTK.
As an atheist, I object to this – there are plenty of people who believe in God for reasons other than fear, I think. Especially people in emotional, ecstatic religious traditions. However, we probably should start a new thread if we want to discuss this point any further.
I would never do that in the first place. I don't need any know-it-all trying to tell me to shut up when in fact he should be the one to shut up. Please grow up and stop judging people, especially those telling it like it is without sugar-coating it first, like most people like to do.
ok, the reasons you don’t kill people you love are obvious.
one reason you don’t kill strangers is the complexity of human interaction. you don’t know what that person has to offer society or you specifically, and you can never know whether the cost of his death will outweigh the benefits of killing him. suppose you know a man you went to school with, and you hate him for being mean to you. and if you can “perfect murder” him, you will gain an advantage at, say, a job or some such. suppose you do that, and you never know that he was a very gifted educator who would’ve taught your daughter in elementary school. as a result, your daughter gets a second-rate teacher, grows up running away every month, has a kid when she’s 16, and ends up living with you on and off till she’s 35, with a new job every month. i mean, it could happen. as could any number of things that directly or indirectly benefit you. so those possibilities may stay your hand.
it is also conceivable that you may hate a relative who’s giving you stuff in his will, and can “perfect murder” him, but his value to society is such that you benefit more from having him alive.
i call these social reasons. they are not reasons society views the perfect murder as wrong. society can have no such view. there is no such thing as a perfect murder in society, because society never knows about it. but they are reasons based on your interaction with society that you might benefit more from having a person alive than whatever material amount you stand to gain by committing the perfect murder.
due to the huge web of interaction that is human society, you never know who will be hurt or harmed by the loss of a particular member.
if someone can gain more than they’d lose by committing the “perfect murder”, i suspect that person will do it. if they uphold a moral code that prevents any sort of murder, they still have to get past integrity and conscience in order for the good to outweigh the bad. indeed, there are cases when, in the mind of society, the death of someone outweighs what society stands to gain from his life. those people are executed, or killed in self-defense.
If the person has a non-social-contract moral code, then they fall outside of my question: I completely understand why (for example) someone with a rights-based moral code won’t commit murder. Let’s dispense with that kind of person. We’re only looking here at social contract theorists.
Sure, I might not know the potential harm that would come out of murdering someone. But that’s not a situation unique to murder: I don’t know the potential harm that would come out of countless decisions I make every day. I operate on my best guesses as to the harm or benefit that results from such actions.
Sure, I might kill some guy, and if I hadn’t killed him, he might have been my daughter’s most profound teacher. On the other hand, maybe he would have been the guy that kidnapped and killed my daughter, and by murdering him, I’ll save my daughter’s life.
Which of these completely unpredictable possibilities should I take into consideration?
Answer: neither, unless I know that the fellow is likely to teach (or kidnap) my daughter. And surely you acknowledge that there are plenty of cases where I’d have no reason to predict either of these.
It seems to me that the “social reasons” for not killing someone all fall into one of two camps:
Unpredictable events, which I can’t take into consideration in any decision I might make; or
Reasons why I wouldn’t kill specific people (maybe I think grandpa, a mean old bastard but a great philanthropist, serves the world better alive than dead). Remember that I asked you to address a worst-case scenario: if a moral system only prevents killing grandpa when the would-be murderer considers him a boon to society, it’s not a very strong moral system.
Again, I’m not arguing that murder is okay. I’m arguing that social contract theory, or indeed any enlightened self-interest theory that doesn’t rely on an omniscient judge, is inadequate as a moral system, inasmuch as it provides no restrictions against committing perfect crimes (i.e., crimes in which nobody capable of retaliation realizes a crime was committed).
Yet a society that engages the social contract but still has “unbelievers” committing the perfect murder is akin to a society that believes in the Ten Commandments yet has unbelievers committing coveting, no?
The unbelievers in the coveting scenario choose to follow, due to societal pressure, all of the rules that the commandments put forward, except the ones they can get away with breaking. The true believers will police themselves out of self-interest.
The “unbelievers” in the perfect-murder scenario will commit the murder out of self-interest because they think they will not get caught; they are not, by definition, believers and will not police themselves.
kputt – If you read through the thread you will find that I wanted to challenge people’s reasons for why it is wrong, not whether or not it is wrong.
We all know it’s wrong; now would you mind telling me why you think it’s wrong?
My only way to understand the world is through my own experiences. Here are some of my experiences:
I don’t like to be hurt.
I fear death.
I watch other people around me, and see that they behave in ways similar to how I behave.
People who are murdered behave in ways I would behave if I were experiencing pain.
I have lots of things I want to do in life that I’d be prevented from doing if I were murdered.
The conclusions I draw from these experiences?
Murdering me would hurt me and prevent me from doing many things I want to do. Murder would therefore be an extremely negative phenomenon in my existence.
Since other people behave in ways analogous to me, they’re likely to experience the world in ways similar to how I experience the world.
Other people would likely experience their own murder as an extremely negative phenomenon.
Now, my morality is based on extending my own experiences. I’d really rather not be murdered, and I want people not to murder me because I’d find it negative. Other people don’t want to be murdered, and I therefore want people not to murder them, because they’d find it negative in a way similar to how I would.
In other words, I’m projecting :D. If I don’t want it done to me, I don’t want it done to others, either. To order my priorities differently would be unjust.
A few wrinkles in this:
It’s a little more complicated than that. Really, I’m interested in not frustrating the fulfillment of desires, since in my experience the frustration of the fulfillment of desires is a crappy thing. If someone wants to get killed, hey, more power to them.
Because I’m interested in the fulfillment of desires, I don’t limit my moral system to humans, and I don’t extend it to every entity with human DNA. Bonobos have desires which often seem analogous to mine, and I therefore try not to thwart them; fetuses have no desires that I can recognize, so I’m not so worried about not thwarting them.
This is all prima facie – in other words, all things being equal, I’ll try not to thwart desires. However, sometimes desires come into conflict; in that case, I’ve got to figure out which set I’m going to try not to thwart.
Finally, although I don’t have a good moral reason for it, I give more weight to the desires of entities I’m close to than to those of entities I’m not close to. My own desires and those of my wife and closest friends take priority for me, for example; the desires of a pig have less priority. However, a pig’s strong desire not to be tortured to death would take priority over a close friend’s desire to shoot a pig in Reno just to watch him die.
That, at any rate, is a roughly-sketched outline of an alternative to social contract theory. Note that my morality doesn’t allow for perfect murders, and it provides protection to infants and most of the severely mentally disabled.
no event is wholly predictable, but you can take into account certain more likely outcomes. if your faith in people is rather high, for example, you might expect that a person is more likely to contribute to society than to take from it.
a few nitpicks here. first of all, i’m not sure what you mean by “enlightened” self-interest. it is my opinion that no moral system exists but for self-interest (why adhere to a system that doesn’t interest you, for example?). also, what makes a moral system more or less adequate? you gave a list of what you thought were “holes” in certain systems earlier. does the fact that a moral system does not prevent something you consider wrong mean that it is inadequate? to me, it just means it’s not your moral system.
Remember to take a worst-case scenario, not the one most favorable to your argument. Let’s not look at whether social-contract-theory would prevent us from killing someone that we think is going to contribute to society, or whether social contract theory prevents Pollyanna from committing murder – surely, it is necessary, but not sufficient, for a moral system to do these things. However, it must also prevent us from killing a mean old lady who lives in a nursing home and snaps at the orderlies and really provides little joy to anyone’s life. And it must prevent Oscar the Grouch from committing murder. If the moral system relies on the would-be murderer’s high opinion of the would-be victim, it’s on shaky ground.
re: Self-interest: when I use the term, I’m talking about making decisions that immediately make me happy, without necessary regard to altruism or empathy. Enlightened self-interest theories do not rely on people behaving well because they empathize with one another; they suggest you can reach a good morality without empathy or altruism (although generally they don’t forbid such motives, they don’t rely on them).
In terms of what makes a moral system adequate, I know of several criteria:
The moral system gives you a sense of what to do in a broad variety of cases. A moral system that consists entirely of the maxim, “Do not hurl papayas at men in pink suits!” may be good as far as it goes, but is not particularly useful.
The moral system is not self-contradictory. A moral system that says, “Do not kill anyone! Kill the heretics!” is a little difficult to follow.
The moral system is just – that is, it suggests relevantly similar actions in relevantly similar circumstances. A moral system which says, “Black men should be put in prison for drug use, but white men should be slapped with community service for drug use” is unjust, unless the moral system explains why skin color is morally relevant (or why the drug use is morally different).
The moral system does not rely on factually incorrect premises. A moral system that says, “Do not feed papayas to people, because papayas are the egg-cases of malevolent alien beings who want to invade earth” is problematic.
The moral system’s conclusions derive logically from its premises. A system that says, “People do not like to be hurt. Infidels are people. Do unto others as you would have them do unto you. Therefore, hurt infidels!” has logical problems.
The moral system reflects our intuitive judgements about morality. A moral system which leads you to conclude that killing mildly retarded children is correct flies in the face of intuitive morality, and therefore has serious problems.
There are other criteria, but you get the idea. Note that the last one is probably the most controversial: many philosophers think that if a system meets all the other criteria for a good moral system, any contradiction it has with your intuitions should be decided in favor of the system. I, and other folks, disagree: I think that intuitions about moral judgements are powerful and deserve attention. Often, but not always, they point to a serious flaw in the system.
In this case, the fact that perfect murders and killing one’s own children are not forbidden by a social contract points to a serious flaw in social contract theory, IMO. Intuitively, I know perfect murders are wrong, which raises a red flag for me when I look at social contract theory; digging deeper, I discover that I don’t base morality off SCT at all, but rather off a sense that what’s wrong when done to me is probably equally wrong when done to others relevantly similar to me.