Puppets, eh? Well, what does that say about free will, hmmm? :dubious:
The reason I reject the social contract argument (for the purposes of this discussion) is that I am not in a social contract with cats.
Nevertheless, if a cat is being tortured, or otherwise suffering, I feel bad about it. Why?
There must be something beyond the social contract that makes us care when another sentient being is suffering.
Yes! An argument along the lines I was hoping for, and it took 79 posts to get here. (I guess I should explain myself better in the future.)
Not that I am fully convinced by the above argument, but this is precisely the type of reasoning I was looking for.
No.
However, even though the human experience of logical thought is limited to humans, logic itself is not.
I’m not a logician, but I think that
(A=>B and B=>C) imply (A=>C)
is a truth that exists with or without humans having ever existed.
(I’m pretty sure several people here will shred the above idea to pieces)
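For what it’s worth, here’s a quick sketch (in Python, with names I’ve made up purely for illustration) that brute-forces the truth table and confirms the statement comes out true under every possible assignment of A, B, and C:

```python
from itertools import product

def implies(p, q):
    # Material implication: "p => q" is false only when p is true and q is false.
    return (not p) or q

# Check ((A => B) and (B => C)) => (A => C) for every truth assignment of A, B, C.
print(all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([True, False], repeat=3)
))  # prints True: the implication holds in all 8 cases
```

Of course, whether that tautology “exists” independently of any mind checking it is exactly the philosophical question.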
You are correct. This is simply an academic exercise.
Yes. To be more general “a logic-based explanation of why suffering in other sentient beings is something that should be avoided”
Skipping over the debate so far and giving a possible answer to the OP. (Not sure if I believe in the premise either, but I’m taking it as a given for the purposes of this post.)
Yes, we’re all just machines at the root of our being. On the other hand, each human being is, first of all, a unique and irreplaceable machine… in that we do not have any way to construct or reproduce a machine that will work in exactly the same way. (Heck, even getting one of these machines to work the same way repeatedly can be a problem, but that’s kinda beside the point.)
Secondly, they are fragile machines, in that compared with most other machines, it’s relatively easy to damage them so that they do not ‘work’ and will never work again.
Thirdly, they are machines that can accomplish some remarkable things… I’m not sure if you’d consider the art of Picasso to be a great accomplishment if we’re just meat machines… he was just one machine who was able to put some colors together on a canvas in such a way as to make a lot of other machines stare at it for a long time and talk about it among themselves… but building other machines that could travel to other planets… that’s gotta count for something, huh??
Okay, I started this post thinking I was going in a slightly different direction than I seem to be ending it in. I still say that even if we’re just machines, each human ‘machine’ should be valued for its uniqueness and the ineffable potential that each machine represents… but if the rest of humanity is just meat machines too, then how many of the things that each machine has the potential to do are actually worth anything in and of themselves, as opposed to just being appreciated by the other machines?? Hmmmm…
That’s about as far as I can get for now. Anyone else wanna continue on with that thought maybe?
Ah, I see. I’m sure the title “For people who are convinced we are meat-machines and who will also please discount the fact that we are guided far more intensely by our emotions than by logic: why is killing bad?” would have been a touch too long. However, it would have made what you sought clearer. But if you knew exactly what you would accept in response at the outset…
Oh never mind.
I’m confused. This sounds rather as if you were hoping that someone would come up with the same answer that you did, and now you’re rejoicing that this is the case.
If so:
- What was the point of the exercise?
- Just because someone else eventually suggested it doesn’t guarantee that it’s correct.
- It’s certainly not obviously what SentientMeat himself suggested, and looks as if it’s simply another line of argument that he suggests can be made.
So, again, what was the point? Or is this the sort of answer you’d hoped for but had not come up with yourself, and you find this one satisfying?
I’m confused. I said:
Where in this did you see me claim that the argument SentientMeat made is “the same answer I came up with”?
- I have not come up with an answer, because I myself am not 100% convinced we are meat machines.
- If I had come up with the answer, why start this thread?
- I was simply stating that the type of argument that SentientMeat used is the type of argument that I was looking for. Not his specific argument.
And to be more specific, the type of argument I’m looking for should not be of the type:
- “It’s bad because we evolved to think it’s bad”
We evolved to think people from other tribes/races are inferior and/or should be destroyed, so we can’t always use how we evolved to derive morality.
- “It’s bad because life is inherently valuable”
Why is it inherently valuable?
- “It’s bad because of the social contract”
But we still care about animals, and we don’t have a social contract with them.
My apologies, Polerius, but it seemed an awful lot as if you were saying “It took you this many tries to get to the answer I wanted you to get to.”
Which is exactly why I asked the same question you re-asked:
But why exactly do you reject this argument? This is the reality. Yes, human life is meaningless. But I don’t care that it is meaningless.
Our emotions were given to us by evolution. I love my wife because mammals that experience the desire to reproduce tend to reproduce. I love my daughter because mammals that experience the desire to care for their offspring tend to have more surviving offspring. I don’t want to die because organisms that don’t care whether they live or die tend to die, and dead organisms tend not to reproduce. I don’t want my wife and daughter to die because mammals that didn’t care about such things never left any descendants. I don’t want my friends and family to die because humans evolved to be social animals and live in groups, and early humans that didn’t care about such things never left any descendants. I am the result, over millions of years, of the successful reproduction of organisms that wanted to live, wanted to reproduce, wanted to care for their young, and wanted to live in social groups.
I can’t help but want to do the same thing. Even though I know WHY I feel the way I feel, it still doesn’t change the fact that I DO feel the way I feel. As another example: sugar tastes sweet to me, and I like to eat sweet things. Sugar tastes sweet to me because my ancestors evolved to eat fruit; they evolved sugar receptors to enable them to detect sugar in fruit, and they evolved a desire to eat sugary things. I know why I have the desire to eat sweet things. Does that mean that, since there is no real “reason” to crave sweets, I should stop eating sweets?
I have a desire to eat sweet things because I evolved that way. I have a desire to love other people because I evolved that way. I can’t stop craving sweets just because I know evolution gave me that craving, and I can’t stop loving other people just because I know evolution gave me that love.
So there IS no reason for an intelligent robot to have empathy for other sentient beings. There is no reason for an intelligent robot to have any desire to reproduce. There is no reason for an intelligent robot to want to continue its existence. It would only want to do those things if we programmed it to want to do those things. We can’t threaten to destroy an intelligent robot if it harms humans, since the intelligent robot would have no desire to prevent its own destruction. In fact, an intelligent robot with no emotions would have no desire to do ANYTHING.
If we want to create a sentient robot that will act similarly to how humans act, we will have to provide it with emotions in some way. Our emotions were given to us by evolution. That doesn’t mean that we don’t feel them. I don’t kill other people because I’m programmed (by evolution) not to. I have no desire to change my programming, because evolution hasn’t programmed me to want to change my programming. It doesn’t matter whether my innate desires are logical or not; they are my desires, and I have no desire to desire anything else.
I think Polerius has a point here. The fact that we evolved to have certain emotions doesn’t necessarily mean those emotions are “right” or that we should act on them.
We also evolved to feel jealous or lazy at times when we probably shouldn’t, and to stock up on foods rich in salt, sugar, and fat far beyond the point of necessity. We evolved to be promiscuous because it can increase the quantity or genetic quality of our offspring. We evolved to have a strong sex drive in our early teens, and the desire to alter our consciousness with alcohol and drugs.
Most people, I think, would say that we should fight to overcome our built-in desires in at least some of those cases. If we can use logic to convince ourselves not to have an affair, then how do we know we shouldn’t also use it to convince ourselves not to be empathetic?
But drugs and fat and the other things you listed are bad for us in the long run. Empathy isn’t.
Now, what is bad? Well, that’s entirely arbitrary. I don’t think it’s possible to reason your way to bad or good; you just have to pick a good-sounding axiom. Personally, I hold happiness to be good, and all my morals (including the belief that killing is wrong) come from a logical application of that axiom.
Not necessarily. Surely there are people who haven’t been significantly harmed (or made less happy, to use your definition of “good”) by drug use, or cheating, or having sex as teenagers… and surely there are people who have been harmed by acting empathically instead of in their own best interests.
And as you said, how do you define “bad for us”? One might say things that promote the growth of our species are good for us, and assuming the behaviors I listed evolved through natural selection, that would make them good by definition.
If empathy is in fact good for us in some objectively definable way, then there’s no need for an appeal to emotion. You don’t need to say it feels right to act this way if you can prove that it is right.
It appears that what you are looking for is an argument that judges the value of particular values by using values. Do not be surprised if you get some circular discussions. It is unfair to judge values using values. Or at least silly to expect to come to a conclusion other than that the value system you use is the correct one. The closest to objectivity you can have is to analyze according to utility. Whether you reject it or not.
We developed with drives, the means to control those drives, and the means to occasionally circumvent that control. We evolved in groups, and we evolved in a manner that allowed us to exist in groups: with a desire to conform to social norms, to follow rules, and occasionally to not follow those rules. These were selected out of utility. Societal structures developed within the context of those ingrained predispositions and overlaid more complex systems of mores, which we each learn as members of society whether we accept souls or God or Gods or not.
Empathy is a fuzzy feeling that will show up in fuzzy tests designed by fuzzy people. The fact that a murderer who happens not to be a psychopath doesn’t flinch when shown a picture of a person burning to death, about to be crushed by a tank, or being frogmarched into a Michael Moore movie doesn’t prove very much. It’s more of an instinctual reaction than an emotional response the way you describe it, anyway.
Polerius, I’m suggesting that the overall “reason” for meat-machine morality is a combination of many factors, from the empathy response (which applies to animals or whatever else can convince us it is “suffering”) to the probabilities output by our prefrontal cortex when considering our own suffering. That last suggestion was really aimed solely at those who want an answer based on “pure” reason, which I think highlights nicely the absurdity of demanding an explanation for the actions of the temporal lobes (emotion) solely in terms of the actions of the frontal lobes (reasoning) :). Still, I’m glad I may have been of service.
So what do you call that instinctual reaction, and how do you explain its presence in normal subjects and absence in subjects who show an overwhelming correlation with causing harm to other humans or animals?
Polerius, perhaps you will be successful in your quest if you turn your question around. Maybe if you can come up with a strictly logical, person-specific reason why a meat machine without emotions should continue to live, you may find the specific argument you seek for yourself.
Incidentally, I heartily endorse the point that appealing to evolution as the sole justification for moral actions is a fallacy of oversimplification. Great Debaters must be continually reminded that to understand is not to excuse. The particular moral framework we as meat machines erect democratically is effectively a system of conclusions based on axioms which incorporate our neuropsychology, and different machines will prioritise or dismiss different forms of those axioms. Even then, there will always be tricky cases in which (warning: oversimplification imminent!) our frontal lobes’ reasoning comes into conflict with the temporal lobes’ emotions, such as killing a suffering individual (euthanasia) or an individual who we reason will cause great suffering to others in the future (as in war).
So we must continue to encode our outputs on screens (like we’re doing right now) so that we can all share our axioms, reasoning and emotions, and perhaps a meme whose consequence is that less killing happens will flourish. In the words of Pink Floyd and Stephen Hawking, all we can do is keep talking.
SM,
No, it is not oversimplifying; it is restricting the answer to “how”. “Why”, if it exists other than for utility, is beyond the scope of anything other than revealed axiomatic truths.