Is Morality Hardwired by Evolution?

The April issue of Discover Magazine had a very interesting article about evolution and moral neuroscience. I just wrote an article on it here, which you can read in full (and not just the bits I am snipping below) if you want:

http://www.after-hourz.net/ri/morality.html

Several dilemmas were presented:

Suppose you are walking by a pond and there's a drowning baby. If you said, "I've just paid $200 for these shoes and the water would ruin them, so I won't save the baby," what would that make you? Virtually everyone would agree that it would make you an extremely awful, horrible person.

Yet, as Green says, “there are millions of children around the world in the same situation, where just a little money for medicine or food could save their lives. And yet we don’t consider ourselves monsters for having that dinner rather than giving the money to Oxfam. Why is that?”

Another example is the Trolley Experiment:

"Imagine you're at the wheel of a trolley and the brakes have failed. You're approaching a fork in the track at top speed. On the left side, five railroad workers are fixing the track. On your right side, there is a single worker. If you do nothing, the trolley will bear left and kill the five workers. The only way to save five lives is to take responsibility for changing the trolley's path by hitting a switch. Then you will kill one worker. What would you do?"

This seems relatively straightforward. The greater good is to pull the switch and save the five, but let's look at the situation from a slightly different angle.

This time, imagine that you are watching the runaway trolley from a footbridge. "This time there is no fork in the track. Instead, five workers are on it, facing certain death. But you happen to be standing next to a big man. If you sneak up on him and push him off the footbridge, he will fall to his death. Because he is so big, he will stop the trolley. Do you willfully kill one man, or do you allow five people to die?"

Logically, both of these thought experiments should have similar answers. The greater good requires sacrificing one life for the five, but if you poll your friends, you will probably find that many more are willing to pull a switch than to sneak up behind a man and push him off a bridge. It is very difficult to explain why what seems right in one scenario can feel so wrong in another with similar parameters. Evolution may hold the key to unraveling this mystery.

What then is the difference here?

As the article suggests:

As I wrote in response: I find this hypothesis very interesting. It may also offer an explanation for the pond example above. In a social species, protecting one another from imminent danger through direct physical contact was probably a normal, routine life experience, and it may have been hardwired into us by millions of years of evolutionary development. Worrying about children overseas or sending money to Oxfam wasn't.

Also, I wonder if file-sharing can be explained under this paradigm. Would you steal something from a friend? Most of us shun theft and would never go up to a person we don't know and physically take money out of their wallet. Direct theft like this was probably shunned by our ancestors, who had a sense of fairness as well. Yet we have no problem using BitTorrent or Kazaa to download (steal) music from these same people. One feels very personal and one doesn't; one feels very wrong and one doesn't.

This hypothesis seems to have some merit, then, and neuroimaging is stacking up evidence. The article has a bunch of moral conundrums, info on Green's method, evidence of fairness in primates, the ultimatum game, and a few other things if you are looking for more information.

Also, I recommend getting and reading the Discover article itself (pp. 60-65).

What do you think of this scenario? I find neuroethics fascinating.

Vinnie

Are you sure it was this April’s Discover Magazine? I haven’t read it yet, but I read an identical article to the one you describe at least a year ago. Maybe I read it somewhere else.

If morality doesn’t come from evolution, where does it come from? It seems to make infinite sense to me that it would. But more likely it’s a genetic predisposition to a certain morality for humans, sort of stacking the deck. Not a complete hardwire situation such as would be seen in the social insects.

I double-checked, in case the store had accidentally put a year-old magazine on display, but it is in fact the April 2004 edition of Discover Magazine. A big picture of Mercury is on the cover.

Yes, but naturally some theists might dispute this. They might claim morality is primarily bestowed by God at birth through soul reproduction (traducianism), through the indwelling of our souls, or whatever. But I agree with you. Morality must be seen as part of our evolutionary development. Even theists have to recognize this, as primate behavioral studies verify it. As a panentheist (as opposed to a traditional theist), I have no issues with a strictly natural development of morality.

I agree with this as well. This was my final section, which touched briefly on this very issue:

Thanks for the response. I find this issue fascinating.

Vinnie

No, because of the uncertainty of outcomes.

If you switch the levers, the one person might still get out of the way.

If you push the big man, the trolley might still be able to kill the five people.

By switching the levers, you aren’t intentionally killing the one person. You would still be hoping that the person escapes.

Because a rational brain understands that outcomes aren’t guaranteed, these actions aren’t created equal.

Julie

That is correct, but the brain understands highly probable outcomes, and the thought experiments made it clear the outcomes were certain, or if you want, "highly probable."

Suppose the six railroad workers are using high-noise equipment and have their backs to you, with earplugs in. The probability is extremely high that either the one or the five will die. So which do you choose?

The other problem, the big guy being able to stop the trolley, is a bit more fictional, but that can easily be changed, and the way the problem was worded makes it irrelevant anyway, because it's set up so you "know" the outcome. It's a thought experiment.

Vinnie

You may be interested in "The Evolution of the Golden Rule" in the 2/20/04 edition of Science. Subscription required for full text.

Highlights (extensively snipped):

Unfortunately, that's not the way the world works. For one, there are distribution problems, mainly on the part of autocratic foreign regimes that are using starvation as a political tool. Second, the law of supply and demand says that as we try to help the starving, the means to do so will get more expensive until we can no longer afford it.

However, if caveat #1 were not true, and the point of diminishing returns had not yet been reached, we would be monsters in the same sense as in the first example. It is true that we would not have such a visceral reaction, and the cause of that is evolution, but that does not lessen the way I would, in theory, weigh the situation. And many others agree with me.

It's an interesting idea, and one I've heard before. I can think of one objection, though: how would these traits evolve? Obviously a population which is altruistic will be more successful than a population which isn't. But evolution acts on the genetic level. You would imagine selfish behaviour would be the "natural" state. Given that, if a gene appeared which made its bearer more altruistic, it would be selected against, because it actually provides a disadvantage to its bearer.

Having said that, I do think it is very likely that altruism/morality/fairness are traits that have evolved, but I can't see the evolutionary "pathway" by which that might have happened.

I’m sorry, you lost me here. Can you elaborate a bit? Thanks.

Altruism is selected for to the extent that the individual being altruistic shares genes with the beneficiary. Thus you, in an evolutionary sense, "should" be quite altruistic to your children, a little altruistic towards your nephew, and just a smidgen altruistic towards your first cousin. However, at some point after most of this altruism evolved, language came in, and with it a heightened ability to see analogies. One such analogy: the needs of your children are analogous to those of your neighbors, and even to those of your military adversaries.
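This sliding scale is usually formalized as Hamilton's rule: an altruism gene is favored when r * B > C, where r is relatedness, B the benefit to the recipient, and C the cost to the altruist. A minimal sketch in Python, assuming the standard relatedness coefficients; the function name and the benefit/cost numbers are mine, purely for illustration:

```python
# Toy illustration of Hamilton's rule (r * B > C): a gene for altruism is
# favored when relatedness times the recipient's benefit exceeds the
# altruist's cost. Relatedness coefficients below are the standard ones.

RELATEDNESS = {
    "child": 0.5,           # you share half your genes with a child
    "nephew": 0.25,         # a quarter with a nephew or niece
    "first cousin": 0.125,  # an eighth with a first cousin
}

def altruism_favored(relation: str, benefit: float, cost: float) -> bool:
    """True when kin selection favors paying `cost` so `relation` gains `benefit`."""
    return RELATEDNESS[relation] * benefit > cost

# Paying 1 unit of fitness to hand a relative 3 units:
for relation in RELATEDNESS:
    print(relation, altruism_favored(relation, benefit=3.0, cost=1.0))
# child True (0.5 * 3 > 1), nephew False, first cousin False
```

Note how the same benefit-to-cost ratio that favors helping a child fails to favor helping a cousin, which matches the "quite / a little / a smidgen" gradient above.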

If I am right, a million years from now evolution will catch up with language and bring us back to the Hobbesian world of pre-linguistic man. This explains why SETI can't find anyone. The escape from a Hobbesian world is temporary, with all civilizations, at least in this corner of the galaxy, eventually destroying themselves.

As you can see, I am not an optimist.

You seem to have fallen victim to generalization in the following:

These problems exist, and they may keep us from totally eliminating the problem without war, but there are tons of children who can be fed right now just by sending in a few dollars. So this does not get us off the hook. Not to mention that these regimes should be eliminated.

Second, what you are saying is that there is nowhere to draw the line in how much we spend on helping the starving. But some aid is better than no aid, and we all (most of us, anyway) live beyond what we need to. That is the point. We could divert some of our funds, and if everyone did a little, a huge portion of the problem would dissipate.

I am told there is enough food on the planet to feed everyone. That eliminates the problem except for the "autocratic foreign regimes." I mean, kill the space program for a few years and you might save a couple billion dollars that could be spent here.

Vinnie

Even if we were perfectly rational, the example would make no sense.
If we place a value, x, on a human life, then it is obviously not worth spending more than x to save that life. But rationally we also have to place a value on our own lives, our own prestige, and our own success. And I am talking rationally here, not emotionally. The reason a man can afford $200 shoes is that he is successful. Such success comes in part from how one presents oneself to others, as well as from self-esteem. $200 shoes help with both of those.

There is absolutely no value in a man giving up his $200 shoes to save one person if, by doing so, he loses his salary and so becomes unable to provide $10,000 over the next few years. An individual needs to take care of themselves to have any value to others. There is no point in feeding one starving child when that decision will cause ten children to starve.
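In plain arithmetic, the claim is just a comparison of totals. A toy sketch of it, where the only figures taken from the post are the $200 and the $10,000, and the causal premise is the poster's, not an established fact:

```python
# Made-up toy comparison of the two strategies described above:
# a one-time sacrifice versus preserving the income that funds ongoing aid.

one_time_gift = 200        # ruin the shoes once
ongoing_gifts = 10_000     # donations a salaried earner makes over a few years

# If (and only if) the sacrifice really did cost the salary, the
# "selfless" choice would deliver less aid in total.
print(one_time_gift < ongoing_gifts)  # True: the premise drives the conclusion
```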

Now we can of course debate whether $200 shoes generate $200 worth of income. I couldn't tell you; I don't own $200 shoes. But the fact is that the situation is complex. By pampering myself I make myself successful. I make my city successful. I make my nation successful, etc. The US is the biggest provider of foreign aid in the world. It can be this way because it has a consumer-driven capitalist society. There is little point in sacrificing all that aid by shutting down the consumerism.

Can someone find me a country where the wealthy cannot afford $200 shoes and that provides more foreign aid than the US, per capita or per GDP? Maybe that should tell us something.

People may or may not realise intellectually that the issue is complex. Personally, I doubt many people give it much thought. But I do think people realise that if they decide to invest in foreign aid rather than shoes, then logically they can never invest in shoes again: there is an effectively infinite number of starving children. It makes little sense to say that a person should forgo shoes just once in their life, since most people donate far more than $200 to charity over a lifetime.

Compare that to the act of diving into the water to save a child. Yes, it sacrifices the shoes, but it does so in only one instance, for definitive effect. People know that this one definite act need only be carried out once for its benefit. While they may not be able to sacrifice their shoes forever without the overall effect being harmful, they can do so this once without the effect being noticeable. Perhaps if there were a UN Superman fund that would perform such rescues on our behalf, and that we could donate to, there might be some rationale behind the analogy. But there isn't. The only way to prevent this type of tragedy is to sacrifice the shoes, whereas there are other, less harmful ways to prevent a child starving to death than burning one's shoes.

You really should get your hands on that Science article.

What you reference here is "kin selection," and that is certainly part of it. We are most likely to help our children, then our family, then those we perceive as our tribe, who are likely to share common genes.

But there is more to the story, because we live in social groups, and the behaviors we develop then rapidly change the environment that is the social group, providing positive feedback for the tendency toward cooperative behavior. A group that tends to cooperate, with reciprocity and punishment of cheaters, will survive better, and its members will reproduce more. It will become the dominant tribe and garner resources from other tribes. Its beliefs, which led it to thrive, will become codified as a religion with axioms to support it. The idea will be selected for as well as the genetic tendency. The idea can spread to other groups and cause selection of those who can thrive within that environment; those who cannot cooperate get killed off or fail to mate.

Break it down into steps in a more linear fashion (a toy simulation follows these steps):

Next after kin selection is "reciprocal altruism": helping someone unrelated because they will help you, or have helped you in the past. This requires the ability to detect cheaters and, at a minimum, to refuse to help them in the future. (Some believe that such a process of cheater detection, and of attempts to game the system, was a major force in the evolution of the human brain.)

Then comes "indirect reciprocity": helping an unrelated someone who is unlikely to return the favor, because doing so gets you a reputation as someone to be trusted. Witnesses will thus be more likely to help you, knowing that you will be there for them. The behavior is selected for in the individual, and the group benefits, causing the idea to be selected for as a meme.

And "strong reciprocity": the desire to punish cheaters out of a sense of fairness, even if it costs you to do so. Clearly a reputation item for the individual: I don't want to cheat that fellow; I've seen what he does to cheaters. It causes positive selection for the group and for the idea, and then for those who have the ethical and cognitive capacity to thrive in an environment that has strong enforcers.
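For the flavor of how reciprocal altruism plays out, here is a minimal sketch of the iterated prisoner's dilemma with the classic "tit for tat" strategy (cooperate first, then mirror your partner's last move). The payoff numbers are the conventional textbook ones; the strategy and function names are mine, not anything from the thread or the Science article:

```python
# Toy iterated prisoner's dilemma. Payoffs are the conventional ones:
# temptation 5, mutual cooperation 3, mutual defection 1, sucker 0.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_moves):
    """Cooperate on the first round, then copy the partner's last move."""
    return "C" if not their_moves else their_moves[-1]

def always_defect(their_moves):
    """An unrepentant cheater."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []   # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): cheater cut off after round 1
```

The point of the toy run: two reciprocators sustain cooperation indefinitely, while a cheater gets one free defection and is then refused help, which is exactly the "detect cheaters and refuse to help them" step above.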

BTW, "thought experiments" are often limited by their very artificiality. It is like creating a perceptual illusion: interpret with caution. Our ethical processes evolved to function within fairly small social groups. Functioning in huge global societies is a very recent development on an evolutionary scale.

Example two is even more dodgy, IMO. The big problem I see is that it hinges entirely on a human life being the only thing of moral value. Once we accept that human life is not the only thing of moral value, it becomes nonsense.

Let's look at just one other thing that people believe has value: freedom of choice. Many people believe that under many circumstances freedom of choice has a greater value than life.

So we have a train about to run over some people. No one has any choice in this matter. As the driver, I have no choice except in who the train runs over. The train *will* run over someone. The workers have no choice at all; they will or will not die based on events beyond their control. So really, in this scenario, no one has any capacity to apportion freedoms. All the players have only the freedom that circumstances have dictated.

Now look at the second scenario. The fat man has a choice. He can jump in front of the train or not. He can die or not. He has that freedom. But if I push him, I remove that freedom from him.

Few Americans will dispute that liberty has a moral value, and that that value may at times be higher than life. Once we accept that, this example becomes incredibly poor. We cannot in any way conclude that "logically, both of these thought experiments should have similar answers. The greater good requires sacrificing one life for the five." That conclusion is completely illogical until the author can show us how he logically determined that the lives of five railway workers are of greater worth than the liberty *and* the life of one fat man.

Wars have been fought on the premise that such liberty is worth more than the lives of any number of men. It would take large cojones to suggest that it is illogical to oppose tyranny if that opposition will cost lives, yet in essence that is what this example proposes: that the liberty of individuals is worthless compared to the lives of five. That is a proposition I reject logically, emotionally, ethically, and intuitively. It makes no sense on any level, no matter how I consider it.

SteveEisenberg and DSeid

You are both correct. I remembered one part of how evolution operates by gene selection, but forgot another part! Thanks for addressing my objection.

I'm afraid I don't really follow your "evolution catching up with language" theory, though, SE, but that may be a subject for another thread.

The idea that morality has evolved is the thesis of a book, The Origins of Virtue by Matt Ridley, which is well worth a look. It includes a fair bit of game theory, among other things. “Tit for tat” is only the start.

DSeid said

“A group that has a tendency to cooperate with reciprocity and punishment of cheaters will survive better and its members will reproduce more.”

Punishment of cheaters is interesting. The problem of people taking advantage of cooperation by screwing the cooperators over has been given many names: the Prisoner's Dilemma is the two-person version, the Problem of Free Riders is the multi-person version, and the Tragedy of the Commons is an example of the multi-person version.

I remember reading an article (probably in New Scientist, but I can't recall exactly) about an experiment in which a cooperating group was allowed to punish cheats, but punishing the cheats cost each individual more than just letting them get away with it. People tended to punish the cheats anyway, even to their own detriment. The experimenters argued that a tendency to punish cheats, even when it is expensive to do so, has evolved because it promotes a greater degree of cooperation.
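The usual laboratory version of this is a public goods game with costly punishment. A minimal sketch of that general setup, with every parameter made up purely for illustration (I don't know the actual numbers used in the experiment described):

```python
# Toy public goods round with costly punishment. Everyone starts with an
# endowment of 10; contributions to the pot are multiplied and shared
# evenly, so a free rider does best *unless* cooperators pay to fine him.

ENDOWMENT = 10
MULTIPLIER = 1.6   # pot grows by this factor before being split
PUNISH_COST = 1    # what it costs a cooperator to levy one fine
PUNISH_FINE = 4    # what each fine takes from the free rider

def round_payoffs(contributions, punish_free_riders=False):
    share = sum(contributions) * MULTIPLIER / len(contributions)
    # Base payoff: your equal share of the pot plus whatever you kept back.
    payoffs = [share + (ENDOWMENT - c) for c in contributions]
    if punish_free_riders:
        cooperators = [i for i, c in enumerate(contributions) if c > 0]
        for i, c in enumerate(contributions):
            if c == 0:  # every cooperator fines every free rider once
                payoffs[i] -= PUNISH_FINE * len(cooperators)
                for j in cooperators:
                    payoffs[j] -= PUNISH_COST
    return payoffs

print(round_payoffs([10, 10, 10, 0]))        # [12.0, 12.0, 12.0, 22.0]: cheating pays
print(round_payoffs([10, 10, 10, 0], True))  # [11.0, 11.0, 11.0, 10.0]: now it doesn't
```

Note that punishing is individually irrational here (each punisher ends with 11 instead of 12), which is exactly the puzzle the experimenters pointed at: people punish anyway, and the group-level effect is that cheating stops paying.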

This [foregoing discussion] does not, however, suggest to me that morality is hard-wired, i.e., completely biologically deterministic. Even if we reject tabula rasa theories, we do not thereby impose biological determinism. Even if we adopt a materialist metaphysic, we do not impose biological determinism (environment still plays a part). Only if we propose biological determinism are we compelled to accept biological determinism, which I should trust is obvious.

This is because we are considering two forms of selection: one is biological selection, that is, traditional evolution; the other is social selection. I do not see any way to suggest that the two are equivalent, or that either one entails the other (that there is necessarily any strict implication that will hold). I will spend some time here discussing each of the two entailments.

Something I would like to note: evolutionary psychology with respect to morality (herein: evolutionary morality) will [potentially] explain why we have the morals we do. But it cannot be a complete account of morality, because the limitations of finite existence demand that not all permutations are tried. Moral realism, as a philosophical view, attempts to derive optimal conditions under the assumption that such conditions exist (though it cannot, of course, guarantee it will find them). Without this study or attempt, there is no guarantee that we will obtain an optimal morality, or even a functioning morality (complete annihilation is currently possible and will likely never become impossible).

As such, a naturalistic account of morality is not sufficient unless it can account for moral realism. This is not a particularly specious result: those who think about moral questions might, as moral realists assert, have a moral advantage (which is to say the view isn't completely unreasonable; asserting it is still the fallacy of affirming the consequent, something that will often crop up when we consider morality entailing survival). The problem here is that we might begin to require that moral advantage entails biological advantage (i.e., survival). The only way I can see that this is sound is if we define moral advantage as biological advantage, something I have found to be a perilous proposition that in fact runs contrary to the very morals we hold (and hence would require inconsistency in the theory, not the hallmark of profundity), and that in any case begs the very question at issue: whether this is in fact the case.

The reason this is difficult to swallow is that it is not clear how we are to consider each side of the implication

morality -> biological advantage

Morality often involves individual action, while biological advantage in a Darwinian sense never involves individual advantage (it necessarily applies to a population within a species). Accepting the implication, then, entails rule utilitarianism under the condition that what we seek to maximize is survival. Note, however, that we are no longer guaranteed that this form of rule utilitarianism will maximize happiness (which is what utilitarian variants normally shoot for). That is, we cannot rule out that moral perfection entails a sort of rotating slave society where everyone is, on average, unhappy. Happiness and survival might, at the level of populations, be contradictory, meaning we must authorize the extermination of some (decrease the population) or authorize an increase in unhappiness. This is not a result I think most would find acceptable, leaving the burden of proof with those who think it is the case to show how something like this would naturally occur, or to show that the two conditions cannot be contradictory.

It is interesting to note that philosophers have, from time to time, suggested moral systems which do not guarantee to maximize happiness (in which happiness is not a function of morality). However, they were not (to my knowledge) concerned with maximizing survival either, so they deserve only passing mention.

Any practical morality will have to accomplish two things: one, it must serve the purpose it is declared for (maximizing happiness, survival, or whatever); and two, enough people must follow it without undermining it. The second condition is especially important, because moral obligation does not necessarily entail that any particular person will actually act morally. So devising a workable ethic is not a trivial task. Game theory is promising, but it is far from the level of complexity required for the populations we encounter. Also note that the first condition of a practical moral system can impact the second: knowing the "winning" conditions (to stick with a game-theoretic semantics) can affect whether or not we will follow it. For example, if a father deduces that killing his son is the moral option, it is not guaranteed that this man is Abraham. If all fathers were able to deduce this, it is intuitively likely that no father would accept the moral system, even if they were never in the situation where it would be the moral option. Merely knowing that the system requires something in one circumstance might affect whether someone accepts the entire system.

Above I dealt with morality entailing survival, but that is not the only option I mentioned. The other is that survival entails morality; that is, if we are surviving, we are acting morally. This has a very serious flaw, in that intuitively immoral behavior does not entail non-survival (modus tollens). For example, we would intuitively say that it is immoral to steal... but an individual theft, or even widespread theft, does not strictly mean that no one survives (or even that anyone at all dies!). Does this mean we should accept that one possible morality involves mass theft? As a matter of course we haven't accepted that, so how does evolutionary morality explain this anomaly? Also note that this implication is vacuously true when we die; that is, if we fail to survive, we cannot necessarily conclude that we acted immorally. And it is precisely this implication that most people want to assert when they discuss the relationship between morality and survival. This leaves us suggesting that either survival is the case or (exclusive) we are not moral, but not both. That is, it suggests that it is impossible for a moral group to fail to survive. But this is not a practical result: if the sun explodes and annihilates the earth (and so kills everyone), we would be forced to conclude that there is no possible moral behavior at all, which is a really strange result. So obviously the implication cannot be strict.

Eris, this time I find your comments nearly incomprehensible. Care to try again?

From what I can make out, you have some very basic misconceptions about how evolution works, and about what has been said in this thread so far, but it is hard for me to tell from what you have written.

For example

is a nonsensical statement. It is akin to saying that a scale can't have been created to measure weights because it hasn't measured all weights. "Morality" in the sense of this thread is not a fixed universal. It is merely the means by which we decide what we consider right and wrong. It is a tool used to address an infinite variety of permutations, and a tool that has co-evolved at genetic and cultural levels. Now then, if you want to use such a tool to evaluate the moral value of morals (as you proceed to do), then do not be surprised at the logical mishmash you create.

Your comments on biological advantage are likewise a bit confused. Survival of the individual is not the point from the POV of evolution. Nor is happiness. Getting more copies of the gene that predisposes for a trait to the point where they can be reproduced again is. The strategy that accomplishes that goal may or may not include achieving the first two, and it does not require that we be aware of the goal.

Yes, at some point one must also consider the organism under selective pressure to be the culture rather than the individual. Cultures can develop strategies that take advantage of individuals' predispositions in order to spread themselves. And the strategies of cultures can and do change very rapidly compared to the genetic evolutionary scale. Cultures grow by the propagation of their member cells, by the absorption of other groups, and by the spread of their ideas into other groups, albeit often geometrically transformed to new applications.

Cultures can even propagate themselves by creating strategies that take advantage of genetic predispositions to such a degree that the individuals most willing to comply with those conventions are selected out, and yet the culture can develop new strategies to take advantage of whatever biologic predispositions are left, with a speed unimaginable on a human genetic timeframe. Examples available upon request.

Is there a universal of morals? I believe so. Such is an illogical statement of faith on my part, but I believe it nevertheless. I may merely be a prisoner of my own genetic programming, but I am that which I am. And such is the basis of my soft theism. I believe that the intertwined processes of genetic selection and cultural evolution may be how human morality developed the way it did without being why it developed the way it did.

Note the words "complete account" in the passage you quote, DSeid. I am only suggesting that even if we utilize evolutionary [morality], it is not a complete account of morality itself; it can only be an account of what is happening. For example, we cannot suggest that the amphibians that exist or have existed are the complete set of possible amphibians; not all mutations can possibly be tried. So the evolutionary history of amphibians is not a complete account. That's all.

I said as much in my post. My point was that we cannot really conflate morality and evolution as an entailment, because morality often deals with individual choices while evolution never does; evolution always works on populations.

Of course not. But happiness is, to some, a hallmark of morality.

Yes, I don’t intend to suggest otherwise at all. I don’t believe anything in my post could be construed otherwise.

I applaud you for seeing through the weakness in modern retard morality, but it doesn’t take as long as all that.
Besides, there is always psychological eugenics, and the concept of Volk…

Though I don’t suppose anyone with “berg” in their handle is going to think much of National Socialism. :smiley: