The April issue of Discover Magazine had a very interesting article about evolution and moral neuroscience. I just wrote an article on it here; you can read the whole thing, not just what I am snipping here, if you want:
http://www.after-hourz.net/ri/morality.html
Several dilemmas were presented:
Suppose you are walking by a pond and there’s a drowning baby. If you said, “I’ve just paid $200 for these shoes and the water would ruin them, so I won’t save the baby,” what would that make you? Virtually everyone would agree that it would make you an extremely awful, horrible person.
Yet, as Greene says, “there are millions of children around the world in the same situation, where just a little money for medicine or food could save their lives. And yet we don’t consider ourselves monsters for having that dinner rather than giving the money to Oxfam. Why is that?”
Another example is the Trolley Experiment:
“Imagine you’re at the wheel of a trolley and the brakes have failed. You’re approaching a fork in the track at top speed. On the left side five railroad workers are fixing the track. On your right side, there is a single worker. If you do nothing the trolley will bear left and kill the five workers. The only way to save five lives is to take the responsibility for changing the trolley’s path by hitting a switch. Then you will kill one worker. What would you do?”
This seems relatively straightforward. The greater good is to pull the switch and save the five, but let’s look at the situation from a slightly different angle.
This time imagine that you are watching the runaway trolley from a footbridge. “This time there is no fork in the track. Instead, five workers are on it, facing certain death. But you happen to be standing next to a big man. If you sneak up on him and push him off the footbridge, he will fall to his death. Because he is so big, he will stop the trolley. Do you willfully kill one man, or do you allow five people to die?”
Logically, both of these thought experiments should have similar answers. The greater good requires sacrificing one life to save five, but if you poll your friends you will probably find that many more are willing to pull a switch than to sneak up behind a man and push him off a bridge. It is very difficult to explain why what seems right in one scenario can feel so wrong in another with similar parameters. Evolution may hold the key to unraveling this mystery.
What then is the difference here?
As the article suggests:
As I wrote in response: I find this hypothesis very interesting. It may also offer an explanation for the pond example above. In a social species, protecting one another from imminent danger through direct physical contact was probably a normal, routine life experience, and it may have been hardwired into us by millions of years of evolutionary development. Worrying about children overseas or sending money to Oxfam wasn’t.
Also, I wonder if file-sharing can be explained under this paradigm. Would you steal something from a friend? Most of us shun theft and would never go up to a person we don’t know and physically take money out of their wallet. Direct theft such as this was probably shunned by our ancestors, who had a sense of fairness as well. Yet we have no problem using Bittorrent or Kazaa to download (steal) music from these same people. One feels very personal and one doesn’t. One feels very wrong and one doesn’t.
This hypothesis then seems to have some merit, and neuroimaging is stacking up evidence. The article has a bunch of moral conundrums, info on Greene’s method, evidence of fairness in primates, the ultimatum game, and a few other things if you are looking for more information.
Also, I recommend getting and reading the Discover article itself (pp. 60-65).
What do you think of this scenario? I find neuroethics fascinating.
Vinnie