I found the New Scientist article I remembered:
Moral outrage
New Scientist vol 173 issue 2325 - 12 January 2002, page 11
And another relevant article:
Together we are stronger
New Scientist vol 177 issue 2386 - 15 March 2003
Both can be seen online in the archives at www.Newscientist.com, but you have to register. It’s free for seven days, or completely free if you subscribe to the paper magazine.
The experiment they describe is more or less as I summarised. Players in the game were allowed to invest “monetary units” into an investment pool that always returned 1.6 times the investment, but the dividend was shared equally between the players regardless of who invested what.
Equal investments gave equitable returns, but to maximise your own return you could free-ride - invest little or nothing and take your share of the dividend anyway. (With N players, each unit you invest comes back to you as only 1.6/N of a unit, so withholding always pays individually.) When free-riding spreads, the honest investors find their dividends falling below what they put in.
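To make the arithmetic concrete, here’s a minimal sketch of the payoff rule in Python. The 1.6 multiplier is from the article; the group size of four and the 20-unit starting endowment are illustrative assumptions of mine, since the article doesn’t spell those out.

```python
# Payoff arithmetic for the public-goods game described above.
# The 1.6 multiplier is from the article; the group size (4) and
# starting endowment (20 units) are illustrative assumptions.

def payoffs(investments, endowment=20, multiplier=1.6):
    """Each player keeps what they didn't invest, plus an equal
    share of the multiplied investment pool."""
    n = len(investments)
    share = multiplier * sum(investments) / n
    return [endowment - c + share for c in investments]

# Full cooperation: each invested unit returns only 1.6/4 = 0.4 to
# the investor personally, but the group gains 0.6 per unit.
print(payoffs([20, 20, 20, 20]))  # [32.0, 32.0, 32.0, 32.0]

# One free-rider pockets the others' dividend:
print(payoffs([0, 20, 20, 20]))   # [44.0, 24.0, 24.0, 24.0]

# With half the group free-riding, the honest investors' dividend
# (16) drops below the 20 units they each put in:
print(payoffs([0, 0, 20, 20]))    # [36.0, 36.0, 16.0, 16.0]
```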
When played openly and face to face, people tended to play fair and everyone made moderate profits. But played anonymously, cooperation disappeared as people took the opportunity to free-ride.
When the option of punishing the anonymous free-riders was introduced, at a cost to the punisher, cooperation was re-established. Players finding their return smaller than their investment would altruistically punish free-riders rather than relying upon other players to do it. Fear of punishment enforced cooperation.
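The sketch above extends naturally to costly punishment. One caveat: the article doesn’t state the price of punishing, so the “spend 1 unit to dock the target 3” ratio below is my assumption, borrowed from the usual design of these experiments.

```python
# Extending the sketch with costly punishment. The article doesn't
# state the price of punishing, so the "spend 1 unit to dock the
# target 3" ratio here is an assumption, typical of such designs.

def payoffs_with_punishment(investments, punishments,
                            endowment=20, multiplier=1.6,
                            cost=1, fine=3):
    """punishments[i][j] = units player i spends punishing player j.
    Each unit costs the punisher `cost` and docks the target `fine`."""
    n = len(investments)
    share = multiplier * sum(investments) / n
    totals = []
    for i in range(n):
        spent = sum(punishments[i])                          # i pays this
        received = sum(punishments[j][i] for j in range(n))  # fines on i
        totals.append(endowment - investments[i] + share
                      - cost * spent - fine * received)
    return totals

# The three cooperators each spend 3 units punishing the free-rider:
inv = [0, 20, 20, 20]
pun = [[0, 0, 0, 0],
       [3, 0, 0, 0],
       [3, 0, 0, 0],
       [3, 0, 0, 0]]
print(payoffs_with_punishment(inv, pun))
# [17.0, 21.0, 21.0, 21.0] - the cheat now ends up worse off than
# the cooperators, despite punishment costing each of them 3 units.
```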
The experimenters claimed that a sense of “moral outrage” was built into the players. Righteous anger drove players to punish cheats even though it was personally costly to do so. Whether such moral outrage is biological or cultural is debatable. The same kind of game played with primates may yield some answers!
erislover said:
“The only way I can see that this is sound is if we define moral advantage as biological advantage–something that I have found to be a perilous proposition that, in fact, runs contrary to the very morals we hold (and hence would require inconsistency in the theory–not the hallmark of profundity), and in any case begs the question to be answered which is whether this is in fact the case.”
You’ve cut to the heart of the matter. In The Origins of Virtue, Ridley described variations on “Tit for Tat.” The strategy of tit-for-tat was to cooperate with others the first time around, and then do to them what they did to you forever after. In very simple games, it proved to be the optimal strategy.
A weakness in tit-for-tat arises when you allow weighted “misunderstandings” - say, one in ten cooperations is mistaken for a betrayal, but only one in twenty betrayals is mistaken for a cooperation. Tit-for-tat players then become locked into cycles of retaliation that are re-established more often than they are broken.
In that case, “tit for tat” becomes less effective than “forgiving tit-for-tat”, which forgives a betrayal one time in three. This allows misunderstandings to be resolved quickly, so a population of “forgiving tit-for-tat” players will be more successful than one of simple “tit-for-tat”, which is torn apart by vendettas.
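That dynamic is easy to reproduce. Here’s a quick Python simulation of the noisy iterated game: the misreading rates (1 in 10, 1 in 20) and the forgiveness rate (1 in 3) come from the description above, while the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for a betrayal) are the standard Axelrod numbers, assumed here since Ridley’s discussion doesn’t pin them down.

```python
import random

# Iterated prisoner's dilemma with asymmetric "misunderstandings":
# a cooperation is misread as a betrayal 1 time in 10, a betrayal
# misread as a cooperation 1 time in 20. Payoffs are the standard
# Axelrod values (3/1/5/0) - an assumption, not from the text.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def perceive(move):
    """Apply the weighted misreading to an observed move."""
    if move == 'C':
        return 'D' if random.random() < 1/10 else 'C'
    return 'C' if random.random() < 1/20 else 'D'

def tit_for_tat(perceived_last):
    return perceived_last or 'C'   # cooperate first, then copy

def forgiving_tft(perceived_last):
    if perceived_last == 'D' and random.random() < 1/3:
        return 'C'                 # forgive one betrayal in three
    return perceived_last or 'C'

def match(strat_a, strat_b, rounds=1000):
    seen_a = seen_b = None         # what each player thinks the
    score_a = score_b = 0          # other did last round
    for _ in range(rounds):
        a, b = strat_a(seen_a), strat_b(seen_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_a, seen_b = perceive(b), perceive(a)
    return score_a, score_b

random.seed(1)
print("TFT vs TFT:            ", match(tit_for_tat, tit_for_tat))
print("Forgiving vs Forgiving:", match(forgiving_tft, forgiving_tft))
```

Run it a few times and the forgiving pair should land much nearer the all-cooperation maximum of 3 points a round, while the plain tit-for-tat pair loses long stretches to mutual retaliation - the vendetta effect in miniature.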
What I find fascinating about this is that from very simple games, a basic tenet of our morality (forgiveness) can arise as an optimal strategy. It is far from a proof that moral advantage equates to biological advantage, but it is interesting nevertheless.
Debates like these make me wish TVAA hadn’t been banned. He would champion the ultra-materialist view, which kept things lively!