Forgive my incredulity, but the Prisoner’s Dilemma as an example of a moral principle that can be derived from evolution? You’re aware of why the word “dilemma” is in the name, yeah? Traditionally, the optimal solution for each prisoner individually is to rat, while the optimal solution for both prisoners jointly is to cooperate (in your first link, incidentally, the payoff matrix is different from others I’ve seen; of course this is simply a facet of context: payoff matrices change). In any event, in a two-player game like the Prisoner’s Dilemma, there are two methods of evaluating the payoff: in terms of the individual players, or in terms of the total payoff they receive together. Each perspective is justifiable up to a point, more or less so depending on how you view the game and how much context is abstracted away.
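To make those two evaluation methods concrete, here’s a minimal sketch using one conventional payoff matrix (temptation 5, mutual cooperation 3, mutual defection 1, sucker’s payoff 0); the numbers are illustrative, not the ones from your link. Evaluated player by player, defecting dominates; evaluated by total payoff, mutual cooperation wins, which is the whole dilemma.

```python
# Illustrative payoff matrix: payoffs[(my_move, their_move)] = (mine, theirs).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ("cooperate", "defect")

# Evaluation 1: each player's individual best reply, whatever the other does.
# Defecting dominates here: 5 > 3 and 1 > 0.
for theirs in moves:
    best = max(moves, key=lambda mine: payoffs[(mine, theirs)][0])
    print(f"If the other prisoner plays {theirs}, my best reply is {best}")

# Evaluation 2: total payoff to both players. Mutual cooperation maximizes
# the sum (3 + 3 = 6), even though neither player reaches it by best replies.
pairs = [(mine, theirs) for mine in moves for theirs in moves]
best_joint = max(pairs, key=lambda pair: sum(payoffs[pair]))
print(f"Best joint outcome: {best_joint}, total payoff {sum(payoffs[best_joint])}")
```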
In the case of iterated PDs, “tit for tat” shows some merit, at least in that it stabilizes outcomes (they become predictable, which, really, is to be expected when all parties involved follow a rule for behavior). But this requires that both parties adopt the same method of evaluation, which, in our thread here, is precisely what is under contention. Furthermore, the character of the game itself is likely to change the behavior. Finite-round play is usually different from infinitely long play (or rather, play of indeterminate length).
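Here’s a rough sketch of the iterated case, under the same illustrative payoffs as above; “tit for tat” and “always defect” below are stand-ins I’ve chosen, and the round count is arbitrary. The point is only that once both parties follow a fixed rule, the run of outcomes settles into a predictable pattern; with a known, finite horizon the incentive to defect in the last round can unravel that, which is the contrast with play of indeterminate length.

```python
payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []       # moves made so far by A and by B
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the other's history
        move_b = strategy_b(history_a)
        gain_a, gain_b = payoffs[(move_a, move_b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): settles into mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
```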
I have not really studied game theory, but my readings have not led me to believe that there are guaranteed optimal solutions to all problems. The problem, of course, is that “optimal” is not necessarily an objective concept itself; it requires accepted standards of evaluation. Game theory, like all mathematical study, tends to abstract away subjective qualities and rely upon core assumptions. This does not mean said subjective qualities (“What is optimal?”) disappear, or that, if a game defines a quality like “optimal”, it will necessarily hold true for all players.
The link between PDs and evolution still escapes me, however.
Quite. This is not to say that something is self-justifying, however; it is simply to say there’s no more support to give.
This would be circular reasoning or tautology. I haven’t argued against the fact that a rational ethical system would declare its own assumptions sound and valid; indeed, they would trivially be so. This is why we follow them: because they’re right.
I remember. I was against your use of the so-called naturalistic fallacy, where morality is equated with natural events or characteristics (in this case, evolution). It isn’t anything else, in the sense that “A is A; it isn’t B”. Morality is morality: it is what we say people should do, or at the very least what I should do. That we may appeal to other sources doesn’t make morality those sources. Even to a cultural relativist, morality is not equated with culture; culture is simply the last (or, if you prefer, first, or perhaps ultimate) thing we may appeal to.