Probably not. But it conceivably depends on how we formalize the partner-picking aspect of the game; what happens to a player who picks a partner who doesn’t pick them?
All games must be mutual. That is, if Jack wants to play with Jill, but Jill doesn’t want to play with Jack, there’s no game.
And that’s what I think changes it. As I understand it, in classical iterated versions of the game, mean is dominant because you can backtrack from the last round, when there’s no consequence for playing mean. In this version, however, the fact that you might not be able to find anyone to play with at all if you play meanly should put a check on mean behavior.
I’m not sure–there may still be some way to backtrack a successful mean strategy from the final round of the final game–but I can’t see how that would look.
I haven’t yet thought through the analysis of the game with partner-selecting dynamics, but before doing so, there is perhaps some terminological confusion that would be best to sort out first: a strategy S (laxly) dominates a strategy T if, against every possible combination of the opponents’ strategies, S achieves at least as good an outcome as T. S is dominant if it dominates all other strategies.
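To make the definition concrete: in the one-shot prisoners’ dilemma, defecting does dominate cooperating. Here is a minimal Python sketch of the lax-dominance check, using assumed standard payoffs (3 for mutual cooperation, 1 for mutual defection, 5 for a lone defector, 0 for a lone cooperator) rather than any payoffs specified in this thread:

```python
# One-shot payoff to the row player: payoff[(my_move, opponent_move)].
# Assumed standard values: 3/3 mutual cooperation, 1/1 mutual defection,
# 5 for a lone defector, 0 for a lone cooperator.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
moves = ["C", "D"]

def laxly_dominates(s, t):
    """S laxly dominates T: S does at least as well against every opponent move."""
    return all(payoff[(s, opp)] >= payoff[(t, opp)] for opp in moves)

print(laxly_dominates("D", "C"))  # True: defection dominates cooperation
print(laxly_dominates("C", "D"))  # False
```

(Here the dominance is in fact strict: D does strictly better against both opponent moves. The iterated game is where this breaks down, as below.)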
In the N-game prisoners’ dilemma, for large enough N (depending on the exact point structure), “always defect” is not dominant, not even in this lax sense. For example, if your opponent’s strategy was “On the first move, cooperate; on all subsequent moves, repeat what the other guy did on the first move”, then defecting every time is much worse than cooperating every time.
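A quick simulation makes that example concrete. This is a sketch with assumed standard payoffs (3/3 for mutual cooperation, 1/1 for mutual defection, 5 and 0 for a lone defector and lone cooperator); the strategy names are mine:

```python
def play(strat_a, strat_b, rounds):
    """Score two strategies against each other; each strategy maps the
    opponent's history (a list of "C"/"D") to its next move."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = payoff[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

always_defect = lambda opp: "D"
always_cooperate = lambda opp: "C"
# Cooperate on the first move; afterwards, repeat the opponent's FIRST move.
first_move_mirror = lambda opp: "C" if not opp else opp[0]

print(play(always_defect, first_move_mirror, 10))     # (14, 9)
print(play(always_cooperate, first_move_mirror, 10))  # (30, 30)
```

Over ten rounds, always-defect grabs 5 points on round one and is then stuck at 1 per round, for 14 total, while always-cooperate earns a steady 3 per round, for 30; so neither one dominates the other.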
However, despite that, both players picking the strategy of “defect all the time” is the unique Nash equilibrium. (A Nash equilibrium is a selection of strategies for each player such that no player can improve their outcome by changing their own strategy unilaterally while all opponents’ strategies remain the same). This is where the backtracking reasoning comes in; for any strategy S, the strategy “Play like S except on the last round, where you always defect” laxly dominates it, and indeed achieves a better outcome against any strategy against which S would instead cooperate on the last round. Accordingly, all the Nash equilibria must involve only strategies which always defect on the last round. Continuing inductively, one establishes that these strategies must also always defect on the penultimate round, and so on. But it’s not that “Always defect” is dominant, in the iterated case; it’s just that having every player use it is the unique Nash equilibrium.
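The Nash equilibrium definition can be checked by brute force in the one-shot game, which is the base case the backward induction rests on. Again a sketch with assumed standard payoffs, not payoffs taken from this thread:

```python
# Brute-force search for Nash equilibria of the ONE-SHOT game.
# payoff[(a, b)] = (player 1's points, player 2's points); assumed values.
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
moves = ["C", "D"]

def is_nash(a, b):
    """Neither player can improve by deviating unilaterally."""
    return (all(payoff[(a2, b)][0] <= payoff[(a, b)][0] for a2 in moves)
            and all(payoff[(a, b2)][1] <= payoff[(a, b)][1] for b2 in moves))

equilibria = [(a, b) for a in moves for b in moves if is_nash(a, b)]
print(equilibria)  # [('D', 'D')]
```

Mutual defection comes out as the unique equilibrium: from any other cell, at least one player gains by switching to D. In the iterated game the same conclusion needs the round-by-round induction described above, since the strategy space is much larger.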
And what is the practical point of finding this Nash equilibrium? The altered game seems to be specifically set up so that that strategy will produce the lowest number of points.
I’m not trying to be accusatory–I’m just curious.
It’s one model for what counts as a stable situation (hence the name “equilibrium”), on the grounds that anywhere else is unstable, as players think “Well, if my opponents are going to play like that, then I should be playing differently, to produce a better outcome for myself”. That having been said, it’s only a model; to the extent that the properties built into its definition are ones one might be interested in for some purpose or another, it’s worth studying, but that’s all.
Note that it needn’t be the case that the Nash equilibria for some game G automatically correspond to the Nash equilibria for an iterated variant of the game G; that just happens to occur in certain special cases (e.g., when game G contains dominant strategies for each player and the iterated variant one is looking at is simply “Play it N times and add up the points” for some fixed N).