But in the Traveler’s Dilemma, as presented, there is not supposed to be prior agreement, Chronos. The point I was trying to make is that it is the right who assumes the public is (generally) of good heart, and therefore it is not as necessary for government to be as invasive in redistributing wealth.
The whole point is that we do not trust one another to behave charitably, and that’s why we need government to enforce charitable redistribution of resources.
Septimus’s remark, “This should come as no surprise to humans of good heart, though it will be thought impossible by right-wingers of the Dog-eat-Dog Market-is-God religion,” is an unfair jab at the right, because it implies that the high ground for an assumption of charitable behaviour belongs to the left–i.e. it suggests that humans of good heart are on the left, and won’t be surprised.
It is actually the left who, assuming that we dogs will eat one another, want more government and a contractual agreement to redistribute resources.
The only reason there’s not “supposed” to be prior agreement is that if there is, it makes the problem uninteresting by guaranteeing optimal results. The dilemma is interesting precisely because, absent some form of agreement, it contains a mechanism (that admittedly doesn’t always kick in) that would drive outcomes to nearly the worst-case scenario.
Er, why should I expect a linear view across the range of outcomes from $85 to $124 but not across the range of outcomes from $0 to $101? [Minor correction: I said $1 to $102 before, but I was off by one on both endpoints of the achievable range…]
This is also a very different game, since now the bonus more than makes up the difference between even the lowest selection and the highest selection. Its analysis is completely different (it is, essentially, the original misstated version of the game where the bonus matters more than everything else).
If you want to play with the range of outcomes from $85 to $124, each valued less than the one a dollar up, in the spirit of the traveller’s dilemma, we can do so. The players have to select an integer value between 87 and 123, inclusive, and are awarded the lowest selected number of dollars, ± a $2 bonus/penalty for selecting a lower/higher value than their opponent.
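In case it helps to see that rule written out, here is a minimal sketch of the variant's payoff in Python (the function name and defaults are mine, not part of any official statement of the game):

def variant_payoff(my_claim, their_claim, low=87, high=123, bonus=2):
    # My payout: the lower of the two claims, plus the bonus if I undercut, minus it if I overbid.
    assert low <= my_claim <= high and low <= their_claim <= high
    base = min(my_claim, their_claim)
    if my_claim < their_claim:
        return base + bonus
    if my_claim > their_claim:
        return base - bonus
    return base

# Extremes: variant_payoff(123, 87) == 85 and variant_payoff(122, 123) == 124,
# which is where the $85-to-$124 range of outcomes comes from.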
Well, I never made any claims or reasoning involving the relative preferences of 1% chance of $100 vs. a 100% chance of $2. Nothing I’ve said is based on “A 1% chance of $100 is worth less than a sure $2, a 2% chance of $100 is worth the same as a sure $2, a 3% chance of $100 is worth more than a sure $2” type nonsense.
And I never said how I’d actually play the game if you put me in a room with a random schlub tomorrow.
But, sure, all kinds of different models of what my opponent was going to do would lead to different choices on my end.
The interesting thing about the Nash equilibrium, I feel, is not that people would start out playing that way, but that, under certain circumstances, their strategies would morph over time such that they settled down into that equilibrium.
What would happen if you put me in a room with a random schlub tomorrow is that I would not play 100. I might well play 99, using my assessment of how people, currently generally neophytes to this game, are likely to play, but there’s no reason I would ever play 100 against an independent opponent unable to take any future action against me. [Scenarios where I would play 100 include being convinced that my actions causally influence my psychic and imitative opponent to make the same move, and being offered a “Let’s both play 100” pact by an opponent with enough opportunity for us to make contract-violation punishable post-game that I would not wish to violate the contract after agreeing to it and would find it plausible the opponent felt the same way]
Good post, but I just wanted to address this point. I think the problem most people have with the thought experiment in general is this: it isn’t always made explicit that the equilibrium describes the trend over time, after many iterations. Rather, people read it as an indication of what the ‘optimal’ play is in a single game.
Although, honestly, if 2 people played a few thousand iterations - for real money - I’m pretty sure that they’d figure out that they could just keep picking $100 to maximize their gains.
Yes, if it’s the same two people, and they always know it’s the same two: The tough-but-fair strategy works well there. That might not happen, though, if there is a large population of players, and they’re paired at random, without knowing who their opponents are.
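To put a little flesh on that, here’s a rough sketch (my own toy, with made-up strategy names) of why a retaliatory “tough but fair” player makes undercutting a losing proposition when it’s the same opponent every round:

def td_payoff(a, b, bonus=2):
    # Standard Traveler's Dilemma payout for the player claiming a against a claim of b.
    return min(a, b) + (bonus if a < b else -bonus if a > b else 0)

def play(strat_a, strat_b, rounds=1000):
    # Repeated play; each strategy sees only the opponent's previous claim.
    total_a = total_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        total_a += td_payoff(a, b)
        total_b += td_payoff(b, a)
        last_a, last_b = a, b
    return total_a, total_b

def tough_but_fair(their_last):
    # Claim 100, but retaliate with a 2 in any round right after the opponent claimed under 100.
    return 2 if their_last is not None and their_last < 100 else 100

def always_99(their_last):
    return 99

print(play(tough_but_fair, tough_but_fair))  # (100000, 100000): both just collect $100 a round
print(play(tough_but_fair, always_99))       # the chronic undercutter ends up with almost nothing

Against a stubborn undercutter the punishment never lets up, so the undercutter collects $101 total over a thousand rounds instead of the $100,000 he’d have made by just matching at 100. With random, anonymous pairings there’s no way to deliver that punishment, which is why the cooperation might not survive.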
I was reacting to what I understood Septimus to be implying: That the Left has the high ground for an assumption of charitable behaviour, and that humans of good heart would not be found in the domain of right-winger market-as-religion folk. In point of fact, there is some evidence that right wingers are more spontaneously charitable (literally, with real dollars) than those on the left. Cite.
The TD shows us that we need prior agreement, because we do not exhibit charitable behaviour (or we do not assume charitable behaviour will be exhibited).
Since this is not the point of the OP, I’ll give up here on this sidebar.
You can delete it from the wiki page, but you can’t delete it from the OP. So, as long as it’s in there, it means exactly what I said it meant: that absolute numbers mean nothing and only the relative amounts mean anything. Given that, there are only 3 possible outcomes:
I lose by 4
I tie
I win by 4
So, a bid of 2 guarantees that I can only tie or win.
What the absolute numbers are doesn’t count as much as outwitting your opponent or how you finish compared to him.
If the sentence is deleted, then the optimum bid is a high 90’s bid, probably 99, because with anything lower, it will be me who is costing us money. I can only control what I bid, so I don’t want my bid to cost us what we could have had.
With 99, there are 3 possible endings:
101
99
opponent’s bid minus 2
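For what it’s worth, both tallies check out if you grind through the standard $2-to-$100 payoff rule (a quick sketch of mine, nothing from the OP):

def td_payoff(a, b, bonus=2):
    # Payout for the player claiming a against a claim of b.
    return min(a, b) + (bonus if a < b else -bonus if a > b else 0)

# Bidding 2: I never finish behind my opponent (tie at 2 vs 2, otherwise 4 vs 0).
print(all(td_payoff(2, x) >= td_payoff(x, 2) for x in range(2, 101)))   # True

# Bidding 99: the only possible payouts are 101, 99, or the opponent's bid minus 2.
print(sorted({td_payoff(99, x) for x in range(2, 101)}))   # 0 through 96, then 99 and 101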
It depends on what your goal is. Are you trying to get the most amount of money or are you trying to beat the other guy? Indistinguishable is correct that in most games like this, the goal is to get the most amount of money and beating the other guy is only incidental to this goal.
Here’s a game: You and another person are playing. You’re the player in charge. You are offered two choices. You can pick Choice #1 and you get $1000 and your opponent gets $2000. Or you can pick Choice #2 and you get $100 and your opponent gets $50. There’s all the usual stuff about not being able to share the money.
Which choice do you make? Do you sacrifice $900 in order to “win”?
But why is that interesting, when those circumstances are unlikely to actually happen? What makes those circumstances more interesting than figuring out how people actually think about the problem?
Honestly, I feel that the Nash Equilibrium is actually wrong here. It’s like one of those logic puzzles that produces a paradox–the real problem is that the assumptions are wrong. People aren’t going to choose the one possible answer where they are guaranteed to make money. They do not believe it rational to choose to take less money than they could have taken. What you call superrationality is an inherent part of how people actually think about the problem.
To wit: There are three possibilities for my opponent. They will either not know the Nash equilibrium (usually called “irrational”), know it and think it’s optimal (usually called “rational”), or know it and not think it’s optimal (which I’ll call “superrational”). The experimental data shows that both of the first two usually choose a high number, even though the rational players will say they know it’s not optimal. The superrational person therefore has no reason to believe that any other player, whether irrational, rational, or superrational, will choose a low number, and so finds it irrational to choose $2; he can safely choose a high number and get more money.
Since picking high numbers keeps working out, there’s no reason for any of these people to change their choices. The only destabilization is someone who irrationally ignores the risk probabilities and plays only to guard against the worst possible outcome.
I actually find people who try to change the problem so that the Nash equilibrium is optimal even more baffling than people who keep claiming it is optimal when the experimental data shows it isn’t.
A person who picks 100 every time will probably end up with more money than the person who picks 2 every time. Any theory that describes picking 2 as the optimal strategy is therefore flawed. The end.
This very much depends on how their opponent plays. For example, if the opponent always plays 2 or 3 or 4 or 5 or 6, the player who picks 2 will make more money than the player who picks 100 (tying when the opponent picks 6).
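A quick tally of those matchups (my own scratch work, using the standard payoff rule):

def td_payoff(a, b, bonus=2):
    return min(a, b) + (bonus if a < b else -bonus if a > b else 0)

for opp in range(2, 7):
    print(opp, td_payoff(2, opp), td_payoff(100, opp))
# opponent 2:   picking 2 earns $2, picking 100 earns $0
# opponent 3-5: picking 2 earns $4, picking 100 earns $1, $2, $3
# opponent 6:   both earn $4 (the tie)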
But if a player puts down 2, he’s never going to get more than four dollars. He’s setting a low maximum limit on himself.
A game theorist can argue that the rational play is to always pick 2 and pocket your two dollars. But two non-rational players will pick 100 and pocket a hundred dollars each time. If the rational object of the game is to make money, you’re stuck with the paradox that people playing non-rationally beat people playing rationally. Or you have to accept that the game theorist’s definition of what constitutes rational play is flawed.
“Rational” means all sorts of crazy things. It’s not really a helpful word in this context.
Every time you hear an economist or mathematician or game theorist or whatever use the word, you should substitute in the word “zational” instead. We’re talking here about zational behavior. Related, sure, but still a distinct concept. It’s nice to play with zational behavior, because unlike real-world rationality it is easy to define and easy to apply to many situations. Sure, it doesn’t match up to our everyday idea of decision-making, but it can help highlight a few interesting things like the situation that Chronos talks about in post 65:
People might say that this particular situation doesn’t happen, but who knows? Similar situations might indeed happen. Evolutionary theory relies a lot on game theory, and the “players” in that game are dead dumb. They’re genes and they just want to win by reproducing. And if you put those genes into a situation where they’re competing against random other genes, and they don’t have the opportunity to develop relationships and cooperation, we might seriously expect them to evolve into choosing the “two dollar” option. They might actually track toward the equilibrium over time because one single “mean” mutation in the gene pool is going to fuck things up for everybody else if they don’t adapt to be mean themselves. No intelligence here at all, nor rationality in any sense, yet the zational Nash equilibrium could possibly hold.
This isn’t intended to be the last word on decision-making, just an interesting possibility to consider which might apply somewhere, sometime, in some strange situation.
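To make that concrete, here’s a crude toy (entirely my own construction, not anything from evolutionary biology proper): a population of fixed claims, randomly paired each generation, where the worse-scoring player of each pair blindly copies the better-scoring player’s claim, and claims occasionally mutate up or down by a dollar. Nothing in it thinks; it just rewards whatever happened to pay better last round.

import random

def td_payoff(a, b, bonus=2):
    return min(a, b) + (bonus if a < b else -bonus if a > b else 0)

def step(pop, mutation_rate=0.02):
    # Pair everyone at random; in each pair, the lower scorer copies the higher scorer's claim.
    random.shuffle(pop)
    for i in range(0, len(pop) - 1, 2):
        a, b = pop[i], pop[i + 1]
        if td_payoff(a, b) > td_payoff(b, a):
            pop[i + 1] = a
        elif td_payoff(b, a) > td_payoff(a, b):
            pop[i] = b
    # Occasionally a claim mutates by a dollar in either direction (clamped to $2..$100).
    return [min(100, max(2, c + random.choice((-1, 1)))) if random.random() < mutation_rate else c
            for c in pop]

pop = [100] * 100                        # start with everyone claiming $100
for gen in range(1001):
    if gen % 200 == 0:
        print(gen, sum(pop) / len(pop))  # average claim in the population
    pop = step(pop)

Since the lower claim always wins its match under these rules, downward mutations tend to spread and upward mutations get imitated away, so in runs of this toy the average claim ratchets down toward $2 with nothing you could call intelligence anywhere in the loop.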
For what it’s worth, I’ve avoided using the word “rational” this whole thread. I did mention “superrationality” a few times (after having introduced it as jargon; I don’t think there’s risk of confusing it with ordinary language), but the sole time I mentioned “rationality” was to describe a view I didn’t hold.
It’s important to distinguish games like Prisoner’s Dilemma and Traveler’s Dilemma from the iterated versions of those games. With iteration, creatures learn to cooperate. (Or, perfect logicians can deduce that cooperation is “rational.”)
I think it’s generally accepted that early human trade often had the form of “gift exchanges” in which one party to each exchange was behaving “irrationally” when the transaction was viewed in isolation, but rationally when possible future transactions were taken into account.
(Of course a theme of many crime movies – and, I’ll guess, real crime – is that a criminal party will break a pact when he knows it’ll be the final transaction with the other party.)
BigT, what do you mean when you say “Honestly, I feel that the Nash Equilibrium is actually wrong here.”? It’s mathematically indisputable that this game has exactly one Nash equilibrium, and it’s when both players pick $2. It’s also indisputable that this is not the optimal outcome, since there are many outcomes which are clearly superior to it, for both players. The catch is, nobody ever claimed that a Nash equilibrium must necessarily lead to an optimal outcome, and this game is proof that it doesn’t.
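Since the game is finite, you can even brute-force this for pure strategies (a throwaway check of mine, not anything formal):

def td_payoff(a, b, bonus=2):
    return min(a, b) + (bonus if a < b else -bonus if a > b else 0)

claims = range(2, 101)
# A pair (a, b) is a Nash equilibrium if neither player can gain by changing only their own claim.
equilibria = [(a, b) for a in claims for b in claims
              if all(td_payoff(x, b) <= td_payoff(a, b) for x in claims)
              and all(td_payoff(y, a) <= td_payoff(b, a) for y in claims)]
print(equilibria)   # [(2, 2)]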
I agree. There are different situations with different standards of what’s rational.
I mentioned in a previous post a game where you had a choice between two options. One where you got $1000 and your opponent got $2000; the other you got $100 and your opponent got $50. I think most people would feel the rational play in this case would be to take the thousand dollars. You’d rather have a $1000 instead of $100 and you don’t really care that much what your opponent gets. Your absolute gain is more important than your relative gain.
But suppose there’s a different situation. You’re the manager of a baseball team. In an upcoming game you have a choice of two strategies. One will score you ten runs while allowing the opposing team to score fifteen runs; the other will score you two runs while allowing the opposing team to only score one. In this case, most people would feel the second strategy is the rational one. This time, your relative gain is more important than your absolute gain.
AGAIN, you don’t get to make up your own rules…In the OP, it stated very clearly
So, beating the other guy was more valuable than any amount of money you get. It isn’t MY goal. It was part of the question…
Isn’t it called a straw man, to disregard the OP, make up your own statement and argue THAT instead?