Two questions on the Traveler's Dilemma (statistics):

Yes, I was bored and reading through Wikipedia. What of it?

So you can read through the Traveler’s Dilemma at your leisure, but here’s the gist:

Two travelers have lost their two exactly identical pieces of luggage. Each is separated and told to state the value of their luggage, from a minimum of $2 up to $100. If they both put down the same amount, that’s what they each get. But if one puts down a lower amount, they both get that lower amount, with the lower bidder getting a $2 bonus and the higher one getting a $2 deduction. The key is that both players in this game value outwitting their opponent as more valuable than any money that could be obtained.
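To pin the rule down, here’s a minimal sketch of the payoff in Python (the function name and layout are mine, not anything from the article):

```python
def payoff(mine, theirs):
    """My payout when I claim `mine` dollars and the other traveler claims `theirs`
    (both whole dollars between 2 and 100)."""
    if mine == theirs:
        return mine                # same claim: we each get that amount
    if mine < theirs:
        return mine + 2            # I claimed less: the lower amount plus the $2 bonus
    return theirs - 2              # I claimed more: the lower amount minus the $2 penalty
```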

OK. Now both could choose $100 and they’d both get $100. But what’s the fun in that? So one should choose $99 and “win” by getting $101. The other one knows this and should choose $98. The first one knows this and should choose $97. Eventually, you go down the rabbit hole and determine that the “optimal” strategy is to say $2, because the other person can’t go any lower.
Or, as Wiki puts it “Remarkably, and, to many, counter-intuitively, the traveler’s optimum choice (in terms of Nash equilibrium) is in fact $2; that is, the traveler values the antiques at the airline manager’s minimum allowed price.”
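Just to watch that unraveling happen mechanically, here’s a rough Python sketch (my own naming, nothing official) that starts at $100 and repeatedly switches to the purely money-maximizing best reply to the opponent’s current claim:

```python
def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

def best_reply(theirs):
    """The claim that maximizes my payout against a known opposing claim."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

claim = 100
trail = [claim]
while best_reply(claim) != claim:
    claim = best_reply(claim)    # each round, undercut the previous claim by $1...
    trail.append(claim)
print(trail)                      # ...all the way down: [100, 99, 98, ..., 3, 2]
```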

But here’s my first question: how is that winning? It may be not losing. I mean, congrats! You tied! But you haven’t outwitted your opponent, which was the prime function of this game.
Also, they tried to experiment with this game. As Wiki states: “However, when the game is played experimentally, most participants select the value $100 or a value close to $100, including both those who have not thought through the logic of the decision and those who understand themselves to be making a non-rational choice…These experiments (and others, such as focal points) show that the majority of people do not use purely rational strategies, but the strategies they do use are demonstrably optimal.”

So my second question is this: how is choosing to maximize your own payout irrational or suboptimal? I know I’m fighting the hypothetical here, but telling people that the need to win by just the tiniest amount so outweighs the need for money that they’d give up $100 just to get $2 or $4 runs so counter to most people’s thinking that I don’t know how you could run this experiment.

Well, consider it this way: if you both say $2, then no, you don’t beat the other guy. But, on the other hand, if he’s going to say two bucks, then you’re not going to be able to beat him. All you can do is prevent him from beating you, by also saying two bucks. And if he’s not about to say two bucks, then you can beat him by saying $2.

On the other hand, if you don’t consider beating the other guy as more important than the money you get, then it becomes a non-zero-sum game, and the equilibrium is probably with each person saying $100 and getting $100.

And yes, these people who are ‘playing non-rationally’ in experiments are probably playing rationally to a different objective than the one stated in the dilemma - they want to either maximize the money they get, or combine an expectation of high money with what they think is a reasonable chance of ‘outwitting’ the other player.

I deliberately put the word ‘outwitting’ in quotes, as this is not really a test of wit in my opinion - it’s more like a game of financial chicken, with whoever dares to sacrifice his financial long-term gain more ‘winning.’

I’d just put down $100 and hope the other guy wasn’t an idiot.

Anyway, I don’t see what’s so rational about choosing a MAX benefit of $4 over a possible $100.

This is what it comes down to for me. I’d probably go one level deep and say $99. If I win, I get the benefit of beating the other dude AND I get more than a $100 guess would have paid. Win-win.

My feeling is that there are situations where people can be so heavily invested in winning that they lose sight of all rationality. But this situation isn’t one of them, and even instructing participants to pretend that it is runs so counter to logic that it’s no wonder so many people chose what the experiment deems “non-rational,” even though it probably was the correct decision.

Right. Let’s say it’s a 50/50 chance the other guy goes $100 or $2. If I say $100, my expectation is E = 0.5 x 100 + 0.5 x 0 = $50. If I say $2, my E = 0.5 x 4 + 0.5 x 2 = $3. And even if that weren’t enough to convince me to say $100, I’d take a 50/50 chance at $100 over a guaranteed $2 any day.
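Just to spell out that arithmetic, here’s a quick Python sketch (the 50/50 opponent is only the assumption I made above, and the payoff function is my own shorthand for the rules):

```python
def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

# Assume the other traveler is equally likely to claim $100 or $2.
for my_claim in (100, 2):
    expected = 0.5 * payoff(my_claim, 100) + 0.5 * payoff(my_claim, 2)
    print(my_claim, expected)   # prints 100 50.0 and then 2 3.0
```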

This is where they lose me. You can tell me this, and put me in the experiment, but it doesn’t mean I’m going to play by those rules.

If, in practice, most people will choose a value of $100 or close to $100, then the way to maximise your gain is to choose $100 or close to $100. If you chose $2, then you would “win” in the sense of getting more than the other player, but you would lose the chance of getting $100 or something close to that, like $97. Once you realise that the other player should be thinking the same way, then you should choose $100 or something like $99.

Don’t look at it as “The key is that both players in this game value outwitting their opponent as more valuable than any money that could be obtained.” That’s a misleading way of putting it.

Just think of the players as motivated purely by their own financial rewards, and not caring at all about how the other player does. The result is the same, in that both players choosing $2 is the only Nash equilibrium (i.e., situation in which neither player would want to unilaterally change their strategy after finding out the other player’s strategy).
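If anyone wants to check that by brute force rather than take it on faith, here’s a small Python sketch (entirely my own scratch code, not from the article) that searches for pure-strategy pairs where neither player gains by a unilateral change:

```python
def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

claims = range(2, 101)
equilibria = [
    (a, b)
    for a in claims
    for b in claims
    # neither player can do strictly better by unilaterally changing their own claim
    if all(payoff(a, b) >= payoff(x, b) for x in claims)
    and all(payoff(b, a) >= payoff(y, a) for y in claims)
]
print(equilibria)   # [(2, 2)] -- both players claiming $2 is the only one
```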

So the game should be:
Pick a number between 1 and 100. If you both have the same number, you both get to live. If you have the smaller number, you get $1,000,000 tax-free, and if you have the higher number, you get shot in the head.

That would get you the solution you are looking for.

No, no, no. The article was worded terribly; ignore the whole “The key is that both players in this game value outwitting their opponent as more valuable than any money that could be obtained.” That’s a misleading, bordering on erroneous, way of putting it.

Just think of the players as motivated purely by their own financial rewards, and you’ll get the same result. Both players choosing $2 is the only Nash equilibrium (i.e., situation in which neither player would want to unilaterally change their strategy after finding out the other player’s strategy).

Yeah, I’d come to the conclusion that that line about ‘outwitting is more valuable’ was a stray Wikipedia edit and not consistent with the rest of the discussion on that page.

Now I’m wondering how best to construct a randomizing strategy that would give you the best odds of a payoff even if the other player knows your strategy (but not what random choice you’re going to make based on it.)

Or will the random factor fall out, and choosing $100 all the time (thus guaranteeing yourself $97 against a self-interested opponent who knows your strategy) be more optimal?
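Here’s a rough way to poke at that, under the assumption that the other player can see my strategy, cares only about his own money, and best-responds to it in expectation (the names and the two example mixes below are mine; this is scratch code, not any standard solver):

```python
def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

CLAIMS = range(2, 101)

def my_take_when_strategy_is_known(mix):
    """Expected payout of my mixed strategy `mix` (dict: claim -> probability)
    against a money-maximizing opponent who knows `mix` but not my actual draw."""
    # The opponent picks the claim with the highest expected payout against my mix...
    reply = max(CLAIMS, key=lambda t: sum(p * payoff(t, m) for m, p in mix.items()))
    # ...and my expected payout is then measured against that reply.
    return sum(p * payoff(m, reply) for m, p in mix.items())

print(my_take_when_strategy_is_known({100: 1.0}))          # always $100 -> he says $99 -> I get $97
print(my_take_when_strategy_is_known({100: 0.5, 2: 0.5}))  # this particular mix does worse (50.5)
```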

So much so that I’ve removed it from the Wikipedia article, to which it was a very late, isolated addition, by an anonymous user who likely didn’t quite understand what they were talking about.

Can’t stop thinking about this. :wink:

It seems to me that you actually maximize your own chance of making money by letting the other guy know what you’re doing; otherwise he could psych himself into picking a really low number and torpedoing you. But if you tell him “I’m writing down $100, you can do the same or you can ninja me and leave me with $97, I don’t really care,” and he believes you, he has no reason to go lower than $99.

The cited article has “The key is that both players in this game value outwitting their opponent as more valuable than any money that could be obtained.”

I hope one of you Wikipedia editors deletes that line. It’s at best highly misleading and irrelevant, and close to nonsensical, IMO.

To answer OP’s question #2 … You’re not wrong! By being charitable and assuming the other player might be as well, two humans can do better than the Nash equilibrium. This should come as no surprise to humans of good heart, though it will be thought impossible by right-wingers of the Dog-eat-Dog Market-is-God religion.

It’s interesting to note that Traveler’s Dilemma (which I don’t think is too different from Prisoner’s Dilemma) was invented by Kaushik Basu, now the Chief Economist of the World Bank. He also is noted for a paper “Why, for a class of Bribes, the act of Giving Bribes should be treated as Legal.”

ETA:
@ Indistinguishable – good fix for Wiki.
@ chrisk – communicating with a fellow human in person makes it too easy! Do a random act of kindness and hope a stranger also has good heart!

I think Indistinguishable took care of editing the Wikipedia article.

And in my opinion, it doesn’t take ‘charity’ to beat the Nash equilibrium, just a willingness to share information, to not obsess over beating the other guy, and to not second-guess a big benefit for the sake of a small possible gain.

ETA: sometimes I think it’s more reasonable to hope that people will be selfish but communicative than randomly kind. :wink:

If choosing $100 empirically gets you more money than choosing $2, then it is the rational thing to do. Fuck Nash, we know about the prisoner’s dilemma.

A Nash equilibrium only makes sense if each player knows the decision of the other player. However, the premise of this game is that you don’t know what the other player will decide before you make your decision.

If you take ignorance to the extreme: suppose that the other player has an equal chance (i.e., 1/99) of choosing each number 2 through to 100. Then if you choose 100 (or in fact any number between 93 and 100) your expected return is a bit over $49 – and if you choose 2 your expected return is a bit under $4. So you should choose a high number: even though it’s likely that the other player will get $4 more than you, you are maximising your expected value. It really only makes sense to choose 2 if you want to beat the other player regardless of your own return.
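For anyone who wants to check those figures, here’s a quick Python sketch (the uniform opponent is just the extreme-ignorance assumption described above, and the code is my own):

```python
from fractions import Fraction

def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

CLAIMS = range(2, 101)

def expected_vs_uniform(mine):
    """Expected payout when the other player picks each of 2..100 with probability 1/99."""
    return Fraction(sum(payoff(mine, theirs) for theirs in CLAIMS), len(CLAIMS))

for mine in (2, 92, 93, 100):
    print(mine, float(expected_vs_uniform(mine)))
# 2 -> about 3.98, 92 -> about 48.98, 93 and everything up through 100 -> a bit over 49
```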

This means that the absolute value we end with is unimportant and counts for naught. Only the RELATIVE value that I end up with vs what you end up with counts.

It is not demonstrably optimal. If someone chooses 100 and I choose 2, then I get 4 and they get 0. I beat them by 4, so I win.

Because in the beginning, you said “The key is that both players in this game value outwitting their opponent as more valuable than any money that could be obtained,” and are unable to turn loose of the absolute numbers.

So, that means someone deleted the most important part of the whole puzzle.

Both players choosing $2 is optimum in this sense: It’s the only pair of strategies which are each optimal counter-strategies to each other. You are free to make of that what significance you will.

Proof: Note that, if my opponent picks a value <= L, then it’s of no benefit to me to pick L if I can pick L - 1 instead (it can only make a difference if my opponent happens to pick L or L - 1, and in either of those cases, I’m better off with L - 1, considering the $2 mismatch bonus/penalty). So if L is the largest value my opponent ever picks, I should never pick a value larger than L - 1, if I can avoid it.

That is, unless my opponent’s strategy is to always pick the minimum value of 2, the largest value I ever pick in my counter-strategy should be smaller than the largest value they ever pick.

But if L1 and L2 are the largest values player 1 and player 2 ever pick, we can’t simultaneously have L1 < L2 and L2 < L1. So in equilibrium at least one player must always pick 2, and the best counter to an opponent who always picks 2 is to pick 2 yourself (anything higher leaves you with $0). Thus, the only way to have a Nash equilibrium is for both players to always pick the minimum value of 2.
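A quick sanity check of the first step of that proof, in Python (my own scratch code): against any opposing claim of at most L, picking L - 1 pays at least as much as picking L, and strictly more when the opponent lands on L or L - 1.

```python
def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

for L in range(3, 101):
    for theirs in range(2, L + 1):        # the opponent picks a value <= L
        assert payoff(L - 1, theirs) >= payoff(L, theirs)
        if theirs in (L, L - 1):          # the only cases where the two choices differ
            assert payoff(L - 1, theirs) > payoff(L, theirs)
print("picking L - 1 weakly dominates picking L whenever the opponent stays at or below L")
```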

No, I deleted the most misguided part of the Wikipedia article, whose presence has caused a number of people to misunderstand and misappreciate the actual puzzle. It should never have been there in the first place. The puzzle is about exactly what you would think it was about: people who care about how much money they have, not whether they “outwitted” the other person or not.

I would also disagree with the notion that this is an extension of the prisoner’s dilemma. In that case, snitching is always the optimal strategy, even though it leads to a worse outcome if both people follow it. In this case, the optimal strategy depends on your opponent’s strategy. If your opponent bids $3 then $2 is optimum, but if your opponent bids $90 then $2 is no longer optimum.
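To make that dependence concrete, here’s a tiny Python sketch (my own naming) of the best reply to a few fixed opposing bids:

```python
def payoff(mine, theirs):
    """My payout under the Traveler's Dilemma rule ($2 bonus/penalty on mismatches)."""
    if mine == theirs:
        return mine
    return min(mine, theirs) + (2 if mine < theirs else -2)

def best_reply(theirs):
    """The claim (2..100) that maximizes my payout against a known opposing bid."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

for theirs in (3, 90, 100):
    print(theirs, best_reply(theirs), payoff(best_reply(theirs), theirs))
# vs $3 the best reply is $2 (worth $4); vs $90 it's $89 (worth $91); vs $100 it's $99 (worth $101)
```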

I think if anything this just highlights the weakness of the Nash equilibrium, which in this case basically comes down to saying that, given that the optimal strategy when faced with an idiotic opponent is to be idiotic, the overall optimal strategy is to be an idiot.