I didn’t say it was a dilemma; I only repeated what the original question was. Everyone else is arguing a different problem than the one that was asked…
What has been discussed for 2 pages is the ORIGINAL traveler’s dilemma, not what the OP asked.
OP here. I can’t fault them. I got my information on what the Traveler’s Dilemma was from reading the Wiki article. The Wiki was incorrect, and it has since been corrected by Indistinguishable. So my questions stemmed from an incorrect understanding of the TD and are no longer relevant.
My problem at this point is that I just don’t believe the TD is a particularly well-constructed economic argument. The argument is that the Nash Equilibrium will lead everyone to the conclusion that the correct game-theory choice is $2. But that’s just not true. It’s setting up a real-world situation where you’ve lost your luggage and you’re trying to get reimbursed for it. Why would you say it’s worth $2? What do you have to gain from that?
Sure, giving a value of $2 is the only 100% certain way to get money. Who cares? If I’m 99% certain the other player will pick $99 or more, I’d be a fool to choose anything but 98 or above.
So the other guy picks 99 and I’m trying to maximize my winnings. Even if I picked 99 and, in retrospect, realized I should have picked 98, I can take great comfort in the fact that I didn’t do something ridiculous and choose $2, which “maximizes” me at a grand $4 and leaves me $95 shy of my “sub-par” bid of 99.
If I’m actually in this situation in real life and the other guy says $2 because it’s “proper game theory,” I’m going to slap him. Then I’m going to ask if his luggage he just lost was worth $4. “Well no, but I wanted to ensure I could get something” he’d respond whereupon I’d slap him a second time.
The Prisoner’s dilemma is really interesting and has practical applications.
The TD is just stupid.
Now it is different and I will stick with my post 67 where I addressed absolute numbers:
With such a huge penalty and such a small reward, there is no reason to try for the extra 2 dollars when losing the undercutting contest drops you to $2 less than whatever lower number the other person picks.
I like the Prisoner’s Dilemma also.
It’s similar to this thread about a dollar auction.
Sure, but this only applies to the case where you are 99% certain the other player will pick $99 or more. That may well describe the world as it is right now, but it doesn’t necessarily describe the world as it always will be.
If you were 99% certain the other player will pick $99, and you were trying to maximize your winnings, why didn’t you pick $98?
It’s not about ensuring you can get something.
If you know the other guy will pick $2, what are you going to pick? Not $100, not $99, and not $98, right? You’ll pick $2 yourself. It will be the right move in that case.
The right move to make depends on what your opponent’s move will be.
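To put a rough number on that point: here’s a quick sketch (mine, nothing official) that scores every bid against a purely illustrative belief, say 99% that the other player bids $99 and 1% that he goes full game-theorist and bids $2, using the standard $2 bonus/penalty rule. Under that assumed belief, $98 is the best response, and the “guaranteed something” $2 bid averages under $4.

```python
# Sketch: expected payoff of each bid against an assumed, purely illustrative
# belief about the other player's bid, using the standard Traveler's Dilemma
# rule: equal bids pay the bid; otherwise the lower bidder gets (low + 2) and
# the higher bidder gets (low - 2).

def payoff(my_bid, other_bid):
    if my_bid == other_bid:
        return my_bid
    low = min(my_bid, other_bid)
    return low + 2 if my_bid == low else low - 2

belief = {99: 0.99, 2: 0.01}  # hypothetical belief, chosen only for illustration

def expected(my_bid):
    return sum(p * payoff(my_bid, other) for other, p in belief.items())

best = max(range(2, 101), key=expected)
print(best, expected(best))   # 98 99.0 -- the best response to this belief
print(expected(2))            # 3.98   -- the "guaranteed something" bid does terribly
```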
The/A story motivating interest in the Nash equilibrium is this:
The game is played many times.
Everyone starts out picking $100.
One enterprising individual realizes: “Wait, if I pick $99, I’ll make more money against all these opponents playing $100 than if I pick $100 as well (or anything else)”.
This strategy is so successful against all these $100-picking players that it begins to spread. Soon, almost everyone is picking $99.
One enterprising individual realizes “Wait, if I pick $98, I’ll make more money against all these opponents playing $99 than if I pick $99 as well (or anything else).”
The strategy is so successful against all these $99-picking players that it begins to spread. Soon, almost everyone is picking $98.
And so on, all the way down until almost everyone is picking $2. At which point everyone realizes “Wait, if I pick any value other than $2, I’ll make less money against all these opponents playing $2 than if I continue to pick $2 as well”. The population becomes evolutionarily stable and no strategy-mutation will spread.
Now, that story doesn’t describe everything. It doesn’t describe a world where people find a way to draw up effective agreement contracts, for example. But it does describe something interesting which we could witness in certain scenarios, which is not just a glib “Well, I want to ensure I get something, no matter what”. If you think the Nash equilibrium is just about “Well, I want to ensure I get something, no matter what”, then you’ve not yet understood the idea.
(Between many anonymous players, in this Darwinian story, mutating occasionally (randomly, if you like, or via intelligent strategization, if you like) and reproducing in accordance with their earnings, in case that wasn’t clear)
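If it helps, here’s a minimal sketch of that cascade, assuming the standard $2 bonus/penalty payoffs: each round, the whole population jumps to the best response against what everyone is currently bidding, and it unravels from $100 straight down to $2, where it sticks.

```python
# Sketch of the cascade in the story above: everyone currently bids `bid`;
# each round the population switches to the best response against itself,
# until no strategy-mutation improves on the status quo.

def payoff(my_bid, other_bid):
    if my_bid == other_bid:
        return my_bid
    low = min(my_bid, other_bid)
    return low + 2 if my_bid == low else low - 2

def best_response(population_bid):
    return max(range(2, 101), key=lambda b: payoff(b, population_bid))

bid, history = 100, [100]
while best_response(bid) != bid:
    bid = best_response(bid)
    history.append(bid)

print(history)  # [100, 99, 98, ..., 2] -- the cascade bottoms out at $2 and stays there
```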
The Nash equilibrium is the same even if the reward is $100M.
Imagine playing that game. How many people would pick $2?
But as an equilibrium, it makes sense. I like Indistinguishable’s evolution analogy.
But imagine if there were a recessive mutation to choose a high number. There could eventually be a sizeable minority of high-guessers, and with random pairs for the game, a fairly high number of truly serious outperformers. So maybe the Nash equilibrium wouldn’t be quite so stable? Yeah, I know, it’s beside the point.
Actually, the Nash equilibrium of the standard Traveller’s Dilemma is $100.
The solution goes as follows:
a) The two players are identical, therefore:
b) The two players will ALWAYS bid the same value, and they know this fact.
c) The highest payoff for identical bids occurs for the maximum bid.
The fundamental problem is that a bid pair of ($100, $99) is as nonsensical as a bid pair of (faster-than-light neutrinos, blue), albeit less obviously so.
A Nash equilibrium does not depend on the players being identical, and it certainly can’t depend on the players knowing they’re identical. To find the Nash equilibrium, each player must consider every move the other player might make. Since both players can make a move other than $100, each player must consider the possibility that the other player might make such a move.
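For the record, that definition can be checked by brute force. Here’s a small sketch (again assuming the standard $2 bonus/penalty payoffs): call a bid pair an equilibrium when neither player can gain by unilaterally changing only their own bid, scan every pair, and ($2, $2) is the only one left standing.

```python
# Sketch: brute-force the Nash condition over every pure bid pair. A pair
# (a, b) is an equilibrium when neither player can do better by unilaterally
# changing only their own bid.

def payoff(my_bid, other_bid):
    if my_bid == other_bid:
        return my_bid
    low = min(my_bid, other_bid)
    return low + 2 if my_bid == low else low - 2

BIDS = range(2, 101)

equilibria = [
    (a, b)
    for a in BIDS
    for b in BIDS
    if all(payoff(x, b) <= payoff(a, b) for x in BIDS)   # no profitable deviation for A
    and all(payoff(y, a) <= payoff(b, a) for y in BIDS)  # no profitable deviation for B
]
print(equilibria)  # [(2, 2)] -- the only pure-strategy Nash equilibrium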
That may be how Nash equilibrium is defined, but it’s not the most rational choice, assuming one wants to maximize one’s gains.
If A picks $2 and B picks $100, A gets $4 and B gets $0.
A will feel like an idiot for not picking $100.
B will think A is a jerk.
Total winnings: $4
If both pick $2, they both get $2.
They will both think they are both jerks.
Total winnings: $4
If both pick $100, they both get $100.
Total winnings: $200
This is the best outcome for the both of them.
If A picks $99 and B picks $100, A gets $101 and B gets $97.
Total winnings: $198
A cost B $3 to gain $1 for himself.
A may think he’s clever, but B will think he’s a jerk.
If A picks $98 and B picks $100, A gets $100 and B gets $96.
Total winnings: $196
A cost B $4 to gain $0 for himself.
A may think he’s clever, but B will think he’s a jerk.
If you pick $97 or less, you will definitely make less than if both picked $100. You will feel a little bit like a jerk.
In order for the two of them to get $200 between them, they both have to pick $100. Any pick below that is throwing money away.
The only way you can make $101 is if the other guy picks $100 and you pick $99. If both travelers know this, they will each pick $99 and be out a dollar. Better to pick $100 and possibly get $97 than to try to go under $99. There is no reason to settle for $2 if you can help it, and the only way you can help it is to pick as high as you can.
The most rational choice is for both travelers to pick $100, which requires oneself to pick $100.
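For anyone who wants to double-check the arithmetic in the cases above, here’s a tiny script (standard $2 bonus/penalty payoffs assumed) that reproduces them.

```python
# Quick check of the cases listed above.

def payoff(my_bid, other_bid):
    if my_bid == other_bid:
        return my_bid
    low = min(my_bid, other_bid)
    return low + 2 if my_bid == low else low - 2

for a, b in [(2, 100), (2, 2), (100, 100), (99, 100), (98, 100)]:
    pa, pb = payoff(a, b), payoff(b, a)
    print(f"A bids {a:3}, B bids {b:3}: A gets {pa:3}, B gets {pb:3}, total {pa + pb}")
# A bids   2, B bids 100: A gets   4, B gets   0, total 4
# A bids   2, B bids   2: A gets   2, B gets   2, total 4
# A bids 100, B bids 100: A gets 100, B gets 100, total 200
# A bids  99, B bids 100: A gets 101, B gets  97, total 198
# A bids  98, B bids 100: A gets 100, B gets  96, total 196
```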
To conform with game-puzzle rules, you’d have to assign a dollar cost to “being thought a jerk” before taking it into consideration.
But you’re not wrong. Nash Equilibrium is contrary to Kant’s Categorical Imperative: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”
The problem is: It is Nash’s model (adjusted by thought-a-jerk penalties), not Kant’s, which most closely simulates behavior of players in a marketplace.
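One hypothetical way to formalize the thought-a-jerk adjustment: dock the undercutter an assumed dollar-equivalent penalty J and re-run the unilateral-deviation test. Any penalty over $1 is enough to make ($100, $100) an equilibrium of the adjusted game (though ($2, $2) stays one too). This is purely a sketch; the penalty model and the value J = 5 are made up for illustration.

```python
# Hypothetical sketch only: subtract an assumed "thought a jerk" penalty J from
# whoever undercuts, then test ($100, $100) for profitable unilateral deviations.

BIDS = range(2, 101)

def payoff(my_bid, other_bid, jerk_penalty=0):
    if my_bid == other_bid:
        return my_bid
    low = min(my_bid, other_bid)
    if my_bid == low:
        return low + 2 - jerk_penalty   # undercutting earns the bonus but costs the penalty
    return low - 2

def is_equilibrium(a, b, jerk_penalty=0):
    return (all(payoff(x, b, jerk_penalty) <= payoff(a, b, jerk_penalty) for x in BIDS)
            and all(payoff(y, a, jerk_penalty) <= payoff(b, a, jerk_penalty) for y in BIDS))

print(is_equilibrium(100, 100))                  # False: shaving to $99 still pays
print(is_equilibrium(100, 100, jerk_penalty=5))  # True: a penalty over $1 removes the incentive
# ($2, $2) remains an equilibrium of the adjusted game as well.
```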
The Nash equilibrium works if the real result is simply win-lose, and you’re just playing for more points. Like a game, where the win counts, not the point total.
But given a meaningful difference between making $4 and $97, especially if there is any opportunity for collusion at all–not so much.
But it is interesting.
Again, if you’re going to say that the Nash equilibrium “works”, or “doesn’t work”, you have to define what you mean by those terms. It does exactly what it was designed to do. All that’s left is to complain that that’s not what it should have been designed to do.
The standard statement of the Traveller’s Dilemma involves 2 perfect players in identical situations. If you want non-perfect players, you are going to have to detail the way in which they deviate from perfect play.
Assuming perfect play, let Player A submit a bid of X. Because both A and B are perfect, and the problem is symmetric, A knows that B’s bid is also X. Searching the space of equal bids is trivial: the equilibrium is $100/$100. At no point should the bid pair $100/$99 be considered: it cannot happen. Saying that it should be considered is as silly a statement as demanding that the bid pair pecan pie/toffee should be considered.
This only breaks down if the players aren’t perfect. Even then, it mostly holds if the players say “I’m not sure what he’ll bid, but we are both normal, reasonable people, so whatever happens the other guy’ll bid something pretty close to what I bid.”
I mean what I said. It’s wrong. The entire concept of game theory is that it’s supposed to model what rational actors do. The Nash equilibrium was invented specifically because other models were producing inaccurate results. It was only after the fact, when errors started to come up, that people started saying that the Nash equilibrium wasn’t really supposed to tell you anything. It’s an after-the-fact rationalization of an incorrect theory. People who deviate from what the rules say are deemed irrational, rather than anyone entertaining the idea that maybe the premises are wrong.
Of course the Nash equilibrium actually says that $2 is the best bid. That’s a mathematically defined concept. I didn’t say that $2 was wrong* for the Nash equilibrium. I said that the Nash equilibrium was wrong, because it did not actually model what two rational players would do. A rational player by definition tries to maximize their outcome. That’s what makes them rational.
It may be useful in situations where it produces accurate results, like Newton’s theories are useful in many situations, even though Einstein’s are more accurate. But that doesn’t mean that the Newtonian equations aren’t wrong. And it doesn’t mean that the Nash equilibrium isn’t wrong.
I asked Indistinguishable to tell me what use the Nash equilibrium has when it produces inaccurate results, and got no answer. Do you have one?
*That’s assuming the new post by Kraydak isn’t actually showing that the Nash equilibrium has been misapplied. I interpret his post as saying that Nash is wrong, just like I am, if for different reasons.
OK, now define a behavior for a rational individual that does maximize his profit. For extra credit, make sure that it continues to maximize his profit even when he’s surrounded by other rational individuals.
Story mode:
Player A: Hmm, $100, $100 sounds good. I get $100.
Player A: But wait, I could bid $99! But if I did that, then Player B, also a perfect (or rational, same difference) player in the same situation, will also bid $99. We’d both get $99. Hmmm, $99<$100. I guess $100 it is.
Non-story mode:
Player A: Hmm, I’m exactly the same as Player B, and we are both perfect/rational. That means that, whatever happens, I’ll get the same final payout as Player B. The highest payout in that circumstance is 100. Done.
I admit to having misspoken a bit earlier. Nash equilibrium strictly defined doesn’t even apply to the Traveller’s Dilemma, because it requires a player to be able to change his bid unilaterally. Fortunately/unfortunately, symmetry + perfect play means that you cannot change your bid unilaterally.
This is true of all symmetric + perfect-play games, btw: the Nash equilibrium is non-applicable. Instead, in those games perfect players know that they will get the average reward, so they always choose the option that gives the highest average reward. Perfectly selfish behavior leading to maximum social benefit. Win! But also boring.
No, don’t assume that your opponent is using the same strategy as you. Assume that he is using a particular strategy. As in, “Bidding $100 is the perfect strategy. But if my opponents are bidding $100, and I bid $99, then I will do better than if I bid $100. So if I assume that my opponents will bid $100, then I will bid $99.”
But then he isn’t a rational player.
In a symmetric game, perfect play always means that everyone arrives at the same answer (they don’t have to arrive at it for the same reason, but they do have to arrive at it). If two players give different answers in a symmetric game, then at least one of them must be wrong (i.e. not perfect). Surely you agree with this?
In the TD, we have two rational players who believe that their co-player is also rational (if he is insane, who knows what the right strategy should be). Therefore they know that they have to give the same answer. Therefore they know that ($100,$99) is a bid pair that can never happen (regardless of the payoffs). Considering a bid pair that will never happen is not rational.
I imagine Indistinguishable didn’t answer because other posters touched on the same question elsewhere. But it’s an important question, so it can be addressed again directly.
Every theory of human behavior and rationality is wrong. All of them, no exceptions.
A perfectly correct theory of human behavior and rationality would necessarily mean that we have a set of equations that can simulate human-like decisions in any conceivable context. This means that a perfectly correct theory of human behavior is an artificial intelligence in its own right, at least if it’s coded into a computer with enough horsepower to execute the program.
Game theory is wrong? Well, okay. Sure. But we’re not looking for a theory that avoids wrongness. That’s not reasonable at this point in time. We look for theories which are both simple enough that we can work with them, and which can also approximate rightness in certain contexts. They will never be right, they will always be wrong, but they can sneak up on rightness if we think very carefully. These contexts of rightness are necessarily limited. Sometimes the Nash equilibrium does a good job, and sometimes it doesn’t. What’s interesting about the Traveler’s Dilemma is that it is one clear situation where the Nash equilibrium can be terribly far from optimal. A more human, and less mechanical, decision-maker can often get better results.
The more human answer, though, might not always be optimum either. As Indistinguishable and Chronos described above, there are hypothetical contexts where the Nash equilibrium of the Traveler’s Dilemma might be the “right” solution. That is interesting, too. We have to tailor our tools to the situation. Even when the Nash equilibrium is wrong, it can still guide our thoughts and serve as a useful starting point. That’s why people use it.
Kraydak is mistaken.
The Nash equilibrium is $2. Indistinguishable is a mathematician. Chronos is a physicist. I personally won the most gold stars in my fifth grade class. We’re heavyweights on this topic. Or hell, if you don’t trust us, you can read the words from the person who actually devised the Traveller’s Dilemma. He says the same thing. He says two dollars is the “logical” choice.
But you have to understand, words like “logical” or “rational” are essentially meaningless in this context. They’re just another way of saying “Nash equilibrium”. They are not the ultimate expression of optimal play. The entire point of the Traveler’s Dilemma is that it’s one striking case where Nash doesn’t work.
There are some other things in the post that look a bit like evidential decision making, but that’s another kettle of fish. The fact remains that $2 is the Nash equilibrium.