Envelopes with money (a new conundrum)

Okay, here’s another one that’s been bugging me for ages.

There are two envelopes with cash. One envelope has twice the amount of cash as the other envelope.

You are given one of the envelopes. You have the option of keeping it (with the money) or trading it for the other envelope.

What should you do?

As far as I can tell, you should trade. The other envelope has a 50% chance of being 1/2 the money and a 50% chance of being twice the money. Thus if I opened up my envelope and it contained $4, then I would have a 50% chance of $2 and a 50% chance of $8, for an expected value of $5 for switching. That’s greater than my current money, so I should switch.

But that would lead me to pick up one envelope, look at the money, put it down, pick up the other one and walk away. Why is that a better strategy than just staying with the first envelope?

Let’s say one envelope contains $X and the other contains $2X. You don’t know what X is. Before you pick an envelope, your expected value is 0.5X + 0.5(2X) = 1.5X. This is of course silly, you’ll never get 1.5X, you’ll only get X or 2X, but the expected value is the mean.

So, now say you have one envelope. You rightly calculate that the expected value if you switch is 1.5X. But note that the expected value if you DON’T switch is also 1.5X (that is, there’s a 50% chance that you have the high one, and a 50% chance that you have the low one.)

In short, the expected value ON THAT ONE TRIAL is always higher than whatever you have, whether you switch or not.

What that really means is that, over many trials, you will probably get the high one half the time and the low one half the time, which is where that mean expected value comes from.

The seeming paradox arises because you are applying the expected value to a single trial… you have to think of it as the expected results after many, many trials.
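
A tiny enumeration makes that concrete. Here is a minimal Python sketch (the $4/$8 pair is just an illustrative assumption): average over the two equally likely first picks, and "keep" and "switch" come out identical.

```python
# The two envelopes hold $4 and $8 (an assumed example pair).
low, high = 4, 8

keep_outcomes = [low, high]       # you walk away with whichever you first picked
switch_outcomes = [high, low]     # you walk away with the other one

print(sum(keep_outcomes) / 2)     # 6.0, i.e. 1.5 * low
print(sum(switch_outcomes) / 2)   # 6.0 again -- switching changes nothing
```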

That clearer?

I think I misstated the paradox. If you don’t look at the contents of the first envelope, it doesn’t matter whether you switch - the expected value of the first envelope is 1.5X, which is also the expected value of the other envelope.

It’s more interesting if you count the money in the first envelope before deciding whether to switch. In that case, you are able to recalculate the expected values. If M is the amount of money in the first envelope, then the expected value of the first envelope is M (obviously), while the expected value of the second envelope is

(0.5M + 2M)/2

which equals 1.25M

But that seems counterintuitive to me…am I overlooking something?

I think the bug is in the probabilities. Different amounts of money do NOT have equal probabilities of being in an envelope.

My point is that we’re lacking some context. Whoever put that money into the envelopes must have a certain limited budget to begin with. I’ve seen this riddle in the scenario of a game show where you’re allowed to choose from two prize envelopes, and while game shows do continually increase their prize money, they are very unlikely to give you, say, a billion dollars. If I found ten million in my envelope, I’d expect the other to contain five rather than twenty million.

What it boils down to is that, depending on the context, there should be some long-term (over many shows) or common-sense expected value independent of X. And if your X is lower than that, you should switch; otherwise, you shouldn’t.

Holger

Good point about what happens if I don’t open the envelope – so let’s assume I do look at the envelope first, then try to decide whether to switch.

While I understand the point about the game show prizes, I’m not sure I agree in this case. That explanation sounds similar to Cecil’s “but the game show host might want to fool you” answer.

Let’s assume you know for a fact that the limit is $100 and you get an envelope with $4 in it. Has that really told you anything about whether you should switch or not? Are you really more likely to be holding the higher amount just because $4 is closer to $100 than $2 is?

The $100 does put an interesting twist on it, however: if the amounts were truly random, with the smaller envelope holding up to $50 and the larger up to $100, then anytime I picked anything over $50 I wouldn’t switch, so it would be a pretty bad deal for the game show.

That said, you didn’t pick $51, you picked $4. The question remains: what should you do?

Let’s restate the problem to remove some red herrings. Since you make your first pick blindly, you aren’t really “choosing” an envelope on your first pick - the envelopes are indistinguishable.

So instead, let’s say that someone (Monty Hall, maybe) offers to play a game with you: He will flip a coin - fairly, of course - and if it comes up heads he will give you $20. If it comes up tails, you must give him $10. Do you want to play that game?

Yes you do, because on average you will make $5. This is equivalent to finding $20 in the first envelope - the second envelope either contains $20 more than the first, or $10 less.

BTW, My doubts about my earlier post have been removed :slight_smile:

Let’s try this from a different perspective.
The problem is that your perception of the probabilities and the actual probabilities are different.

OK, suppose the host puts $4 in one envelope and $8 in the other. Suppose he is doing this 1,000 times, with 1,000 different contestants simultaneously, who cannot communicate with each other, and who do not know the amounts in the envelopes. Each contestant picks one of the two envelopes presented to him/her.

The expected value is $6… that is, about half the people will pick the $4 envelope and half will pick the $8 envelope. No one contestant will actually win $6, but that’s the expected value on average over a large number of trials.

And that’s the reality: if X is the lower amount, the expected value is 1.5X.

Now, suppose the contestant is allowed to open the envelope and switch if he wants. The fact is that the contestant gets no new information by having opened the first envelope. He doesn’t know whether the other envelope contains $8 or $2. The reality: the expected value is still $6.
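
A quick simulation of that setup (a sketch, assuming the amounts are fixed at $4 and $8 and each contestant's pick is a fair coin flip) shows the average payout sitting near $6 whether the contestants blindly switch or not:

```python
import random

def average_payout(contestants=1000, low=4, high=8, always_switch=False):
    """Each contestant picks one of the two envelopes at random, then
    either keeps it or blindly switches to the other."""
    total = 0
    for _ in range(contestants):
        first = random.choice([low, high])
        total += (low + high - first) if always_switch else first
    return total / contestants

print(average_payout(always_switch=False))  # ~6.0
print(average_payout(always_switch=True))   # ~6.0 as well
```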

That’s what makes this problem different from the three-door problem, where the contestant receives new information by being shown what’s behind the second door.

OK, so the contestant, not knowing what amounts are in the envelopes, calculates as Manduck does. Say his envelope contains $4; he calculates 50% chance of $2 and 50% chance of $8, so expected value of $5. This is an erroneous calculation, because the chance of $2 is in fact zero, but the contestant doesn’t know that. Alternately, if his envelope contains $8, he calculates the expected value as 50% chance of $4 and 50% chance of $16, thus $10. Again, he doesn’t know that there is zero chance of $16.

So he calculates an expected value of 1.25 x the amount in his envelope.

Does that mean he should switch? No. It doesn’t matter. His expected value is the same whether he switches or not. The fact that the expected value of the game is slightly higher than what you hold in your hand does NOT, in itself, mean there is an advantage to switching.

Two points:
(a) By not knowing the amounts, the contestant cannot correctly calculate the expected value. He can calculate what he thinks the expected value might be, which is still slightly more than what he finds in the envelope.

(b) The expected value of the outcome is exactly the same, whether he switches envelopes or keeps the one he has. The fact that the expected value appears to be higher than what he sees in the envelope is a red herring.

Manduck’s coin toss example would better be expressed: Suppose that, while the coin is in the air, you can call for a switch of the rules, reversing the heads/tails results. Does that affect your odds? Answer: not at all. The expected value is still the mean of the outcomes.

CDK: Yes, you are right. The doubts I expressed in my first post were well-founded, and I feel kinda stupid now.

The way I understand the problem now is:

There is a 50% chance you chose the ‘right’ envelope with your first pick. Nothing you do afterwards can change that probability, so the other envelope can only represent the other 50%. This makes it the same as the 3-door problem, i.e. there is a 1/3 chance that you picked the right door initially, so the other 2/3 belongs to the last remaining door after the host opens the loser door.

tubby wrote:

Under your assumption of equal probabilities, no. My point is: In game shows, probabilities are not equal. You can expect both very small and very large sums to be unlikely because shows where no-one ever wins would lose their viewers. Thus, if you really held $4, you should switch. You may still lose in the specific case, but in the long run, the strategy will be beneficial. (Though you probably wouldn’t be invited frequently enough to even try a “long run”.) Of course, our model is now heavily customized towards game shows, but where else would you ever be handed two envelopes of money to choose from?

CK’s point about the contestant not knowing the real probabilities is an additional problem that I hadn’t mentioned. Without ANY information, a smart decision is simply impossible. I was merely referring to the probabilities missing in our model, but assumed that if we model them, the contestant can use them.

Holger

Here’s another similar one:
You have a choice of two identical boxes which measure 12 X 12 X 12 inches. Which would you rather have?
A. Box #1, which is full of 10-dollar gold pieces, or
B. Box #2, which is half full of 20-dollar gold pieces.

Depends. Does a well-packed pile of the $20 pieces take up twice the volume of the same number of $10-pieces? If so, obviously take the full box. If the coins are of identical size, it doesn’t matter which box you take. Intermediate cases need to be treated carefully.

Rick

I understand the case where, when presented with two envelopes, one with $X and one with $2X, choosing one will give you 1.5X “probabilistic dollars.” That is, if you were presented with a million opportunities to play this game (hence two million envelopes, one million with $X and one million with $2X) you’d average almost exactly $1.5X per game. Essentially, your decision to swap doesn’t make any difference.
The second case is where you correctly assume that by counting the money in your envelope, the probability of the other envelope containing $0.5X or $2X is equal and 50/50. You then incorrectly assume 1.25X “probabilistic dollars.” This game would be one where you are given an envelope with $X in it, then presented with the option to either take $X or flip a coin (a fair coin, for all you sticklers) and if it’s heads, take $0.5X and if it’s tails take $2X. In that case, you would be correct in assuming 1.25X “probabilistic dollars” by choosing the coin flip but only $X by not choosing the coin flip.
The thing you forget in the incorrect case is that you give up your “current” envelope in the exchange, so although it’s true you could get $0.5X or $2X by trading, you would be giving up either $2X or $0.5X in the trade, so you’d end up with $1.5X in either case.
Since this last example leaves us with real dollars instead of “probabilistic dollars,” it’s better to think in that way because you will actually get $1.5X. :wink:
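
Those two games are easy to tell apart numerically. Here is a short sketch (assuming a $4/$8 pair for the envelope game, and a fair coin that halves or doubles a known $4 in the other game):

```python
import random

TRIALS = 100_000
low, high = 4, 8   # assumed envelope amounts

# Game 1: two envelopes, and you always trade away the one you were handed.
trade_total = sum((low + high) - random.choice([low, high]) for _ in range(TRIALS))

# Game 2: you hold $4 for certain; a fair coin either halves it or doubles it.
x = 4
flip_total = sum(random.choice([0.5 * x, 2 * x]) for _ in range(TRIALS))

print(trade_total / TRIALS)  # ~6.0 = 1.5 * low (the same as never trading)
print(flip_total / TRIALS)   # ~5.0 = 1.25 * x  (this game really is worth more)
```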

As a person working in the field of probability, I would say that The Incredible Holg has the right handle on the analysis. One key is that “probability” is not an automatic property of a situation; it can only be defined relative to a set of assumptions, which must be consistent.

It’s not enough to simply assume that one envelope has twice as much money as the other. A crucial fact is that someone put money in the envelopes and gave them to you, using some sort of basis. You need to start with some assumed “a priori” probabilities of the likelihood of various amounts of money being in the envelopes. You can then calculate new (“a posteriori”) probabilities based on the additional information gained by opening one envelope. This is called a “Bayesian analysis.”

If this were a real situation, you might be able to get some reasonable (subjective) a priori probabilities from your knowledge of who put the money in the envelopes and why and how they did it. After all, people don’t just give us envelopes full of money (unless we’re politicians.)

However, if you were unable to make any assumption of the probability of various amounts prior to opening an envelope, then there is simply no formal answer to the problem.
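
As a concrete sketch of that kind of Bayesian update (the uniform prior below is purely an assumption for illustration): if you see $M, the pair must be either (M/2, M) or (M, 2M), and Bayes’ rule weighs those two cases by their prior probabilities.

```python
from fractions import Fraction

def prob_holding_smaller(m, prior):
    """Posterior probability that the amount m you see is the smaller of the
    pair, given a prior {smaller_amount: weight} and a fair 50/50 choice of
    which envelope you were handed."""
    w_small = prior.get(m, 0)       # pair is (m, 2m) and you drew the smaller
    w_large = prior.get(m / 2, 0)   # pair is (m/2, m) and you drew the larger
    total = w_small + w_large
    return Fraction(w_small, total) if total else None

# Illustrative prior: the smaller amount is $1..$50, all equally likely.
prior = {x: 1 for x in range(1, 51)}
print(prob_holding_smaller(4, prior))   # 1/2 -> at $4, switching gains on average
print(prob_holding_smaller(80, prior))  # 0   -> $80 must be the larger; keep it
```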

Holger:

tubby:

You should switch. In this model, you did gain information by counting the money; you learned that it was well under your a priori limit beyond which you shouldn’t switch.

Say there were lots of trials, where a number X between 1 & 50 (or some larger number) was randomly chosen, then envelopes with $X & $2X prepared & offered; one is selected. Of the trials where a $4 envelope is selected, you can expect the other envelope to have $2 half the time and $8 the other half of the time.
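
A sketch of exactly that experiment (the uniform 1-to-50 range is the assumption above): run many trials, keep only those where the envelope you drew happens to hold $4, and compare keeping with switching on just those trials.

```python
import random

kept, switched = [], []
for _ in range(1_000_000):
    x = random.randint(1, 50)        # smaller amount, chosen uniformly
    pair = (x, 2 * x)
    mine = random.choice(pair)
    if mine == 4:                    # condition on having drawn a $4 envelope
        kept.append(mine)
        switched.append(pair[0] + pair[1] - mine)

print(sum(kept) / len(kept))             # exactly 4
print(sum(switched) / len(switched))     # ~5: the other envelope is $2 or $8, 50/50
```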

I think the original problem might be better understood in terms of gambling odds, rather than all those potential probabilistic dollars that it’s so hard to get them to accept down at K-mart. It’s just another way of looking at the same statistics.

You’re given an envelope with $4 in it. Great, now you can forget about how you got it, and ignore the first envelope completely, because “history” is irrelevant to the odds. You’ve got $4.

You’re shown an envelope. It’s got a 50% chance of having $2 in it, and a 50% chance of $8. If you take it, and it’s the $2 envelope, then you’ve got $2 - a loss of $2 from your original $4. If it’s the $8 envelope, you’ve got $8 - a gain of $4.

So, essentially, you’ve been given the opportunity to bet $2 on a 50% chance of winning $4 (and a 50% chance of losing the $2). That is, a “win” that happens half the time pays you twice your bet. Thus, there’s no statistical advantage to taking or not taking the envelope.

Unfortunately, SSittser is begging the question in assuming:

“You’re shown an envelope. It’s got a 50% chance of having $2 in it, and a 50% chance of $8.”

The essence of this problem is, what are the correct odds of $2 or $8 in the envelope? One cannot simply assume that unknown probabilities are 50-50. In the absence of ANY information, there does not seem to be any right answer.

In the real world, you would always have SOME information. E.g., What do you know about the person who told you about the envelopes? What reason was provided for this odd way of giving money? Do you fully believe the person who told you? It may be that you could create some subjective probabilities based on such considerations.

In forming these probability judgments, different people might reach different conclusions. This is comparable to asking, “What is the probability that the Yankees will win the 1999 World Series?” You and I might judge that different odds were appropriate. Presumably we would have used different reasoning or focused on different facts.

The envelope problem is particularly intractable because there is no normal real-world situation like this. I mean, people don’t give us a choice of envelopes with $2 or $8. So, it’s unclear how to interpret the information provided. Furthermore, there’s no way to test any answer. By comparison, gambling odds are validated by the success of casinos.

In the absence of an assumption of what the odds are or how the person chose to put money into the envelopes, I continue to believe that there is no answer. Or, to put it another way, one can get ANY answer by assuming some process by which the money was originally put into the envelopes.

I believe december is making this more complex than it really is. The original problem states:

That is, it is inescapable that there is a 50% chance of a randomly-chosen envelope being the “twice as much” envelope.

The only point I can see that could possibly be considered debatable is whether the envelope given to you is chosen at random, or chosen in a non-random fashion by someone who knows which envelope contains what. Since the problem is clearly presented as a logical/mathematical puzzle, I think random selection is clearly implied. Otherwise, we are left with the pointless task of attempting to psychoanalyze the “envelope giver”, who is in no way identified or discussed in the problem.

The original problem goes on to suggest that:

That is, the mathematical probabilities are as expected. There is no hint of any attempt to turn this into a psychoanalytical exercise.

So I wasn’t begging the question, but rather restating the original terms of the problem we’re considering.

But if we’re going to change to the psychoanalytical question: Well, then if my mom gives me the envelope, I keep it, and if Bill Clinton gives me the envelope, I trade.

I don’t even want to think about the “Yes, but he’d probably suspect you’d trade, so he’d give you the “win” envelope, but then he knew you’d suspect that he would suspect you’d trade, so he’d give you the “lose” envelope, but he knew that you’d know that he’d know that you’d suspect, so…” question. That was handled sufficiently in “The Princess Bride”.

december also states that:

Statements that are provable mathematically or logically don’t need to be “tested”. However, repeated trials are useful for reassuring us that we haven’t done the math wrong - they back up our deduction with induction. For example, we don’t need to draw and measure a bunch of triangles to prove that the Pythagorean Theorem is true, but doing so may make us feel better about the proof. In the case at hand, it would be a simple matter to run a few trials (or a few million on a computer) and see that envelope traders and envelope keepers end up with the same amount of money in the long run.
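
For what it’s worth, here is one such computer run (a sketch; the range the smaller amount is drawn from is an arbitrary assumption): draw a fresh pair each trial, let one player always keep and another always trade, and compare their long-run averages.

```python
import random

TRIALS = 1_000_000
keeper_total = trader_total = 0.0

for _ in range(TRIALS):
    x = random.uniform(1, 100)       # smaller amount -- arbitrary assumed range
    pair = (x, 2 * x)
    first = random.choice(pair)
    keeper_total += first                         # always keeps the first envelope
    trader_total += pair[0] + pair[1] - first     # always trades for the other

print(keeper_total / TRIALS)   # ~75.75, i.e. 1.5 * (mean smaller amount)
print(trader_total / TRIALS)   # ~75.75 as well, to within sampling noise
```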

It’s worth pointing out that there are two different sorts of gambling odds: those that are purely mathematical, and those that are judgement calls. Odds for a coin toss, a slot machine, a roulette wheel, a lottery, and our envelope problem are of the first sort. They are 100% predictable (assuming that no one is “cheating” - in effect, violating the assumptions of randomness upon which the odds are based). Odds for a horse race, or any athletic contest, or weather (“30% chance of rain”), or the “psychoanalyze the envelope giver” problem are of the second sort. They are very dependent on the analytical skills of the odds-maker, and different people could legitimately come up with different odds.

One last point: logic problems commonly bear a rather tortured relationship to reality (“People from tribe A always tell the truth, but those from tribe B always lie…”, “Given a straight line on an infinite, flat plane…”, etc.), as a way to explicitly limit the problem to questions of logic. The solutions are, of course, applicable to “real life” only if you keep in mind how well, or poorly, the assumptions of the problem fit the actual situations encountered. That doesn’t make them any less “correct” on their own terms.

SSittser’s comment about doing a real-world experiment made me realize something: sometimes designing the experiment makes things clearer, particularly because it forces you to make definitive assumptions.

In this particular case, let’s say the envelopes are limited to $1 and $2 so we can do repeated tests. I assume one of the envelopes is presented to “you” at random, hence we flip a coin to determine which of the envelopes you get. Hence, on average, you will get $1.50 handed to you ($1 * 1/2 + $2 * 1/2 = $1.50).

The corollary to this is that the “provider” had a total of $3 for each run of the experiment; on average they’ll hand out $1.50, so they keep $1.50 on average. This is the key part: whether you swap at random or not, you’ll average $1.50.

Now, if you figure out (or know for sure) that $1 is the smaller of the two amounts, you’ll always swap when you’re holding it and be better off. Any additional information is useful, so any time you have better than 50/50 odds of knowing which is the greater dollar amount, you’ll do better than $1.50 on average.
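
Here is a sketch of that $1/$2 experiment, adding a third player who (hypothetically) knows which amount is the smaller one, to show that only genuine information moves the average:

```python
import random

TRIALS = 100_000

def play(strategy):
    first = random.choice([1, 2])    # coin flip decides which envelope you get
    if strategy == "keep":
        return first
    if strategy == "random_swap":
        return first if random.random() < 0.5 else 3 - first
    if strategy == "informed":       # knows $1 is the smaller amount, so never keeps it
        return 2
    raise ValueError(strategy)

for s in ("keep", "random_swap", "informed"):
    avg = sum(play(s) for _ in range(TRIALS)) / TRIALS
    print(s, round(avg, 2))   # keep ~1.5, random_swap ~1.5, informed = 2.0
```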

Ok, I’m satisfied with this now. :slight_smile: