Envelopes with money (a new conundrum)

SSittser writes:

“That is, it is inescapable that there is a 50% chance of a randomly-chosen envelope being the ‘twice as much’ envelope”

This is the nub of the problem, and I agree that it appears plausible. However, consider two points:

  1. The assumption that switching is always 50-50 leads to a contradiction. It would imply that one would get a better result by always choosing to switch envelopes.

On the other hand, it is also clear that the average amount of money in the envelope opened first must be the same as the average amount of money in the other envelope. So, if we always switch, then we should wind up the same on average as if we never switched.

  2. Your probabilities depend on a judgment made by an intelligent human being. The puzzle does not tell us how the donor made his decision. E.g., suppose that he started with two sets of two envelopes. One pair had $2 and $4, the other pair had $4 and $8. He chose one pair at random and gave it to you. In this case, when your first envelope is shown to have $4, it is indeed 50-50 that the other envelope has either $2 or $8. If you switch, your expected value is $5, so you are better off switching.

But, suppose the donor chose a random pair of envelopes from three pairs of envelopes, two with $2 and $4 and one with $4 and $8. Now, when your first envelope shows up with $4, the odds are 2 to 1 that the other envelope has $2 rather than $8. In this case, your expected value by switching is $4, which equals what you have if you keep the first envelope. You would be neutral about switching.

Try a third possibility. Suppose the donor selected a random pair of envelopes from 4 pairs, 3 of which had $2 and $4 and one of which had $4 and $8. Now, if $4 appears, the odds are 3 to 1 that the other envelope has $2. You would lose by switching.

The point is that your decision depends on the procedure that the donor used to create the envelopes originally, and you have not been told how he decided to do it.

This problem cannot be tested or computer-simulated for the same reason. In order to set up a test, you would need to know (or assume) how the donor decided how much money to put into the envelopes.
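To make december’s three scenarios concrete, here is a quick Python sketch (my own illustration, assuming each listed pair of envelopes is a priori equally likely):

```python
def switch_value(pairs, seen):
    """Expected value of the other envelope, given `seen` in the first.
    Each (low, high) pair is a priori equally likely, and either envelope
    of a pair is opened with probability 1/2, so every pair consistent
    with `seen` stays equally likely after the envelope is opened."""
    others = [high if seen == low else low
              for (low, high) in pairs if seen in (low, high)]
    return sum(others) / len(others)

# The three scenarios above, after seeing $4 in the first envelope:
print(switch_value([(2, 4), (4, 8)], 4))                  # 5.0 -> switch
print(switch_value([(2, 4), (2, 4), (4, 8)], 4))          # 4.0 -> indifferent
print(switch_value([(2, 4), (2, 4), (2, 4), (4, 8)], 4))  # 3.5 -> keep
```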

Let’s say we had a thousand sets of these envelopes. Now the thousand envelopes with the higher amount of money have twice as much money in them as the thousand envelopes with the lower amount. (Notice I am not saying that you can distinguish the sets.) Therefore the “bad” envelopes contain on average $X and the “good” envelopes contain $2X. If you picked a bad envelope and switch, you will make an additional $X on average. If you picked a good envelope and switch, you will lose $X on average. That’s why switching balances out as you would expect. On the other hand, if you were just offered the chance to flip a fair coin to decide whether to double or halve your money, then you should definitely go for it.

This was a good problem, though. The paradox really had me stymied for a while.

You know that one envelope has twice as much money as the other. You pick an envelope, look in it, and there’s $4 in it. There is a 50% chance that the other envelope has $2, and a 50% chance it has $8. Yes, your expected result is $5. However, all of this tells you that originally there was a 50% chance of there being $6 in the two envelopes, and a 50% chance of $12, with an expected total of $9 between the two, or an “expected” large envelope of $6 and an “expected” small envelope of $3, for an “average” envelope of $4.50. You’ve already lost $0.50.
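A quick check of that arithmetic (an illustration only):

```python
pairs = [(2, 4), (4, 8)]                 # the two equally likely possibilities
print(sum(a + b for a, b in pairs) / 2)  # expected total: 9.0
print(sum(b for _, b in pairs) / 2)      # "expected" large envelope: 6.0
print(sum(a for a, _ in pairs) / 2)      # "expected" small envelope: 3.0
print(sum(a + b for a, b in pairs) / 4)  # "average" envelope: 4.5
```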

The problem here is that everyone is trying to compare the values as 2x and 1/2x. Really, you should be looking at x vs. x+x. You don’t know if you have x dollars or 2x dollars. By trading, you have a 50% chance of getting an additional x dollars, and a 50% chance of losing x dollars. Net value: 0.

If you don’t believe me, set up a spreadsheet. The first column has a random number from 1-100. The second column has twice the first. The third column randomly picks from the two. The fourth column takes whichever one the third one doesn’t. Make a few hundred rows of this, totaling up the third and fourth columns. Compare the two; they should be within a couple of percentage points of each other. Save the results. Re-calculate. Save the results. Re-calculate a few hundred more times. Save the results. Total up the totals in the results. The two grand totals should be within a few hundred-thousandths of a percent of each other.

Congratulations! You just did your first Actuarial work - a Monte Carlo simulation.
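For anyone without a spreadsheet handy, here is a minimal Python version of the same experiment (the 1-100 range and the row and run counts are just the assumptions suggested above):

```python
import random

def trial(rows=500):
    """One spreadsheet's worth of rows; returns (kept total, traded total)."""
    kept = traded = 0
    for _ in range(rows):
        low = random.randint(1, 100)                 # column 1
        high = 2 * low                               # column 2
        pick, other = random.sample([low, high], 2)  # columns 3 and 4
        kept += pick
        traded += other
    return kept, traded

# Re-calculate a few hundred times and total up the totals:
grand_kept = grand_traded = 0
for _ in range(300):
    kept, traded = trial()
    grand_kept += kept
    grand_traded += traded
print(grand_kept, grand_traded)  # the two grand totals come out nearly equal
```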


“Outside of a dog, a book is a man’s best friend.
Inside of a dog, it’s too dark to read.”

Groucho Marx

A riddle from childhood:

Question: What has two horns, four wheels, says “Moo”, and gives milk?

Answer: A cow. I lied about the wheels.

The point is that we must generally rely on the conditions stated in a problem when we try to solve the problem. The cow joke is funny (well, at least I think it’s funny) because it’s an “unfair” question that violates that assumption - that’s what makes it a joke, rather than a true riddle or problem-solving exercise.

I believe december is adding theoretical conditions that are not in the original problem. This creates interesting, new problems (I particularly like the twist about choosing envelope pairs from sets of pairs), but they are not the original problem.

Essentially, you can create new problems by saying “but what if <new condition>?”. For example:

  • But what if you knew how much money the envelope-stuffer had to begin with? (Then, by looking at the amount in the envelope I was given, I could figure out which I had.)

  • But what if the envelope giver threatened to shoot you if you lose? (Can I give both envelopes back? The only winning move is not to play…)

  • But what if the money was all in pennies, and you were allowed to weigh both envelopes? (Obviously an easy win.)

  • But what if you had played this game with the envelope giver 100 times before, and found that 85% of the time the giver handed you the “win” envelope? (You might reasonably be inclined to never trade, in hopes that this is a bias of the giver that’s likely to continue.)

Etc. Giving me more information than the original problem contained (such as more information about how the envelopes were chosen, and out of what pool of possible envelopes) changes both the problem and the solution.

So, to solve the original problem it’s not necessary to say, “but what if I knew more about how the envelopes were chosen?” You don’t know. You’re not told. You’re handed an envelope. What do you do?

Of course in “real life” the problem’s assumptions (such as random selection) might not apply. The classic story “The Lady, or the Tiger?” (by Frank Stockton) is similar to the envelope problem, with the crux of the story being that the selection is most definitely NOT random.

PTVroman writes: “You know that one envelope has twice as much money as the other. You pick an envelope, look in it, and there’s $4 in it. There is a 50% chance that the other envelope has $2, and a 50% chance it has $8.”

SSittser writes: "So, to solve the original problem it’s not necessary to say,
‘but what if I knew more about how the envelopes were chosen?’ You don’t know. You’re not told. You’re handed an envelope. What do you do?’

There are two key factors in the odds of switching:
a. the donor decided how much to put in the envelopes;
b. you chose a random envelope.

We know that the donor put $2 and $4 in the envelopes or $4 and $8. We are not told how he made this decision. Can we focus on (b) only and conclude that the odds are 50-50? Or must we say that the odds cannot be determined unless we know something about (a)? Is it OK to ignore a relevant factor because we lack information about it?

Try a parallel problem: What are the odds that the 1999 Super Bowl champions will repeat in Y2K, given that 20% of past Super Bowl champions have won in the following year? If you were ignorant of football and didn’t know that John Elway had retired from the Denver Broncos, your answer might be 20%. On a college exam, this might be the correct answer to the problem as stated. Maybe we are supposed to assume that the propounder of the problem has implicitly guaranteed that all unmentioned factors should be ignored.

However, you would be foolish to bet real money if your analysis had omitted some relevant factors. You’d save money by admitting that you don’t know enough to set proper odds.

This brings us to a deep philosophical question: How do you define the concept of probability when addressing a unique, non-repeatable event? For an answer, see The Foundations of Statistics by Leonard J. Savage.

P.S. – I have difficulty sending messages. The first effort fails to go through, then the second effort brings both copies of the message. Any suggestions?

Shit! When I tried to reply, I must have mistyped my password. When I hit “Back,” it erased my entire message! Now I have to type it all over again. :(

This is a really interesting thread. I think CK and Needa did a great job explaining the paradox. I think I finally see december’s point. At first, I thought SSittser was right that december was being deliberately obscurant by ignoring what was clearly an unspoken assumption in the problem, that all factors not mentioned are deemed irrelevant. (I still do, but now I see she has a point.) That’s why, in the liars-and-truth-tellers riddle, for example, you can’t just ask, “What color is my shirt?”

December’s point, though, is that the envelope problem is even more abstracted from reality than it seems. In reality, we couldn’t assume that just because we don’t know everything, only the factors we’re given affect the outcome. The Broncos example was excellent. The only problem is that, in reality, nearly every event is unique and unrepeatable. According to chaos theory, every current in the atmosphere has some slight effect on whether the next coin I toss will land heads or tails (not to mention the chance that someone handed me a double-headed coin). Even though we assume that the effects will balance out in the long run, the chances of this particular coin landing heads up are probably not 50%. Or are they?

Even with the Super Bowl, if you bet on the previous champion at 5:1 odds every year, you’d likely break even. After all, the condition of every player affects the game, but it also affected all the previous games which led to the 20% probability. So in a way, probability theory accounts for all the variation over several events, provided the constants are all accounted for. So if you assume that there is no factor of human nature that would predispose every person to give a set amount, and you assume a set limit, like tubby did, then you could in fact form a prediction for a completely unknown donor. Am I right here?

What did Savage say, december? How can you tell when probability theory applies to a situation? Are all probabilities really empirical? (We know that every time someone has counted the results of tossing an equally weighted coin, the numbers converged on one another; therefore, the probability must be 50%?) Help me out; this is really confusing!

PS–December, I can definitely sympathize with browser problems! ;) Your last post didn’t double, so maybe you’ve got it sorted out. It’s probably ( ;)) your browser showing you a cached image the first time. Instead of resending, just hit “Refresh” next time your post doesn’t show up, and see if it updates it for you.

I can’t believe nobody picked up on this one:

RickG observes that:

While this is true, it doesn’t answer the question. Which box would you choose, given the information that you have? One has to make certain assumptions about what constitutes reasonable behavior on the part of the mint, but I venture to say that only in highly unlikely circumstances would the full box be worth less than the half-full box. In most scenarios it would be worth more. Given the most reasonable assumptions of all, we are talking about a choice between a full box of gold and a half full box. How tough a choice is that?

Alan Smithee wrote: “I thought SSittser was right that december was being deliberately obscurant by ignoring what was clearly an unspoken assumption in the problem, that all factors not mentioned are deemed irrelevant.”

Cecil Adams discusses another problem and includes in his answer the words:
“Which box would you choose, given the information that you have?”

In the 2 envelope problem there are two relevant factors, the donor’s decision of how much money to put in the envelopes, and our randomised choice of an envelope. Following the suggestions above, we would like to answer the problem “given the information that you have.” That is, following the “unspoken assumption in the problem that all factors not mentioned are deemed irrelevant.”

The key to this problem, I believe, is that it is mathematically impossible for the donor’s choice to be irrelevant. Here’s why:

Let’s assume that the donor chose a method of putting cash in the envelopes so that his decision is irrelevant. Specifically, we assume that the donor assigned amounts of money to the envelopes so that after you chose an envelope and opened it, it would always be 50-50 as to whether the other envelope had half that amount or twice that amount, regardless of how much money you found in the envelope.

We will show that this assumption leads to a contradiction. There is no mechanism leading to this result! That’s why the apparent paradox (described in an earlier post) exists.

To put this assumption in Bayesian terms, we are looking for a prior distribution of the original envelopes such that after you choose an envelope at random and open it, it will be 50-50 as to whether the amount in the other envelope is larger or smaller. I am asserting that no such prior distribution exists.

To see why, suppose the envelope you opened had $4. Your assumption says that it should be equally probable as to whether the other envelope has $2 or $8. That happens if you were equally likely to be given (2,4) or (4,8) (where the figures in parentheses indicate the amount of money in the two envelopes). In Bayesian terms, the prior probabilities of (2,4) and (4,8) must have been equal.

For example, you could assume that the donor prepared just two sets of envelopes, (2,4) and (4,8), and chose one of the pairs at random. Then, when you see $4 in the envelope you opened, it would indeed be 50-50 as to whether the other envelope has $2 or $8. So far so good.

However, remember we want our assumption to hold regardless of how much money you found in the envelope. This is where the problem begins.

Once you assume that the donor chose from (2,4) and (4,8), then he might have given you the (4,8) and you might have opened the $8 envelope. So, you must also assume that the prior distribution had equal probabilities for (8,16) and (4,8). E.g., you might assume that the donor chose randomly from 3 sets of envelopes, (2,4), (4,8) and (8,16).

However, by similar reasoning, there would need to be equal probabilities for (16,32), (32,64), etc. Even ignoring the limitation on amounts of money, this argument leads to the conclusion that in the prior distribution a countably infinite number of possibilities would all need equal probability, call it P. But if P > 0, then the total probability would be infinite, whereas it must be one. And if P = 0, then the total probability equals 0, which is also a contradiction.
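In symbols (my own formalization of the argument above): if every pair $(2^k, 2^{k+1})$ for $k = 1, 2, 3, \ldots$ must receive the same prior probability $P$, then

$$\sum_{k=1}^{\infty} \Pr\big[(2^k,\, 2^{k+1})\big] \;=\; \sum_{k=1}^{\infty} P \;=\; \begin{cases} \infty & \text{if } P > 0 \\ 0 & \text{if } P = 0 \end{cases}$$

and neither value can equal the required total probability of 1.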

This contradiction shows that you cannot satisfy the assumption that after you opened your chosen envelope it would always be 50-50 as to whether the other envelope has half that amount or twice that amount, regardless of how much money you found in the envelope.

I must admit that you could satisfy the above assumption by substituting the word “sometimes” for “always.” You could assume, e.g., that the donor chose randomly from (2,4) and (4,8). When you opened the envelope with $4, it was 50-50 as to whether the other envelope had $2 or $8.
But, if you had opened an envelope with $2 or $8, then the odds would NOT have been 50-50.

IMHO, this assumption destroys any interest in the problem. I believe that the problem as stated wanted to include the assumption above, but it cannot be fulfilled.

Finally, I must confess that my “proof” above is not written with perfect mathematical rigor, but I do believe that it is correct.

P.S. It was asked what L. Savage said. He discussed how probability could be defined when the frequentist approach did not apply, say, because one was considering a unique, one-time event rather than repeated trials. One key point of his brilliant book is that a judgmental probability can be consistently defined as personal opinion. The definition of this probability could be deduced in theory from which side a person would take in a series of bets. A person’s subjective probability would obey the normal mathematical laws of probability.

However, there would be no contradiction if two different people assigned different probabilities to the same event.

These answers have been fascinating. I am still forming an opinion as to which one is likely right, but I do have some definite views on which are just red herrings. Let’s knock out a few of them and focus back in on the main point.

1. Red Herring 1: The odds aren’t actually 50/50

This is a red herring, because you are assuming information not in the problem.

If I flip a coin and hide it from you, the chances are in reality 100% that it is heads and 0% that it is tails (or vice versa), not 50% each way. However, if I were to ask you the odds, or to place a bet and repeat the experiment, you would have to assign a 50% probability to either case. Of course, there is more information to be known (if you peeked, for example), which would certainly change the odds.

To bring it back to the envelopes, yes, there is certainly other information you could try to extract that would change the odds. However, the point of the problem as stated is that you only know a single fact: one envelope has twice the money of the other one.

2. Red Herring 2: It doesn’t matter if you switch because the expected value of the first envelope is not x, it is 1.5x

Actually, this isn’t really a red herring as I originally stated the problem – it does address the problem of just picking up an envelope then switching it.

However, if you open the envelope (the equivalent of half-peeking), then the $ in the envelope becomes the base you are comparing to.

Now that you know the value of the envelope is X (whatever it is), then the value of the other envelope really is a 50% chance of being half that (0.5X) and a 50% chance of being double that (2X), which leads us back to the expected value of 1.25X.

3. Red Herring 3: This is somehow related to gold coins in a box

Imagine my pleasure at reading Cecil had responded to my thread. Then imagine my disappointment at finding out he responded to a tangent from 2 weeks ago. Sigh.

My bet on the right answer:

I’m betting PTVRoman is right, but I’m not sure why it doesn’t match the expected value equation. When you first pick the envelope, you either picked x or (x+x). You don’t know which.

Now there are two possible cases. Either you picked $x (50%), and by switching, you will add $x.

In the second case, you picked $(x+x) (50%) and by switching, you will subtract $x. This is the right equation to use, clearly, since it gives the right odds (and SSittser, I think this is where you were going with your “gambling odds” argument).

*Key Remaining Question:*

However, this still doesn’t explain why the expected value function doesn’t seem to work properly in this case. Something about the equation as stated doesn’t accurately reflect the problem, but I’m not quite sure what. It seems straightforward: if $x is the amount in the envelope, then you have either a 50% shot at 0.5x and a 50% shot at 2x, which does work out to 1.25x. What’s wrong with the construction?

P.S.: December, that is an interesting point about 50%/50% chances not being possible, because then all pairs must have been equally probable.

The implications of that don’t make sense, though. Again, just knowing that one envelope has twice the money of the other envelope, the chances may not be 50%/50%, but they’ve got to be pretty close. The reason is this: you are saying that if you receive an envelope with $8 in it, you would be less likely to switch than if you received an envelope with $4 in it, because the $8 is somehow marginally closer to the “upper limit”. However, considering that we don’t know the upper limit, and we can assume we received a reasonable amount of money ($4 and not 5 trillion, for example), the chances are still pretty close to 50/50. Our estimate of the chances of the second envelope being higher would go down the closer the first envelope came to our estimate of “the limit”, but at low values, our estimates wouldn’t deviate significantly enough from 50% to change the answer (i.e., even at 60%/40%, you should still switch).

Thus, even if we grant your notion that we should take into account external factors, they don’t change the problem enough to resolve the apparent paradox.
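On tubby’s key remaining question, one way to see where the 1.25x construction breaks is to pin down a concrete prior and look at the conditional gains (a Python sketch, assuming S, the smaller amount, is uniform on 1-100 as in earlier posts):

```python
import random
from collections import defaultdict

gains = defaultdict(list)  # amount seen in envelope #1 -> gains from switching

for _ in range(400_000):
    s = random.randint(1, 100)               # assumed prior on the smaller amount
    x, other = random.sample([s, 2 * s], 2)  # open one envelope at random
    gains[x].append(other - x)

all_gains = [g for lst in gains.values() for g in lst]
print(sum(all_gains) / len(all_gains))   # ~0: switching is worthless overall

# But conditional on what you SEE, the odds are not always 50-50:
print(sum(gains[4]) / len(gains[4]))      # ~+1: seeing $4, switching gains
print(sum(gains[150]) / len(gains[150]))  # -75: $150 must be the big envelope
```

The 0.5(0.5x) + 0.5(2x) = 1.25x formula silently assumes the 50-50 split holds for every observed x; under any concrete prior it cannot (december’s point), and the conditional losses at large x exactly cancel the conditional gains at small x.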

Tubby writes: " Again, just knowing that one envelope has twice the money of the other envelope, the chances may not be 50%/50%, but they’ve got to be pretty close."

Tubby’s probability assessment is fine, as far as it goes. In fact, there would be no mathematical inconsistency if he believes that when $4 comes up in envelope #1, then the odds of $2 or $8 in envelope #2 are exactly 50-50.

However, what happens when Tubby considers ALL the possible amounts of money that might show up in envelope #1? It is possible to expand his beliefs into a complete set of probabilities, which would cover every amount of money that might be found in envelope #1. Infinitely many systems of probabilities for these envelopes could be created, consistent with the laws of probability.

However, one cannot expand into a system of probabilities in which the chances would ALWAYS be 50-50, regardless of how much money is found in envelope #1. No such system of numbers exists that satisfies the laws of probability. This is GOOD, because it reconciles the apparent contradiction.

The apparent paradox of this problem says that:

  1. You should always switch, since it’s 50-50 and you have a larger amount to gain than to lose.
  2. It’s irrelevant to switch, since Envelope #2 is no different from Envelope #1; you selected one at random.

There’s no contradiction because it’s NOT ALWAYS 50-50. That would be mathematically impossible.

Or, put it another way. Suppose you see $X in envelope #1. If your subjective probability, given $X, is 50-50, then you should switch. But suppose your subjective probability corresponding to $X is that the odds are greater than 2 to 1 that envelope #2 has less money than envelope #1 (perhaps because $X was a very large amount of money). Then you shouldn’t switch. But in the latter case, it’s no longer true that the two envelopes are identical.

BOTTOM LINE – SHOULD YOU SWITCH? Is there a right answer? Are there “true” odds that switching envelopes will produce a greater amount or lesser amount of money? First of all, these probabilities are not objectively measurable quantities, like mass or length. At best, we can describe our own subjective probabilities (see prior post). Each of us may have different subjective probabilities.

So, is there some “natural” set of subjective probabilities? I’d say not, for two reasons.

There’s no mathematical answer. The only “natural” system of probability beliefs would be that the odds are always 50-50 regardless of the amount of money in envelope #1. Since this is mathematically impossible, there’s no canonical way to say what the system of probability beliefs ought to be.

There’s no real-world answer, either. The Monty Hall problem is an abstraction of a real-world situation, but the two-envelope problem isn’t. In the real world, people haven’t given away money this way. So, there are no natural real-world odds either.

First, just for my ego’s sake, I want to point out that I posted the same solution four hours before PTVRoman, according to the time stamps anyway. Second, I’m surprised that confusion has persisted after my and PTVRoman’s solutions should have cleared everything up. I blame the appearance of “Bayesian” arguments, which have as much place in discussing the science of probability as a shaman does on the board of the AMA. The fact is that as long as you are choosing the envelopes randomly, and not being influenced by anyone who has knowledge of what’s inside them, switching or not switching will be irrelevant. On the average, when you choose a good envelope it will contain twice as much money as when you pick a bad envelope. This is true without the need for you to make psychological diagnoses of the donor’s “upper limits”. Therefore, on average, you are risking the same amount by switching or not switching. I admit that you might get confused by looking at a single trial, but you have to look at a whole series to make any meaningful statement about probability.

Mr. Charles, my apologies, you were in fact several hours early with the right way to think about the problem.

However, I’m not entirely certain I agree with the analogy between Bayesian expected value analysis and shamanism. It’s simply a mathematical way of calculating chains of events. I’m just not sure why it falls apart here (and theoretically, it shouldn’t, so clearly there is some trick in the problem’s construction which we have as yet been unable to tease out).

Finally, december, as far as I can tell, your answer is “when faced with any situation involving money, the answer is unknowable”. Given that you know ahead of time that one envelope contains twice the money of the other one, I am not sure how you can claim that chances aren’t exactly 50% that you picked the lower one as opposed to the higher one. The information about the amount of money in the envelope shouldn’t change your estimate of those chances if, as you claim, that is actually no new information given the unknown set of possible envelope pairs.

I think we can conclude that we now officially know how the problem should be approached (the x and x+x model discussed by Greg Charles, PTVRoman, and SSittser). Now, how can we tease out the false construction in the expected value equation?

For decades a “civil war” has raged between Bayesian and Classical statisticians. Judging from Greg Charles’s latest post, he may be a Classical statistician. I confess to being a Bayesian.

Tubby raises a good point. When does one have enough information to be willing to risk one’s money? As a reinsurance actuary, my job is to bet the company’s money on various random events. One part of the job is to create a model that forecasts a profit. Another part is to decide whether the model is reliable enough. People sometimes have different opinions, depending on which facts they focused on and how they analysed them. In the short run, a decision may be a matter of opinion. However, in the long run, some companies go bankrupt while others prosper. So far, we’re in the latter group.

To get a flavor of how to evaluate the validity of a model, try the following two multiple choice questions:

Question #1: You’re in a gambling casino and offered a chance to play a new game. All you know about the game is that if you bet $100 you will either lose $50 or win an additional $100. Should you play?

A. You should play the game, because the amount you could win is twice the amount you could lose. Since you know nothing about the odds, you can assume that they’re 50-50.

B. You shouldn’t play. You should assume that the casino has arranged the odds so that they’re in the house’s favor.

C. You shouldn’t play because you don’t know enough.

Question #2: There is a business opportunity where your potential gain is twice your potential loss. Should you do the deal?

A. Do it. You have more to gain than to lose.

B. Don’t do it. Someone else may know more than you do. The deal may favor that person.

C. Don’t do it. You don’t know enough.

My answers to both #1 and #2 would be B or C.

So when SHOULD you risk money on a deal? In my opinion, the time to risk money is when you are the expert, who knows more than the next person.

Not to be contrarian, but I still cast my vote with Holger and those who agree with him.

He identified a strategy which works under a broad range of reasonable probability distributions of amounts in the envelope pairs: Pick a number that you feel is somewhere near the high end of your estimate of the offerer’s budget. If the first envelope contains more than that, keep it; if less, switch.

We’ve mainly discussed the case where S (the amount in the smaller envelope) is uniformly distributed between 1 and 100. Of course, you don’t know the actual probability distribution. But Holger’s strategy works in this case, and also for other reasonable probability distributions, described below.

To demonstrate this for the uniform distribution case, I would propose modifying PTVRoman’s excellent suggestion of a spreadsheet as follows:

Add a cell containing the offerer’s actual top value for S (say 100);

Add a cell containing your estimate of the top value of S (initially 100);

Change the final column formula to keep the original amount if the value of X (the amount in the first envelope) is greater than your estimate of the top value of S, otherwise switch.

If you run this trial repeatedly, you’ll see that this strategy increases your payment over the long run.

But what if your estimate was bad? Change your estimate of S(max) to 10. You’re still better off, as you are if you estimate S(max) = 150. If you estimate S(max) to be 200 or more, you wind up switching all the time, so the advantage disappears.
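Here is a rough Python version of that modified spreadsheet (assumptions as above: S uniform on 1 to 100, so the offerer’s true top value of S is 100):

```python
import random

def average_payout(guess_smax, trials=200_000, true_smax=100):
    """Keep envelope #1 if it holds more than guess_smax; otherwise switch."""
    total = 0
    for _ in range(trials):
        s = random.randint(1, true_smax)
        x, other = random.sample([s, 2 * s], 2)
        total += x if x > guess_smax else other
    return total / trials

print(average_payout(100))  # ~94.6: a good estimate of S(max) wins
print(average_payout(10))   # ~76.0: even a badly low estimate helps a little
print(average_payout(200))  # ~75.75: estimate too high means you always switch
print(average_payout(0))    # ~75.75: always keeping does no better
```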

So, tubby, you were right to be troubled by the expected value issue.

The probability distribution could also be more like a sweepstakes model. Say your local gambling den offers you an envelope of casino bux as a thank-you for spending an evening enduring a presentation on the pleasures of visiting their establishment. But they’ve actually prepared envelope pairs containing $S and $2S, and they offer each attendee the privilege of switching. Should you?

You might reasonably guess that the distribution of S is loaded up with smaller amounts, with the probability thinning out as S increases. In this case, the conditional probability that X = 2S is actually > 0.5 (i.e., the frequency of (.5X, X) envelope pairs is greater than the frequency of (X, 2X) pairs). But Holger’s strategy still works, except that your criterion amount should be set somewhere below your S(max) estimate, depending on the steepness of the distribution.
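This case can be sketched too, under one assumed thinning-out prior (my choice: P(S = 2^k) proportional to 2^-k for k = 1 to 10):

```python
import random

ks = range(1, 11)
population = [2 ** k for k in ks]  # S in {2, 4, ..., 1024}
weights = [2.0 ** -k for k in ks]  # probability thins out as S grows

big = seen = 0
for _ in range(400_000):
    s = random.choices(population, weights)[0]
    x, other = random.sample([s, 2 * s], 2)
    if x == 8:                # condition on one concrete observed amount
        seen += 1
        big += (x == 2 * s)   # was the $8 the larger envelope of its pair?
print(big / seen)             # ~2/3 > 0.5: seeing $8, the other is usually $4
```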

To address this problem from the point of view of “ideal” logic or “ideal” probability, the error that has been made is in trying to quantify the chance of getting a better envelope by switching. (In a “real world” scenario, all that stuff about the motivations of the game creator, estimated limits of the possible payout as a way to add information to the game, etc. does apply. But I want to talk about the “pure” game.)

Essentially, all the information about numbers and chronology is a red herring. This game can be restated as follows: “Here are two envelopes. The first one has some money. The second one has some money. Pick one envelope or pick the other.” Everything about ratios, counting the money in one envelope or not counting it, switching envelopes rather than just picking one or the other, etc. is INSIGNIFICANT information - it does not affect the game. (In the real world, of course it applies. You count the money in the first envelope, if you’re happy you leave.)

Look at it this way - suppose you could choose to play one of two variants of this game, one where the ratio was 2x and the other where it was 50x. Does that “change” the odds of getting the envelope with more money? You have calculated that the “expected value of the second envelope is 1.25M [=(.5M+2M)/2]” in the first game - but in the second game isn’t it 25.01M? Obviously that’s ridiculous - in either game you have the same chance of picking the envelope with the greater amount of ducats.

Suppose we added MORE information as follows: in the first game, the envelopes contain $45 or $90; in the second, they contain $2 or $100. Would anyone choose to play the second game simply because the expected-value multiplier for switching is almost 20 times as large (25x vs. 1.25x) under its rules? I bet most of you would still play the first game, and your calculation would be based on “real world” considerations involving your level of satisfaction with the proposed payouts, regardless of the “odds.”

BTW, by the same logic, showing the contestant one of the remaining two doors adds no relevant information in the Monty Hall problem on the other thread.

"BTW, by the same logic, showing the contestant one of the remaining two doors adds no relevant information in the Monty Hall problem on the other thread. "
~Sayeth David Forster

You should switch. That’s all I have to say.


“All I say here is by way of discourse and nothing by the way of advice. I should not speak so boldly if it were my due to be believed.” ~ Montaigne

Suppose the envelope you opened had $8. You now know that the pair of envelopes originally contained 4&8 or 8&16. Consider two pairs of events:

A. Before an envelope was opened, the pair of envelopes originally contained 4&8 or they contained 8&16.

B. After an envelope was opened and $8 was seen, the other envelope contains $4 or it contains $16.

DavidForster wants to make a decision about B without considering A. However, he is effectively making an assumption on A, because the probabilities of the two cases in A turn out to be equal to the probabilities of the two cases in B. (See PROOF below.)

After seeing $8, DavidForster assumed that the chances of the other envelope having $4 or having $16 were 50-50. (This can be seen because he used the 50-50 probabilities to calculate his expected value.) Therefore, he implicitly assumed that the original probabilities were also 50-50.

If the probability of 4&8 is p, the expected value of switching is p*4 + (1-p)*16. This expression will be greater than 8 whenever p is less than 2/3.

In summary:

  1. If you think that the original probability of 4&8 was less than 2/3, then you should switch envelopes.
  2. If you think that the original probability of 4&8 was greater than 2/3, then you shouldn’t switch.
  3. To make a sensible decision, you should not ignore the original probabilities.

PROOF using Bayes Rule:
Suppose p was the original probability that the envelopes contain 4&8 and (1-p) was the probability that they contained 8&16. After an envelope is randomly selected and $8 is seen, the conditional probability of $4 in the other envelope is

      p*.5/[p*.5 + (1-p)*.5] = p
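A quick numerical check of this computation, and of the 2/3 indifference point above (illustration only):

```python
import random

def check(p, trials=400_000):
    """Prior: (4,8) with probability p, else (8,16); condition on seeing $8.
    Returns (P(other envelope is $4), expected value of switching)."""
    n_seen = n_four = switch_total = 0
    for _ in range(trials):
        pair = (4, 8) if random.random() < p else (8, 16)
        seen, other = random.sample(pair, 2)
        if seen == 8:
            n_seen += 1
            n_four += (other == 4)
            switch_total += other
    return n_four / n_seen, switch_total / n_seen

print(check(0.5))    # ~(0.5, 10.0): posterior = p, and switching beats $8
print(check(2 / 3))  # ~(0.667, 8.0): at p = 2/3 you are exactly indifferent
```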

ARGH!!! Good LORD!

A Cecil sighting and only one person commented?!! NOBODY NOTICED?!
Where was I? Dangit. Which way did he go?!

Btw… he was the only one who answered my riddle correctly. My life is complete.