
View Full Version : Envelopes with money (a new conundrum)

05-18-1999, 06:39 PM
Okay, here's another one that's been bugging me for ages.

There are two envelopes with cash. One envelope has twice the amount of cash as the other envelope.

You are given one of the envelopes. You have the option of keeping it (with the money) or trading it for the other envelope.

What should you do?

As far as I can tell, you should trade. The other envelope has a 50% chance of being 1/2 the money and a 50% chance of being twice the money. Thus if I opened up my envelope and it contained \$4, then I would have a 50% chance of \$2 and a 50% chance of \$8, for an expected value of \$5 for switching. That's greater than my current money, so I should switch.

But that would lead me to pick up one envelope, look at the money, put it down, pick up the other one and walk away. Why is that a better strategy than just staying with the first envelope?
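This can be checked empirically. Below is a small simulation sketch; the range for the smaller amount (\$1 to \$100) is an arbitrary assumption made only so there is something to draw from, and the conclusion doesn't depend on it:

```python
import random

# Simulate the game as stated: the smaller amount X is drawn at random,
# the envelopes hold X and 2X, and you are handed one at random.
def play(switch, rounds=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        x = rng.randint(1, 100)       # smaller amount (assumed range)
        envelopes = [x, 2 * x]
        mine = rng.randrange(2)       # handed one envelope at random
        if switch:
            mine = 1 - mine           # the "always trade" player
        total += envelopes[mine]
    return total / rounds

stay = play(switch=False)
swap = play(switch=True)
# Both averages come out near 1.5 * E[X] = 75.75: switching blindly
# neither helps nor hurts.
```

The always-switcher and the always-keeper end up with the same long-run average, which is the puzzle: the 1.25x expected-value argument says they shouldn't.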

05-18-1999, 08:09 PM
Let's say one envelope contains \$X and the other contains \$2X. You don't know what X is. Before you pick an envelope, your expected value is .5X +.5(2X) = 1.5X. This is of course silly, you'll never get 1.5X, you'll only get X or 2X, but the expected value is the mean.

So, now say you have one envelope. You rightly calculate that the expected value if you switch is 1.5X. But note that the expected value if you DON'T switch is also 1.5X (that is, there's a 50% chance that you have the high one, and a 50% chance that you have the low one.)

In short, the expected value ON THAT ONE TRIAL is always higher than whatever you have, whether you switch or not.

What that really means is that, over many trials, you will probably get the high one half the time and the low one half the time; thus the expected value is the mean.

The seeming paradox arises because you are applying the expected value to a single trial... you have to think of it as the expected results after many, many trials.

That clearer?

05-18-1999, 09:20 PM
I think the paradox has been misstated. If you don't look at the contents of the first envelope, it doesn't matter whether you switch - the expected value of the first envelope is 1.5X, which is also the expected value of the other envelope.

It's more interesting if you count the money in the first envelope before deciding whether to switch. In that case, you are able to recalculate the expected values. If M is the amount of money in the first envelope, then the expected value of the first envelope is M (obviously), while the expected value of the second envelope is

(0.5M + 2M)/2

which equals 1.25M

But that seems counterintuitive to me...am I overlooking something?

05-19-1999, 09:12 AM
I think the bug is in the probabilities. Different amounts of money do NOT have equal probabilities of being in an envelope.

My point is that we're lacking some context. Whoever put that money into the envelopes must have a certain limited budget to begin with. I've seen this riddle in the scenario of a game show where you're allowed to choose from two prize envelopes, and while game shows do continually increase their prize money, they are very unlikely to give you, say, a billion dollars. If I found ten million in my envelope, I'd expect the other to contain five rather than twenty million.

What it boils down to is that, depending on the context, there should be some long-term (over many shows) or common-sense expected value independent of X. And if your X is lower than that, you should switch; otherwise, you shouldn't.

Holger

05-19-1999, 03:42 PM
Good point about if I don't open the envelope -- so let's assume I do look at the envelope first, then try to decide whether to switch.

While I understand the point about the game show prizes, I'm not sure I agree in this case. That explanation sounds similar to Cecil's "but the game show host might want to fool you" answer.

Let's assume you know for a fact that the limit is \$100 and you get an envelope with \$4 in it. Has that really told you anything about whether you should switch or not? Are you really more likely to be holding the higher amount just because \$4 is closer to \$100 than \$2 is?

The \$100 limit does put an interesting twist on it, however: if the amounts were truly random up to \$50 and \$100 in each envelope, then anytime I picked anything over \$50 I'd know I was holding the larger envelope and wouldn't switch, so it would be a pretty bad deal for the game show.

That said, you didn't pick \$51, you picked \$4. The question remains: what should you do?
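The \$100-cap scenario can be simulated. Assuming the smaller amount is drawn uniformly from \$1 to \$50 (so no envelope exceeds \$100), the rule "switch whenever you see \$50 or less" beats never switching, because every amount over \$50 is guaranteed to be the larger envelope:

```python
import random

# Compare strategies under an assumed cap: smaller amount X is uniform
# on $1..$50, envelopes hold X and 2X, one is handed over at random.
def average_winnings(strategy, rounds=200_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        x = rng.randint(1, 50)
        envelopes = [x, 2 * x]
        i = rng.randrange(2)
        if strategy(envelopes[i]):    # strategy sees only your amount
            i = 1 - i                 # ...and decides whether to switch
        total += envelopes[i]
    return total / rounds

never  = average_winnings(lambda seen: False)       # ~ $38.25
thresh = average_winnings(lambda seen: seen <= 50)  # ~ $47.75
```

So under this model the \$4 envelope is well below the threshold and switching pays; the gain comes entirely from keeping the amounts that must be the larger envelope.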

05-19-1999, 04:23 PM
Let's restate the problem to remove some red herrings. Since you make your first pick blindly, you aren't really "choosing" an envelope on your first pick - the envelopes are indistinguishable.

So instead, let's say that someone (Monty Hall, maybe) offers to play a game with you: He will flip a coin - fairly, of course - and if it comes up heads he will give you \$20. If it comes up tails, you must give him \$10. Do you want to play that game?

Yes you do, because on average you will make \$5. This is equivalent to finding \$20 in the first envelope - the second envelope either contains \$20 more than the first, or \$10 less.

BTW, My doubts about my earlier post have been removed :-)

05-19-1999, 05:16 PM
Let's try this from a different perspective.
The problem is that your perception of the probabilities and the actual probabilities are different.

OK, suppose the host puts \$4 in one envelope and \$8 in the other. Suppose he is doing this 1,000 times, with 1,000 different contestants simultaneously, who cannot communicate with each other, and who do not know the amounts in the envelopes. Each contestant picks one of the two envelopes presented to him/her.

The expected value is \$6... that is, about half the people will pick the \$4 envelope and half will pick the \$8 envelope. No one contestant will actually win \$6, but that's the expected value on average over a large number of trials.

And that's the reality: if X is the lower amount, the expected value is 1.5X.

Now, suppose the contestant is allowed to open the envelope and switch if he wants. The fact is that the contestant gets no new information by having opened the first envelope. He doesn't know whether the other envelope contains \$8 or \$2. The reality: the expected value is still \$6.

That's what makes this problem different from the three-door problem, where the contestant receives new information by being shown what's behind the second door.

OK, so the contestant, not knowing what amounts are in the envelopes, calculates as Manduck does. Say his envelope contains \$4; he calculates 50% chance of \$2 and 50% chance of \$8, so expected value of \$5. This is an erroneous calculation, because the chance of \$2 is in fact zero, but the contestant doesn't know that. Alternately, if his envelope contains \$8, he calculates the expected value as 50% chance of \$4 and 50% chance of \$16, thus \$10. Again, he doesn't know that there is zero chance of \$16.

So he calculates an expected value of 1.25 x the amount in his envelope.

Does that mean he should switch? No. It doesn't matter. His expected value is the same whether he switches or not. The fact that the expected value of the game is slightly higher than what you hold in your hand does NOT, in itself, mean there is an advantage to switching.

Two points:
(a) By not knowing the amounts, the contestant cannot correctly calculate the expected value. He can calculate what he thinks the expected value might be, which is still slightly more than what he finds in the envelope.

(b) The expected value of the outcome is exactly the same, whether he switches envelopes or keeps the one he has. The fact that the expected value appears to be higher than what he sees in the envelope is a red herring.

Manduck's coin toss example would better be expressed: Suppose that, while the coin is in the air, you can call for a switch of the rules, reversing the heads/tails results. Does that affect your odds? Answer: not at all. The expected value is still the mean of the outcomes.

05-19-1999, 08:00 PM
CDK, yes you are right. The doubts I expressed in my first post were well-founded, and I feel kinda stupid now.

The way I understand the problem now is:

There is a 50% chance you chose the 'right' envelope with your first pick. Nothing you do afterwards can change that probability, so the other envelope can only represent the other 50%. This makes it the same as the 3-door problem, i.e. there is a 1/3 chance that you picked the right door initially, so the other 2/3 belongs to the last remaining door after the host opens the loser door.

05-20-1999, 04:36 AM
tubby wrote: Let's assume you know for a fact that the limit is \$100 and you get an envelope with \$4 in it. Has that really told you anything about whether you should switch or not? Are you really more likely to be holding the higher amount just because \$4 is closer to \$100 than \$2 is?

Under your assumption of equal probabilities, no. My point is: In game shows, probabilities are not equal. You can expect both very small and very large sums to be unlikely, because shows where no one ever wins would lose their viewers. Thus, if you really held \$4, you should switch. You may still lose in the specific case, but in the long run, the strategy will be beneficial. (Though you probably wouldn't be invited frequently enough to even try a "long run".) Of course, our model is now heavily customized towards game shows, but where else would you ever be handed two envelopes of money to choose from?

CK's point of the contestant not knowing the real probabilities is an additional problem that I hadn't mentioned. Without ANY information, a smart decision is simply impossible. I was merely referring to the probabilities missing in our model, but assumed that if we model them, the contestant can use them.

Holger

05-21-1999, 09:40 AM
Here's another similar one:
You have a choice of two identical boxes which measure 12 X 12 X 12 inches. Which would you rather have?
A. Box #1, which is full of 10-dollar gold pieces, or
B. Box #2, which is half full of 20-dollar gold pieces.

05-21-1999, 01:01 PM
I understand the case where, when presented with two envelopes, one with \$X and one with \$2X, choosing one will give you 1.5X "probabilistic dollars." That is, if you were presented with a million opportunities to play this game (hence two million envelopes, one million with \$X and one million with \$2X) you'd average almost exactly \$1.5X per game. Essentially, your decision to swap doesn't make any difference.

The second case is where you correctly assume that by counting the money in your envelope, the probability of the other envelope containing \$0.5X or \$2X is equal and 50/50. You then incorrectly assume 1.25X "probabilistic dollars." That game would be one where you are given an envelope with \$X in it, then presented with the option to either take \$X or flip a coin (a fair coin, for all you sticklers) and if it's heads, take \$0.5X and if it's tails take \$2X. In that case, you would be correct in assuming 1.25X "probabilistic dollars" by choosing the coin flip but only \$X by not choosing the coin flip.

The thing you forget in the incorrect case is that you give up your "current" envelope in the exchange, so although it's true you could get \$0.5X or \$2X by trading, you would be giving up either \$2X or \$0.5X in the trade, so you'd end up with \$1.5X in either case.

Since this last example leaves us with real dollars instead of "probabilistic dollars," it's better to think in that way because you will actually get \$1.5X. ;-)

05-21-1999, 03:49 PM
As a person working in the field of probability, I would say that The Incredible Holg has the right handle on the analysis. One key is that "probability" is not an automatic property of a situation; it can only be defined relative to a set of assumptions, and those assumptions must be consistent. It's not enough to simply assume that one envelope has twice as much money as the other. You need to start with some "a priori" probabilities of the likelihood of various amounts of money being in the envelopes, before either one is opened. You can then calculate new ("a posteriori") probabilities based on the additional information gained by opening one envelope. This is called a "Bayesian analysis."

If this were a real situation, you might be able to get some reasonable (subjective) a priori probabilities from your knowledge of who put the money in the envelopes and why and how they did it. After all, people don't just give us envelopes full of money. However, if you were unable to make any assumption about the probability of various amounts prior to opening an envelope, then there is simply no formal answer to the problem.
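The Bayesian recipe can be worked through on a toy prior. The prior here is invented purely for illustration: suppose the donor draws the smaller amount X uniformly from {\$1, \$2, \$4, \$8}, fills the envelopes with X and 2X, and you open one at random and see \$4:

```python
from fractions import Fraction

# Assumed (illustrative) prior over the smaller amount X.
prior = {x: Fraction(1, 4) for x in [1, 2, 4, 8]}
seen = 4

# P(see $4 | X = x): 1/2 if $4 is one of the two amounts, else 0.
likelihood = {x: (Fraction(1, 2) if seen in (x, 2 * x) else Fraction(0))
              for x in prior}

# Bayes' rule: posterior proportional to prior * likelihood.
post = {x: prior[x] * likelihood[x] for x in prior}
norm = sum(post.values())
post = {x: p / norm for x, p in post.items() if p}

# Under THIS prior, X=2 (other envelope holds $2) and X=4 (other holds
# $8) come out equally likely, so the naive 50/50 happens to be right.
other_ev = post.get(2, 0) * 2 + post.get(4, 0) * 8   # expected value of switching
```

Here the posterior is 50/50 and switching has expected value \$5, but only because this particular prior made it so; reweight the prior and both numbers change, which is exactly the point about needing a priori probabilities.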

05-21-1999, 05:22 PM
Holger:
What it boils down to is that, depending on the context, there should be some long-term (over many shows)
or common-sense expected value independent of X. And if your X is lower than that, you should switch;
otherwise, you shouldn't.
tubby:
That said, you didn't pick \$51, you picked \$4. The question remains: what should you do?

You should switch. In this model, you did gain information by counting the money; you learned that it was well under your a priori limit beyond which you shouldn't switch.

Say there were lots of trials, where a number X between 1 & 50 (or some larger number) was randomly chosen, then envelopes with \$X & \$2X prepared & offered; one is selected. Of the trials where a \$4 envelope is selected, you can expect the other envelope to have \$2 half the time and \$8 the other half of the time.
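That model is small enough to enumerate exactly, with no randomness needed:

```python
from collections import Counter

# X is any whole-dollar amount from 1 to 50, the envelopes hold $X and
# $2X, and one envelope is selected at random. Every (X, selection)
# case is equally likely a priori; tally the other envelope's contents
# in the cases where the selected envelope shows $4.
other = Counter()
for x in range(1, 51):
    for pick in (0, 1):
        envelopes = (x, 2 * x)
        if envelopes[pick] == 4:
            other[envelopes[1 - pick]] += 1

# Exactly two equally likely cases show $4: X=4 with the smaller
# envelope picked (other holds $8), and X=2 with the larger picked
# (other holds $2).
```

So under a uniform prior on 1..50, seeing \$4 really does leave 50/50 odds on \$2 vs. \$8, and switching has expected value \$5: consistent with the advice to switch anything well below the cap.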

05-22-1999, 12:14 AM
Here's another similar one:
You have a choice of two identical boxes which measure 12 X 12 X 12 inches. Which would you rather have?
A. Box #1, which is full of 10-dollar gold pieces, or
B. Box #2, which is half full of 20-dollar gold pieces.

Depends. Does a well-packed pile of the \$20 pieces take up twice the volume of the same number of \$10-pieces? If so, obviously take the full box. If the coins are of identical size, it doesn't matter which box you take. Intermediate cases need to be treated carefully.

Rick

05-24-1999, 07:34 PM
I think the original problem might be better understood in terms of gambling odds, rather than all those potential probabilistic dollars that it's so hard to get them to accept down at K-mart. It's just another way of looking at the same statistics.

You're given an envelope with \$4 in it. Great, now you can forget about how you got it, and ignore the first envelope completely, because "history" is irrelevant to the odds. You've got \$4.

You're shown an envelope. It's got a 50% chance of having \$2 in it, and a 50% chance of \$8. If you take it, and it's the \$2 envelope, then you've got \$2 - a loss of \$2 from your original \$4. If it's the \$8 envelope, you've got \$8 - a gain of \$4.

So, essentially, you've been given the opportunity to bet \$2 on a 50% chance of winning \$4 (and a 50% chance of losing the \$2). That is, a "win" that happens half the time pays you twice your bet. Thus, there's no statistical advantage to taking or not taking the envelope.

05-25-1999, 08:34 AM
Unfortunately, SSittser is begging the question in assuming:

"You're shown an envelope. It's got a 50% chance of having \$2 in it, and a 50% chance of \$8."

The essence of this problem is, what are the correct odds of \$2 or \$8 in the envelope? One cannot simply assume that unknown probabilities are 50-50. In the absence of ANY information, there does not seem to be any right answer.

In the real world, you would always have SOME information. E.g., What do you know about the person who told you about the envelopes? What reason was provided for this odd way of giving money? Do you fully believe the person who told you? It may be that you could create some subjective probabilities based on such considerations.

In forming these probability judgments, different people might reach different conclusions. This is comparable to asking, "What is the probability that the Yankees will win the 1999 World Series?" You and I might judge that different odds were appropriate. Presumably we would have used different reasoning or focused on different facts.

The envelope problem is particularly intractable because there is no normal real-world situation like this. I mean, people don't give us a choice of envelopes with \$2 or \$8. So, it's unclear how to interpret the information provided. Furthermore, there's no way to test any answer. By comparison, gambling odds are validated by the success of casinos.

In the absence of an assumption of what the odds are or how the person chose to put money into the envelopes, I continue to believe that there is no answer. Or, to put it another way, one can get ANY answer by assuming some process by which the money was originally put into the envelopes.

05-25-1999, 06:44 PM
I believe december is making this more complex than it really is. The original problem states:

There are two envelopes with cash. One envelope has twice the amount of cash as the other envelope.

That is, it is inescapable that there is a 50% chance of a randomly-chosen envelope being the "twice as much" envelope.

You are given one of the envelopes.

The only point I can see that could possibly be considered debatable is whether the envelope given to you is chosen at random, or chosen in a non-random fashion by someone who knows which envelope contains what. Since the problem is clearly presented as a logical/mathematical puzzle, I think random selection is clearly implied. Otherwise, we are left with the pointless task of attempting to psychoanalyze the "envelope giver", who is in no way identified or discussed in the problem.

The original problem goes on to suggest that:

The other envelope has a 50% chance of being 1/2 the money and a 50% chance of being twice the money.

That is, the mathematical probabilities are as expected. There is no hint of any attempt to turn this into a psychoanalytical exercise.

So I wasn't begging the question, but rather restating the original terms of the problem we're considering.

But if we're going to change to the psychoanalytical question: Well, then if my mom gives me the envelope, I keep it, and if Bill Clinton gives me the envelope, I trade.

I don't even want to think about the "Yes, but he'd probably suspect you'd trade, so he'd give you the "win" envelope, but then he knew you'd suspect that he would suspect you'd trade, so he'd give you the "lose" envelope, but he knew that you'd know that he'd know that you'd suspect, so..." question. That was handled sufficiently in "The Princess Bride".

december also states that:
Furthermore, there's no way to test any answer.

Statements that are provable mathematically or logically don't need to be "tested". However, repeated trials are useful for reassuring us that we haven't done the math wrong - they back up our deduction with induction. For example, we don't need to draw and measure a bunch of triangles to prove that the Pythagorean Theorem is true, but doing so may make us feel better about the proof. In the case at hand, it would be a simple matter to run a few trials (or a few million on a computer) and see that envelope traders and envelope keepers end up with the same amount of money in the long run.

It's worth pointing out that there are two different sorts of gambling odds: those that are purely mathematical, and those that are judgment calls. Odds for a coin toss, a slot machine, a roulette wheel, a lottery, and our envelope problem are of the first sort. They are 100% predictable (assuming that no one is "cheating" - in effect, violating the assumptions of randomness upon which the odds are based). Odds for a horse race, or any athletic contest, or weather ("30% chance of rain"), or the "psychoanalyze the envelope giver" problem are of the second sort. They are very dependent on the analytical skills of the odds-maker, and different people could legitimately come up with different odds.

One last point: logic problems commonly bear a rather tortured relationship to reality ("People from tribe A always tell the truth, but those from tribe B always lie...", "Given a straight line on an infinite, flat plane...", etc.), as a way to explicitly limit the problem to questions of logic. The solutions are, of course, applicable to "real life" only if you keep in mind how well, or poorly, the assumptions of the problem fit the actual situations encountered. That doesn't make them any less "correct" on their own terms.

05-26-1999, 10:26 AM
SSittser's comment about doing a real-world experiment made me realize something: sometimes designing the experiment makes things clearer, particularly because it forces you to make definitive assumptions.

In this particular case, let's say the envelopes are limited to \$1 and \$2 so we can do repeated tests. I assume one of the envelopes is presented to "you" at random, hence we flip a coin to determine which of the envelopes you get. Hence, on average, you will get \$1.50 handed to you (\$1 * 1/2 + \$2 * 1/2 = \$1.50).

The corollary to this is that the "provider" had a total of \$3 for each run of the experiment; on average they'll hand out \$1.50, so they keep \$1.50 on average. This is the key part: whether you swap at random or not, you'll average \$1.50.

Now, if you figure out and/or know for sure the \$1 is the smaller of the two envelopes, you'll always swap and be better off. _Any_ additional information is useful, so any time you have better than 50/50 odds of knowing which is the greater dollar amount, you'll do better than \$1.50 on average.

Ok, I'm satisfied with this now. :-)
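The \$1/\$2 experiment above runs nicely in code. With no information you average \$1.50; a player who somehow knows which envelope holds the \$1 (and always swaps out of it) collects \$2 every time, confirming that any edge in identifying the larger envelope is worth real money:

```python
import random

# A coin flip hands you the $1 or the $2 envelope. The "informed"
# player knows $1 is the smaller amount and always swaps out of it.
def run(informed, rounds=100_000, seed=2):
    rng = random.Random(seed)
    total = 0
    for _ in range(rounds):
        held = rng.choice([1, 2])
        if informed and held == 1:
            held = 2              # swap up, using the extra knowledge
        total += held
    return total / rounds

uninformed = run(False)   # ~ $1.50
perfect    = run(True)    # exactly $2.00
```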

05-26-1999, 04:50 PM
SSittser writes:

"That is, it is inescapable that there is a 50% chance of a randomly-chosen envelope being the "twice as much" envelope"

This is the nub of the problem, and I agree that it appears plausible. However, consider two points:

1. The assumption that switching is always 50-50 leads to a contradiction. It would imply that one would get a better result by always choosing to switch envelopes.

On the other hand, it is also clear that the average amount of money in the envelope opened first must be the same as the average amount of money in the other envelope. So, if we always switch, then we should wind up the same on average as if we never switched.

2. Your probabilities depend on a judgment made by an intelligent human being. The puzzle does not tell us how the donor made his decision. E.g., suppose that he started with two sets of two envelopes. One pair had \$2 and \$4, the other pair had \$4 and \$8. He chose one pair at random and gave it to you. In this case, when your first envelope is shown to have \$4, it is indeed 50-50 that the other envelope has either \$2 or \$8. If you switch, your expected value is \$5, so you are better off switching.

But, suppose the donor chose a random pair of envelopes from three pairs of envelopes, two with \$2 and \$4 and one with \$4 and \$8. Now, when your first envelope shows up with \$4, the odds are 2 to 1 that the other envelope has \$2 rather than \$8. In this case, your expected value by switching is \$4, which equals what you have if you keep the first envelope. You would be neutral about switching.

Try a third possibility. Suppose the donor selected a random pair of envelopes from 4 pairs, 3 of which had \$2 and \$4 and one of which had \$4 and \$8. Now, if \$4 appears, the odds are 3 to 1 that the other envelope has \$2. You would lose by switching.

The point is that your decision depends on the procedure that the donor used to create the envelopes originally, and you simply do not know how he did it.

The reason I said that this problem could not be tested is the same reason. In order to set up a test, you would need to know (or assume) what procedure the donor used to decide how much money to put into the envelopes originally.
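The three donor procedures described above can be computed exactly; the code below just re-expresses the counting argument from the post:

```python
from fractions import Fraction

# Each scenario lists the envelope pairs the donor chooses among,
# uniformly at random; you then open one envelope of the chosen pair
# at random and find $4.
scenarios = [
    [(2, 4), (4, 8)],                      # one pair of each
    [(2, 4), (2, 4), (4, 8)],              # two ($2,$4) pairs to one ($4,$8)
    [(2, 4), (2, 4), (2, 4), (4, 8)],      # three to one
]

def ev_of_switching(pairs, seen=4):
    # Every (pair, envelope) combination is equally likely a priori;
    # keep the cases consistent with seeing $4 and average the other
    # envelope's contents.
    others = [pair[1 - i] for pair in pairs for i in (0, 1) if pair[i] == seen]
    return Fraction(sum(others), len(others))

evs = [ev_of_switching(s) for s in scenarios]
# -> $5 (switch), $4 (indifferent), $3.50 (keep), matching the post.
```

Same observed \$4, three different correct answers: the decision really does hinge on the donor's (unknown) procedure.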

05-27-1999, 04:49 PM
You know that one envelope has twice as much money as the other. You pick an envelope, look in it, and there's \$4 in it. There is a 50% chance that the other envelope has \$2, and a 50% chance it has \$8. Yes, your expected result is \$5. However, all of this tells you that originally there was a 50% chance of there being \$6 in the two envelopes, and a 50% chance of \$12, with an expected total of \$9 between the two, or an "expected" large envelope of \$6 and an "expected" small envelope of \$3, for an "average" envelope of \$4.50. Your \$4 is already \$0.50 below that average.

The problem here is that everyone is trying to compare the values as 2x and 1/2x. Really, you should be looking at x vs. x+x. You don't know if you have x dollars or 2x dollars. By trading, you have a 50% chance of getting an additional x dollars, and a 50% chance of losing x dollars. Net value, 0.

If you don't believe me, set up a spreadsheet. The first column has a random number from 1-100. The second column has twice the first. The third column randomly picks from the two. The fourth column takes whichever one the third doesn't. Make a few hundred rows of this, totaling up the third and fourth columns. Compare the two totals; they should be within a couple percentage points of each other. Save the results, re-calculate, and repeat a few hundred more times, saving the results each time. Then total up the totals in the saved results. The two grand totals should agree to within a small fraction of a percent.

Congratulations! You just did your first Actuarial work - a Monte Carlo simulation.
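That spreadsheet translates directly into a short program; the 1-100 range and "a few hundred rows" come straight from the description above:

```python
import random

# One loop iteration per spreadsheet row: column 1 is a random number
# from 1-100, column 2 is twice it, column 3 picks one of the two at
# random, column 4 takes the other.
rng = random.Random(42)
kept = swapped = 0
for _ in range(500):                  # "a few hundred rows"
    low = rng.randint(1, 100)
    pair = (low, 2 * low)
    pick = rng.randrange(2)
    kept += pair[pick]                # column 3 total: envelope keepers
    swapped += pair[1 - pick]         # column 4 total: envelope traders

ratio = kept / swapped
# The two totals land close to each other, and the gap shrinks as you
# add rows: keepers and traders do equally well in the long run.
```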

------------------
"Outside of a dog, a book is a man's best friend.
Inside of a dog, it's too dark to read."
Groucho Marx

05-27-1999, 07:54 PM
A riddle from childhood:

Question: What has two horns, four wheels, says "Moo", and gives milk?

The point is that we must generally rely on the conditions stated in a problem when we try to solve the problem. The cow joke is funny (well, at least *I* think it's funny) because it's an "unfair" question that violates that assumption - that's what makes it a joke, rather than a true riddle or problem-solving exercise.

I believe december is adding theoretical conditions that are not in the original problem. This creates interesting, new problems (I particularly like the twist about choosing envelope pairs from sets of pairs), but they are not the original problem.

Essentially, you can create new problems by saying "but what if <new condition>?". For example:

- But what if you knew how much money the envelope-stuffer had to begin with? (Then, by looking at the amount in the envelope I was given, I could figure out which I had.)

- But what if the envelope giver threatened to shoot you if you lose? (Can I give both envelopes back? The only winning move is not to play...)

- But what if the money was all in pennies, and you were allowed to weigh both envelopes? (Obviously an easy win.)

- But what if you had played this game with the envelope giver 100 times before, and found that 85% of the time the giver handed you the "win" envelope? (You might reasonably be inclined to never trade, in hopes that this is a bias of the giver that's likely to continue.)

Etc. Giving me more information than the original problem contained (such as more information about how the envelopes were chosen, and out of what pool of possible envelopes) changes both the problem and the solution.

So, to solve the original problem it's not necessary to say, "but what if I knew more about how the envelopes were chosen?" You don't know. You're not told. You're handed an envelope. What do you do?

Of course in "real life" the problem's assumptions (such as random selection) might not apply. The classic story "The Lady or the Tiger" (by Frank Stockton) is similar to the envelope problem, with the crux of the story being that the selection is most definitely NOT random.

05-28-1999, 12:33 AM
Let's say we had a thousand sets of these envelopes. The thousand envelopes with the higher amount have twice as much money in them as the thousand envelopes with the lower amount. (Notice I am not saying that you can distinguish the sets.) Therefore the "bad" envelopes contain on average \$X and the "good" envelopes contain \$2X. If you picked a bad envelope and switch, you will make an additional \$X on average. If you picked a good envelope and switch, you'll lose \$X on average. That's why switching balances out as you would expect. On the other hand, if you were just offered the chance to flip a fair coin to decide whether to double or halve your money, then you should definitely go for it.
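The thousand-sets argument is easy to check with a quick simulation (a sketch only; the \$4 base amount and the trial count are arbitrary assumptions for illustration):

```python
import random

def simulate(trials=100_000, base=4):
    """Compare always-keeping vs. always-switching over many envelope pairs."""
    keep_total = switch_total = 0
    for _ in range(trials):
        pair = (base, 2 * base)                # one envelope holds X, the other 2X
        mine, other = random.sample(pair, 2)   # you are handed one at random
        keep_total += mine                     # payoff if you keep
        switch_total += other                  # payoff if you switch
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = simulate()
# Both averages converge on 1.5 * base: switching gains nothing on average.
print(keep_avg, switch_avg)
```

Over many trials both strategies average out to 1.5X, exactly as the post argues: the \$X you gain by switching from a bad envelope is cancelled by the \$X you lose switching from a good one.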

This was a good problem though. The paradox really had me stymied for awhile.

06-01-1999, 01:10 PM
PTVroman writes: "You know that one envelope has twice as much money as the other. You pick an envelope, look in it, and there's \$4 in it. There is a 50% chance that the other envelope has \$2, and a 50% chance it has \$8."

SSittser writes: "So, to solve the original problem it's not necessary to say, 'but what if I knew more about how the envelopes were chosen?' You don't know. You're not told. You're handed an envelope. What do you do?"

There are two key factors to the odds of switching --
a. the donor decided how much to put in the envelopes
b. you chose a random envelope.

We know that the donor put \$2 and \$4 in the envelopes or \$4 and \$8. We are not told how he made this decision. Can we focus on (b) only and conclude that the odds are 50-50? Or must we say that the odds cannot be determined unless we know something about (a)? Is it OK to ignore a relevant factor because we lack information about it?

Try a parallel problem: What are the odds that the 1999 Super Bowl champions will repeat in Y2K, given that 20% of past Super Bowl champions have won in the following year? If you were ignorant of football and didn't know that John Elway had retired from the Denver Broncos, your answer might be 20%. On a college exam, this might be the correct answer to the problem as stated. Maybe we are supposed to assume that the propounder of the problem has implicitly guaranteed that all unmentioned factors should be ignored.

However, you would be foolish to bet real money if your analysis had omitted some relevant factors. You'd save money by admitting that you don't know enough to set proper odds.

This brings us to a deep philosophical question: How do you define the concept of probability, when addressing a unique, non-repeatable event? For an answer, see "Foundations of Probability" by L. Savage.

P.S. -- I have difficulty sending messages. The first effort fails to go through, then the second effort brings both copies of the message. Any suggestions?

06-01-1999, 04:10 PM
*Shit!* When I tried to reply, I must have mistyped my password. When I hit "Back," it erased my entire message! Now I have to type it all over again. :(

This is a really interesting thread. I think CK and Needa did a great job explaining the paradox. I think I finally see december's point. At first, I thought SSitter was right that december was being deliberately obscurant by ignoring what was clearly an unspoken assumption in the problem, that all factors not mentioned are deemed irrelevant. (I still do, but now I see she has a point.) That's why, in the liars-and-truth-tellers riddle, for example, you can't just ask, "What color is my shirt?"

December's point, though, is that the envelope problem is even more abstracted from reality than it seems. In reality, we couldn't assume that just because we don't know everything, only the factors we're given affect the outcome. The Broncos example was excellent. The only problem is, in reality, nearly every event is unique and unrepeatable. According to chaos theory, every current in the atmosphere has some slight effect on whether the next coin I toss will land heads or tails (not to mention the chance that someone handed me a double-headed coin). Even though we assume that the effects will balance out in the long run, the chance of this particular coin landing heads up is probably not 50%. Or is it?

Even with the Super Bowl, if you bet on the previous champion at 5:1 odds every year, you'd likely break even. After all, the condition of every player affects the game, but it also affected all the previous games which led to the 20% probability. So in a way, probability theory accounts for all the variation over several events, provided the constants are all accounted for. So if you assume that there is no factor of human nature that would predispose every person to give a set amount, and you assume a set limit, like tubby did, then you could in fact form a prediction for a completely unknown donor. Am I right, here?

What did Savage say, december? How can you tell when probability theory applies to a situation? Are all probabilities really empirical? (We know that every time someone has counted the results of tossing an equally weighted coin, the numbers converged on one another; therefore, the probability must be 50%?) Help me out; this is really confusing!

PS--December, I can definitely sympathize with browser problems! ;) Your last post didn't double, so maybe you've got it sorted out. It's probably ( ;)) your browser showing you a cached image the first time. Instead of resending, just hit "Refresh" next time your post doesn't show up, and see if it updates it for you.

06-01-1999, 06:38 PM
I can't believe nobody picked up on this one:

You have a choice of two identical boxes which measure 12 X 12 X 12 inches. Which would you rather have?
A. Box #1, which is full of 10-dollar gold pieces, or
B. Box #2, which is half full of 20-dollar gold pieces.

RickG observes that:

Depends. Does a well-packed pile of the \$20 pieces take up twice the volume of the same number of \$10-pieces? If so, obviously take the full box. If the coins are of identical size, it doesn't matter which box you take. Intermediate cases need to be treated carefully.

While this is true, it doesn't answer the question. Which box would you choose, given the information that you have? One has to make certain assumptions about what constitutes reasonable behavior on the part of the mint, but I venture to say that only in highly unlikely circumstances would the full box be worth less than the half-full box. In most scenarios it would be worth more. Given the most reasonable assumptions of all, we are talking about a choice between a full box of gold and a half full box. How tough a choice is that?

06-02-1999, 11:05 AM
Alan Smithee wrote: "I thought SSitter was right that december was being deliberately obscurant by ignoring what was clearly an unspoken assumption in the problem, that all factors not mentioned are deemed irrelevant."

Cecil Adams discusses another problem and includes in his answer the words:
"Which box would you choose, given the information that you have?"

In the 2 envelope problem there are two relevant factors, the donor's decision of how much money to put in the envelopes, and our randomised choice of an envelope. Following the suggestions above, we would like to answer the problem "given the information that you have." That is, following the "unspoken assumption in the problem that all factors not mentioned are deemed irrelevant."

The key to this problem, I believe, is that it is mathematically impossible for the donor's choice to be irrelevant. Here's why:

Let's assume that the donor chose a method of putting cash in the envelopes so that his decision is irrelevant. Specifically, we assume that the donor assigned amounts of money to the envelopes so that after you chose an envelope and opened it, it would always be 50-50 as to whether the other envelope had half that amount or twice that amount, regardless of how much money you found in the envelope.

We will show that this assumption leads to a contradiction. There is no mechanism leading to this result! That's why the apparent paradox (described in an earlier post) exists.

To put this assumption in Bayesian terms, we are looking for a prior distribution of the original envelopes such that after you choose an envelope at random and open it, it will be 50-50 as to whether the amount in the other envelope is larger or smaller.
I am asserting that no such prior distribution exists.

To see why, suppose the envelope you opened had \$4. Your assumption says that it should be equally probable as to whether the other envelope has \$2 or \$8. That happens if you were equally likely to be given (2,4) or (4,8) (where the figures in parentheses indicate the amount of money in the two envelopes). In Bayesian terms, the prior probabilities of (2,4) and (4,8) must have been equal.

For example, you could assume that the donor prepared just two sets of envelopes, (2,4) and (4,8), and chose one of the pairs at random. Then, when you see \$4 in the envelope you opened, it would indeed be 50-50 as to whether the other envelope has \$2 or \$8. So far so good.

However, remember we want our assumption to hold regardless of how much money you found in the envelope. This is where the problem begins.

Once you assume that the donor chose from (2,4) and (4,8), then he might have given you the (4,8) and you might have opened the \$8 envelope. So, you must also assume that the prior distribution had equal probabilities for (8,16) and (4,8). E.g., you might assume that the donor chose randomly from 3 sets of envelopes, (2,4), (4,8) and (8,16).

However, by similar reasoning, there would need to be equal probabilities for (16,32), (32,64), etc. Even ignoring the limitation on amounts of money, this argument leads to the conclusion that in the prior distribution a countably infinite number of possibilities all would need equal probability, call it P.
But, if P > 0, then the total probability would be infinity, whereas it must be one.
And if P = 0, then the total probability equals 0, which is also a contradiction.

This contradiction shows that you cannot satisfy the assumption that, after you opened your chosen envelope, it would always be 50-50 as to whether the other envelope has half that amount or twice that amount, regardless of how much money you found in the envelope.
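The boundary problem can be made concrete with a small sketch. Assume (purely for illustration) that the donor's chain of pairs is finite and uniform, say (2,4), (4,8), ..., (32,64); the code below computes, for each amount you might see, the posterior probability that the other envelope is larger:

```python
from fractions import Fraction

def posterior_other_larger(n_pairs=5):
    """Posterior P(other envelope is larger | amount seen), under a uniform
    prior over the assumed finite chain of pairs (2,4), (4,8), ..."""
    pairs = [(2 ** k, 2 ** (k + 1)) for k in range(1, n_pairs + 1)]
    p_pair = Fraction(1, n_pairs)
    seen = {}  # amount -> (total prob of seeing it, prob other is larger)
    for lo, hi in pairs:
        for amt, other in ((lo, hi), (hi, lo)):
            p = p_pair * Fraction(1, 2)  # half chance of opening each envelope
            tot, larger = seen.get(amt, (Fraction(0), Fraction(0)))
            seen[amt] = (tot + p, larger + (p if other > amt else Fraction(0)))
    return {amt: larger / tot for amt, (tot, larger) in sorted(seen.items())}

post = posterior_other_larger()
# Interior amounts come out exactly 1/2, but the smallest amount gives 1
# (the other envelope is certainly larger) and the largest gives 0.
print(post)
```

No matter how long you make the finite chain, the two endpoints refuse to be 50-50, which is the finite-case shadow of the contradiction argued above.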

I must admit that you could satisfy the above assumption by substituting the word "sometimes" for "always." You could assume, e.g., that the donor chose randomly from (2,4) and (4,8). When you opened the envelope with \$4, it was 50-50 as to whether the other envelope had \$2 or \$8.
But, if you had opened an envelope with \$2 or \$8, then the odds would NOT have been 50-50.

IMHO, this assumption destroys any interest in the problem. I believe that the problem as stated wanted to include the assumption, above, but it cannot be fulfilled.

Finally, I must confess that my "proof" above is not written with perfect mathematical rigor, but I do believe that it is correct.

P.S. It was asked what L. Savage said.
He discussed how probability could be defined when the frequentist approach did not apply, say, because one was considering a unique, one-time event rather than repeated trials. One key point of his brilliant book is that a judgmental probability can be consistently defined as personal opinion. The definition of this probability could be deduced in theory from which side a person would take in a series of bets. A person's subjective probability would obey the normal mathematical laws of probability.

However, there would be no contradiction if two different people assigned different probabilities to the same event.

06-02-1999, 11:49 AM
These answers have been fascinating. I am still forming an opinion as to which one is likely right, but I do have some definite views on which are just red herrings. Let's knock out a few of them and focus back in on the main point.

1. Red herring 1: The odds aren't actually 50/50

This is a red herring, because you are assuming information not in the problem.

If I flip a coin and hide it from you, the chances are in reality 100% that it is heads and 0% that it is tails (or vice versa), not 50% each way. However, if I were to ask you odds, or to place a bet, and to repeat the experiment, you would have to calculate a 50% probability of either case. Of course, there is more information to be known (if you peeked, for example), which would certainly change the odds.

To bring it back to the envelopes, yes there is certainly other information you could try to extract that would change the odds. However, the point of the problem as stated is that you only know a single fact: one envelope has twice the money as the other one.

2. Red Herring 2: It doesn't matter if you switch because the expected value of the first envelope is not x, it is 1.5x

Actually, this isn't really a red herring for the problem as I originally stated it -- it does address the issue of just picking up an envelope and then switching it.

However, if you open the envelope (the equivalent of half-peeking), then the \$ in the envelope become the base you are comparing to.

Now that you know the value of the envelope is X (whatever it is), then the value of the other envelope really is a 50% chance of being half that (0.5X) and a 50% chance of being double that (2X), which leads us back to the expected value of 1.25X.

Red Herring 3: This is somehow related to gold coins in a box

Imagine my pleasure at reading Cecil had responded to my thread. Then imagine my disappointment at finding out he responded to a tangent from 2 weeks ago. Sigh.

My bet on the right answer:

I'm betting PTVRoman is right, but I'm not sure why it doesn't match the expected value equation. When you first pick the envelope, you either picked \$x or \$(x+x). You don't know which.

Now there are two possible cases. Either you picked \$x (50%), and by switching, you will add \$x.

In the second case, you picked \$(x+x) (50%) and by switching, you will subtract \$x. This is the right equation to use, clearly, since it gives the right odds (and Ssitter, I think this is where you were going with your "gambling odds" argument).

Key Remaining Question:

However, this still doesn't explain why the expected value function doesn't seem to work properly in the case. Something about how the equation as stated doesn't accurately reflect the problem, but I'm not quite sure what. It seems straightforward: if \$x is the amount in the envelope, then you have either a 50% shot at 0.5x and a 50% shot at 2x, which does work out to 1.25x. What's wrong with the construction?
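One way to probe what's wrong with the 1.25x construction is to pick some concrete prior for the donor (the uniform range below is purely an assumption) and measure the actual average gain from switching. The flaw the simulation exposes: in "0.5x or 2x", the x is not the same number in both branches -- when you hold the larger envelope x = 2S, and when you hold the smaller x = S.

```python
import random

def average_switch_gain(trials=200_000, s_max=100):
    """Average dollar gain from always switching, under an assumed prior
    where the smaller amount S is uniform on 1..s_max."""
    gain = 0.0
    for _ in range(trials):
        s = random.randint(1, s_max)              # donor's smaller amount (assumed)
        mine, other = random.sample((s, 2 * s), 2)
        gain += other - mine                      # what switching actually changes
    return gain / trials

avg_gain = average_switch_gain()
# The average gain hovers around 0, not the +0.25x the naive equation predicts.
print(avg_gain)
```

The equation averages over two scenarios while silently holding x fixed, but the two scenarios assign different values to x; averaged correctly over the pairs, the gain from switching is zero.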

06-02-1999, 04:51 PM
Tubby writes: " Again, just knowing that one envelope has twice the money of the other envelope, the chances may not be 50%/50%, but they've got to be pretty close."

Tubby's probability assessment is fine, as far as it goes. In fact, there would be no mathematical inconsistency if he believes that when \$4 comes up in envelope #1, then the odds of \$2 or \$8 in envelope #2 are exactly 50-50.

However, what happens when Tubby considers ALL the possible amounts of money that might show up in the envelope #1? It is possible to expand his beliefs into a complete set of probabilities, which would cover every amount of money that might be found in envelope #1. Infinitely many systems of probabilities for these envelopes could be created, consistent with the laws of probability.

However, one cannot expand into a system of probabilities in which the chances would ALWAYS be 50-50, regardless of how much money is found in envelope #1. No such system of numbers exists that satisfies the laws of probability. This is GOOD, because it reconciles the apparent contradiction.

The apparent paradox of this problem says that:
1. You should always switch, since it's 50-50 and you have larger amount to gain than to lose.
2. It's irrelevant to switch, since Envelope #2 is no different from Envelope #1; you selected one at random.

There's no contradiction because it's NOT ALWAYS 50-50. That would be mathematically impossible.

Or, put it another way. Suppose you see \$X in envelope #1. If your subjective probability for \$X is 50-50, then you should switch. But suppose your subjective probability corresponding to \$X is that odds are greater than 2 to 1 that envelope #2 has less money than envelope #1 (perhaps because \$X was a very large amount of money). Then you shouldn't switch. But, in the latter case, it's no longer true that the two envelopes are identical.

BOTTOM LINE -- SHOULD YOU SWITCH? Is there a right answer? Are there "true" odds that switching envelopes will produce a greater amount or lesser amount of money? First of all, these probabilities are not objectively measurable quantities, like mass or length. At best, we can describe our own subjective probabilities (see prior post). Each of us may have different subjective probabilities.

So, is there some "natural" set of subjective probabilities? I'd say not, for two reasons.

There's no mathematical answer. The only "natural" system of probability beliefs would be that the odds are always 50-50 regardless of the amount of money in envelope #1. Since this is mathematically impossible, there's no other canonical way to say what the system of probability beliefs ought to be.

There's no real world answer, either. The Monty Hall problem is an abstraction of a real-world situation, but the two envelope problem isn't. In the real world, people haven't given away money this way. So, there are no natural real-world odds either.

06-02-1999, 04:56 PM
I'm betting PTVRoman is right, but I'm not sure why it doesn't match the expected value equation. When you first pick the envelope, you either picked \$x or \$(x+x). You don't know which.
First, just for my ego's sake, I want to point out that I posted the same solution four hours before PTVRoman, according to the time stamps anyway. Second, I'm surprised that confusion has persisted after my and PTVRoman's solutions should have cleared everything up. I blame the appearance of "Bayesian" arguments, which have as much place in discussing the science of probability as a shaman does on the board of the AMA.

The fact is that as long as you are choosing the envelopes randomly, and not being influenced by anyone who has knowledge of what's inside them, switching or not switching will be irrelevant. On average, when you choose a good envelope it will contain twice as much money as when you pick a bad envelope. This is true without the need for you to make psychological diagnoses of the donor's "upper limits". Therefore, on average, you are risking the same amount by switching or not switching. I admit that you might get confused by looking at a single trial, but you have to look at a whole series to make any meaningful statement about probability.

06-03-1999, 12:04 AM
P.S.: December, that is an interesting point about 50%/50% chances not being possible because then all pairs must have been possible.

The implications of that don't make sense, though. Again, just knowing that one envelope has twice the money of the other envelope, the chances may not be 50%/50%, but they've got to be pretty close. The reason is this: you are saying that if you receive an envelope with \$8 in it, you would be less likely to switch than if you received an envelope with \$4 in it, because the \$8 is somehow marginally closer to the "upper limit". However, considering that we don't know the upper limit, we can assume that if we received a reasonable amount of money (\$4 and not \$5 trillion, for example), the chances are still pretty close to 50/50. Our estimate of the chances of the second envelope being higher would go down the closer the first envelope came to our estimate of "the limit", but at low \$ values, our estimates wouldn't deviate significantly enough from 50% to change the answer. (i.e., even at 60%/40%, you should still switch).

Thus, even if we grant your notion that we should take into account external factors, they don't change the problem enough to resolve the apparent paradox.

06-04-1999, 12:19 AM
Mr. Charles, my apologies, you were in fact several hours early with the right way to think about the problem.

However, I'm not entirely certain I agree with the analogy between Bayesian expected value analysis and shamanism. It's simply a mathematical way of calculating chains of events. I'm just not sure why it falls apart here (and theoretically, it shouldn't, so clearly there is some trick in the problem's construction which we have as yet been unable to tease out).

Finally, december, as far as I can tell, your answer is "when faced with any situation involving money, the answer is unknowable". Given that you know ahead of time that one envelope contains twice the money of the other one, I am not sure how you can claim that the chances aren't exactly 50% that you picked the lower one as opposed to the higher one. The information about the amount of money in the envelope shouldn't change your estimate of those chances if, as you claim, it actually provides no new information given the unknown set of possible envelope pairs.

I think we can conclude that we now officially know how the problem should be approached (the x and x+x model discussed by Greg Charles, PTVRoman, and Ssiter). Now, how can we tease out the false construction in the expected value equation?

06-04-1999, 08:36 AM
For decades a "civil war" has raged between Bayesian and Classical statisticians. Judging from Greg Charles's latest post, he may be a Classical statistician. I confess to being a Bayesian.

Tubby raises a good point. When does one have enough information to be willing to risk one's money? As a reinsurance actuary, my job is to bet the company's money on various random events. One part of the job is to create a model that forecasts a profit. Another part is to decide whether the model is reliable enough. People sometimes have different opinions, depending on which facts they focused on and how they analysed them. In the short run, a decision may be a matter of opinion. However, in the long run, some companies go bankrupt while others prosper. So far, we're in the latter group.

To get a flavor of how to evaluate the validity of a model, try the following two multiple choice questions:

Question #1: You're in a gambling casino and offered a chance to play a new game. All you know about the game is that if you bet \$100 you will either lose \$50 or win an additional \$100. Should you play?

A. You should play the game, because the amount you could win is twice the amount you could lose. Since you know nothing about the odds, you can assume that they're 50-50.

B. You shouldn't play. You should assume that the casino has arranged the odds so that they're in the house's favor.

C. You shouldn't play because you don't know enough.

Question #2: There is a business opportunity where your potential gain is twice your potential loss. Should you do the deal?

A. Do it. You have more to gain than to lose.

B. Don't do it. Someone else may know more than you do. The deal may favor that person.

C. Don't do it. You don't know enough.

My answers to both #1 and #2 would be B or C.

So when SHOULD you risk money on a deal? In my opinion, the time to risk money is when you are the expert, who knows more than the next person.

06-04-1999, 09:01 AM
Not to be contrarian, but I still cast my vote with Holger and those who agree with him.

He identified a strategy which works under a broad range of reasonable probability distributions of amounts in the envelope pairs: Pick a number that you feel is somewhere near the high end of your estimate of the offerer's budget. If the first envelope contains more than that, keep it; if less, switch.

We've mainly discussed the case where S (the amount in the smaller envelope) is uniformly distributed between 1 and 100. Of course, you don't know the actual probability distribution. But Holger's strategy works in this case, and also for other reasonable probability distributions, described below.

To demonstrate this for the uniform distribution case, I would propose modifying PTVRoman's excellent suggestion of a spreadsheet as follows:

Add a cell containing the offerer's actual top value for S (say 100);

Add a cell containing your estimate of the top value of S (initially 100);

Change the final column formula to keep the original amount if the value of X (the amount in the first envelope) is greater than your estimate of the top value of S, otherwise switch.

If you run this trial repeatedly, you'll see that this strategy increases your payment over the long run.

But what if your estimate was bad? Change your estimate of S(max) to 10. You're still better off, as you are if you estimate S(max) = 150. If you estimate S(max) to be 200 or more, you wind up switching all the time, so the advantage disappears.
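The spreadsheet experiment described above can be sketched in code. (Assumptions, as in the discussion: S uniform on 1..100; the trial count is arbitrary.)

```python
import random

def threshold_strategy(threshold, trials=200_000, s_max=100):
    """Average payoff when you keep the first envelope only if it exceeds
    your threshold, under an assumed uniform prior for S on 1..s_max."""
    total = 0.0
    for _ in range(trials):
        s = random.randint(1, s_max)
        mine, other = random.sample((s, 2 * s), 2)
        total += mine if mine > threshold else other
    return total / trials

always_switch = threshold_strategy(10 ** 9)  # threshold never met: always switch
with_cutoff = threshold_strategy(100)        # keep anything above the S(max) estimate
# The cutoff strategy pays noticeably more on average than blind switching.
print(always_switch, with_cutoff)
```

With S uniform on 1..100, blind switching averages 1.5 * E[S] = 75.75, while the cutoff-at-100 strategy keeps every envelope that can only be the larger one and averages well above 90, matching what the spreadsheet shows.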

So, tubby, you were right to be troubled by the expected value issue.

The probability distribution could also be more like a sweepstakes model. Say your local gambling den offers you an envelope of casino bux as a thank-you for spending an evening enduring a presentation on the pleasures of visiting their establishment. But they've actually prepared envelope pairs containing \$S and \$2S, and offer each attendee the privilege of switching. Should you?

You might reasonably guess that the distribution of S is loaded up with smaller amounts, with the probability thinning out as S increases. In this case, the conditional probability that X = 2S is actually > 0.5 (i.e., the frequency of (.5X, X) envelope pairs is greater than the frequency of (X, 2X) pairs). But Holger's strategy still works, except that your criterion amount should be set somewhere below your S(max) estimate, depending on the steepness of the distribution.

06-24-1999, 03:28 PM
To address this problem from the point of view of "ideal" logic or "ideal probability," the error that has been being made is in trying to *quantify* the chance of getting a better envelope by switching. (In a "real world" scenario, all that stuff about the motivations of the game creator, estimated limits of the possible payout as a way to add information to the game, etc. do apply. But I want to talk about the "pure" game.)

Essentially, all the information about numbers and chronology is a red herring. This game can be restated as follows: "Here are two envelopes. The first one has some money. The second one has some money. Pick one envelope or pick the other." Everything about ratios, counting the money in one envelope or not counting it, switching envelopes rather than just picking one or the other, etc. is INSIGNIFICANT information - it does not affect the game. (In the real world, of course it applies. You count the money in the first envelope, if you're happy you leave.)

Look at it this way - suppose you could choose to play one of two variants of this game, one where the ratio was 2x and the other where it was 50x. Does that "change" the odds of getting the envelope with more money? You have calculated that the "expected value of the second envelope is 1.25M [=(.5M+2M)/2]" in the first game - but in the second game isn't it 25.01M? Obviously that's ridiculous - in either game you have the same chance of picking the envelope with the greater amount of ducats.

Suppose we added MORE information as follows: in the first game, the envelopes contain \$45 or \$90, in the second they contain \$2 or \$100. Would anyone choose to play the second game simply because the expected-value figure for switching is almost 20 times larger (25x vs. 1.25x) under its rules? I bet most of you would still play the first game - and your calculation would be based on "real world" calculations involving your level of satisfaction with the proposed payouts, regardless of the "odds."

BTW, by the same logic, showing the contestant one of the remaining two doors adds no relevant information in the Monty Hall problem on the other thread.

06-25-1999, 01:52 AM
"BTW, by the same logic, showing the contestant one of the remaining two doors adds no relevant information in the Monty Hall problem on the other thread. "
~Sayeth David Forster

You should switch. That's all I've to say.

------------------
"All I say here is by way of discourse and nothing by the way of advice. I should not speak so boldly if it were my due to be believed." ~ Montaigne

06-25-1999, 11:35 AM
Suppose the envelope you opened had \$8. You now know that the pair of envelopes originally contained 4&8 or 8&16. Consider two pairs of events:

A. Before an envelope was opened, the pair of envelopes originally contained 4&8 or they contained 8&16.

B. After an envelope was opened and \$8 was seen, the other envelope contains \$4 or it contains \$16.

DavidForster wants to make a decision about B without considering A. However, he is effectively making an assumption on A, because the probabilities of the two cases in A turn out to be equal to the probabilities of the two cases in B. (See PROOF below.)

After seeing \$8, DavidForster assumed that the chances of the other envelope having \$4 or having \$16 were 50-50. (This can be seen because he used the 50-50 probabilities to calculate his expected value.) Therefore, he implicitly assumed that the original probabilities were also 50-50.

If the probability of 4&8 is p, the expected value of switching is p*4 + (1-p)*16. This expression will be greater than 8 whenever p is less than 2/3.

In summary:

1. If you think that the original probability of 4&8 was less than 2/3, then you should switch envelopes.
2. If you think that the original probability of 4&8 was greater than 2/3, then you shouldn't switch.
3. To make a sensible decision, you should not ignore the original probabilities.

PROOF using Bayes Rule:
Suppose p was the original probability that the envelopes contain 4&8 and (1-p) was the probability that they contained 8&16. After an envelope is randomly selected and \$8 is seen, the conditional probability of \$4 in the other envelope is

p*.5/[p*.5 + (1-p)*.5] = p
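The 2/3 threshold stated above is easy to verify numerically (a sketch using exact fractions):

```python
from fractions import Fraction

def expected_switch(p):
    """Expected value of switching away from the observed $8, where p is the
    prior probability the pair was (4, 8) and 1 - p that it was (8, 16)."""
    return 4 * p + 16 * (1 - p)

break_even = expected_switch(Fraction(2, 3))
# break_even equals 8, the amount you'd keep, so p = 2/3 is exactly the
# point of indifference; below it switching pays, above it keeping pays.
print(break_even)
```

Plugging in p = 1/2 gives an expected value of 10 (switch), while p = 3/4 gives 7 (keep), in line with points 1 and 2 of the summary.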

07-08-1999, 09:53 AM
ARGH!!! Good LORD!

A Cecil sighting and only one person commented?!! NOBODY NOTICED?!
Where was I? Dangit. Which way did he go?!

Btw.. he was the only one who answered my riddle correctly. My life is complete.

07-08-1999, 01:19 PM
Nickerz, I was disappointed in Cecil's answer. The point of the problem is not simply to get a "correct" answer, but to get some insight into probability theory.

How would you answer this problem (which I believe is roughly equivalent):

You are offered an opportunity to play a gambling game where the amount you could win is twice the amount you could lose. You do not know what the odds are. Should you play?

07-09-1999, 01:49 PM
I think Cecil's answer was complete and correct. A mark of true genius is the ability to sniff out a red herring, and act like you didn't smell anything. The denomination of the gold coins has nothing to do with their value. Their value is set by the mass of the coins. Even if these coins were minted during the gold standard, it seems highly likely that a full box of gold disks would be more massive than a half full box of larger disks.

07-23-1999, 08:03 PM
The denomination of the coins has everything to do with their value. Their value is not necessarily set by the mass of gold. Their "real" value is set by whichever is greater - denomination, or gold value by mass. That is, if a \$20 coin contains 1 oz of gold, and gold is selling for \$25/oz (silly numbers, obviously), then the coin is to me a lump of gold worth \$25, and I'll treat it as such. But if gold is selling for \$15/oz, then the coin is just a coin, the material it's made of is irrelevant, and it's worth \$20 to me.

Cecil's answer assumes that at least one of the following is true: (1) Only the volume of gold - not the denomination of the coins - matters. In that case, a full box of gold is always worth more than half a box of gold. Or: (2) \$20 gold pieces occupy more volume per coin than do \$10 gold pieces. In that case, even if it's the denomination (rather than the amount of gold) that matters, there are at LEAST twice as many 10's in a full box as there are 20's in half a box. I believe this is what he means by "assumptions about what constitutes reasonable behavior on the part of the mint".

Assumption (1) is certainly plausible, and (2) is pretty likely - but only because we happen to have "outside information" that that's the way mints typically make GOLD coins. However, they certainly don't make ALL coins that way. For example, I'd much rather have half a box of dimes than a full box of nickels (and that has nothing to do with the materials they're made of, which are of trivial worth for both).

Thus, Cecil's assumptions, though certainly not unreasonable, firmly remove the question from the realm of logic problems, and place it into the rather different realm of theorizing what decisions have been made by a particular group of people (the coin designers at the unidentified mint) based on our experience with similar decisions made by similar groups (other mints we're familiar with). If we're to rely merely on logic, and not our sense of what a mint is likely to do, then the answer to the question is "not enough information given."

07-24-1999, 03:23 PM
The general point Cecil was addressing seems to be how one should answer an inadequately-defined question. He says, "...Which box would you choose, given the information that you have? One has to make certain assumptions..."

When confronted with a vague question, Cecil's approach is to make assumptions that permit an answer. Not only is Cecil an expert on every topic, but he is an expert's expert on the topic of how to answer questions.

However, there are situations where the greater wisdom may lie in recognizing that the facts provided really offer no answer. Arbitrary choices would offer trivial examples. E.g., a hotel has just two vacant rooms available, one on the 15th floor and one on the 14th. Which should you choose? Or, suppose one of the rooms was on 13? It's easy to say you shouldn't avoid the room on 13 because of "bad luck", but knowing only the floors is not enough information to prefer one room to the other.

There are real-world questions that must be fleshed out with your beliefs or assumptions. E.g., is a National Health plan a good idea? The answer depends on how you assume the plan will be constructed, how well you assume it will work, what you believe the alternative would be, etc.

BTW note that Cecil is better served by adding assumptions that lead to an interesting discussion. It would be boring if his column frequently consisted of some question followed by a response of, "That question includes too little information to be answered." (By the way, would this be an example of a "stupid question?")

08-03-1999, 04:33 PM
I'm a bit nervous about December's use of Bayesian probability theory on the two envelope problem.

I leave aside 'game show' circumstances (where you have some extra information).

I state that by opening an envelope, you don't learn anything, so the probabilities don't change. It's 50-50 that you'll gain or lose the SAME AMOUNT by switching.

If you find \$8 in the first envelope you calculate that you can get either \$4 or \$16, and switch. But now imagine you decide to switch first, then open the envelope. You find \$8, do the same calculation and swop back to the first envelope.

What this means is that your calculation is wrong, not that it's worth switching!

Try this. At some point you open an envelope and find \$8. Now the envelope stuffer either (50%) put in a total of \$12 (8+4) or (50%) put in \$24 (16+8). In the first case, you would win ON AVERAGE \$6 over several plays, so swopping or not swopping gains or loses \$2 (8-6=+2, 4-6=-2).
In the second case, you would win ON AVERAGE \$12 over several plays, so swopping or not leads to a \$4 profit or loss.
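If it helps, the argument above is easy to check by brute force. Here's a minimal Python sketch (the dollar amounts are just the ones used in the post): conditional on what the stuffer actually did, always-switching and always-keeping average out to exactly the same thing.

```python
import random

def average_winnings(pair, switch, trials=100_000):
    """Average payout when the stuffer used a fixed pair of amounts."""
    total = 0
    for _ in range(trials):
        held, other = random.sample(pair, 2)  # which envelope you were handed
        total += other if switch else held
    return total / trials

# Scenario 1: the stuffer put in $12 total ($4 and $8).
# Scenario 2: the stuffer put in $24 total ($8 and $16).
for pair in [(4, 8), (8, 16)]:
    keep = average_winnings(pair, switch=False)
    swap = average_winnings(pair, switch=True)
    print(pair, keep, swap)  # both strategies hover around the mean of the pair
```

In the $12 scenario both strategies average about $6; in the $24 scenario, about $12 - exactly the per-play averages described above.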

08-05-1999, 11:33 AM
Not to beat a dead horse (whap!), but there is only one chance here and no information on the motives of the stuffers. Here is my point of view: I would take the swap because, besides always wondering about the other envelope, I would lose much less than I would win. Half as much, in fact. For example, if my envelope had \$4 and I traded and got \$2, I lost \$2. But if I traded and got \$8, I gained \$4. \$2 loss -- \$4 win. My expected win for trading is greater than my expected loss. In other words, you would be just as likely to lose as win, but winning would be sweeter than losing is bitter.

Of course I could just kick the stuffer in the shins and take both envelopes. Serves them right for provoking me so.

dan

glee
08-05-1999, 04:25 PM
Danno, you're right about all the side effects - you only have one chance; you might get annoyed with the envelope stuffer, etc.
But I think your maths is wrong. Just because you know how much is in the envelope doesn't mean you should switch! You don't 'win' more than you would 'lose'. Please see my earlier posting for my analysis.

Danno
08-06-1999, 08:41 AM
Thanks glee, but I'm not sure you understand what I'm getting at. You would not win more times than you would lose; winning is just more rewarding than losing is punishing. Let's go over my math - please point out any mistakes that you see:

Envelope that you pick has \$8

If you win you get \$16

16 - 8 = 8

so \$8 is your expected win

If you lose you get \$4

8 - 4 = 4

so \$4 is your expected loss.

8 > 4
expected win > expected loss

Therefore, take the second envelope.

glee
08-06-1999, 10:41 AM
Danno,

I agree winning is more fun than losing, but I don't agree that you make money by switching (though it's tough to prove it!).

Imagine you and a friend both have envelopes. One has twice the amount of the other one. You both secretly look in your envelope. You both decide to switch! One of you loses, one of you gains - and it's the SAME amount of money.

What this means is that your calculations are wrong, not that it's worth switching!

Try this maths. At some point you open an envelope and find \$8. Now the envelope stuffer either (50%) put in a total of \$12 (8+4) or (50%) put in \$24 (16+8). In the first case, you would win ON AVERAGE \$6 over several plays, so swopping or not swopping gains or loses \$2 (8-6=+2, 4-6=-2).
In the second case, you would win ON AVERAGE \$12 over several plays, so swopping or not leads to a \$4 profit or loss.

Hope this satisfies you - it's a great problem.

glee
08-06-1999, 10:52 AM
Danno,

I agree that winning is more fun than losing, but not that the amount you'd win is more than the amount you'd lose.

Try this:

You and your mate both have an envelope. One contains twice as much money as the other. You both secretly peek. Following your logic, both of you swap! Now one loses and one gains and it's the SAME amount.

You need to consider the average over several trials to get the correct odds. Otherwise it's like tossing a coin once and saying it always comes up heads.

See my earlier posting at the end of page 3 for more maths.

-----------------

glee
08-06-1999, 10:55 AM
Sorry about the near double posting - my browser denied the first message had got thru!

december
08-07-1999, 12:15 AM
Glee, you're right that the amount of money that could be gained by switching is twice as much as the amount that could be lost.

Danno, you're right that if one were to play this game repeatedly, always switching would have the same expected value as never switching.

This apparent paradox can be resolved by focusing on the probabilities. (Bear in mind that Expected Value is based on the amounts as well as the probability of achieving each amount.)

When you see \$8, you know that the original envelopes contained a total of \$12 or \$24. The problem doesn't tell you the probabilities of these two events. It may be reasonable in this case to assume that it's 50-50. However, that does NOT mean that you can ALWAYS assume that the probabilities are 50-50, regardless of the amount of money you see. Here are three reasons why not:

1. Suppose the amount of money in the opened envelope was an odd number. Then, you know that you have opened the smaller of the two envelopes.

2. Suppose the amount of money in the opened envelope is very large. Then it may be more likely that you opened the larger of the two envelopes.

3. If one assumes that the probability is always 50-50, then a mathematical contradiction occurs. The contradiction is precisely that you would both simultaneously be right. That is, you could prove that switching doesn't help and also that it does help.

glee
08-07-1999, 04:32 PM
December, I think you've got Danno's and my positions reversed.
In any case I'd like to concentrate on the pure problem (I agree it makes a difference if you're on a game show etc.)
I say it doesn't make any difference to either your chances of winning or the amount you win / lose whether you open the envelope or not!
Your chances of winning on any one play are 50%. The amount you win or lose in the long run is \$0.
The maths is precise (it always is) - the apparent contradictions come from either a loose statement of the problem or a misunderstanding of the maths.
(Please don't be offended - I myself find this stuff both difficult and fascinating).
Try thinking about it like this. You hold one envelope. Another guy holds the other one. (He doesn't do anything - you make all the choices).
You peek in your envelope and find \$8. You decide to swap. Suppose the other guy has \$16. You 'win' \$8. But the other guy 'loses' \$8. The game balances.
Now imagine you're the one with \$16. You swap (according to your theory). Now you lose \$8 and the nonentity wins \$8. The game is still balanced.
Perhaps you have chosen bad examples, because looking in an envelope and saying you'll always swap leads nowhere.

08-08-1999, 08:36 AM
You guys are unbelievable. CKDextHavn correctly explained this problem in May. It's August now. All this baloney about how much you'd "win" by switching is irrelevant. Go outside and play.

glee
08-08-1999, 07:18 PM
I am honoured that the man himself has answered my posting (even if it was only to tell me I'm months late!)

I realise the right answer appeared in May, but December doesn't agree, so I thought I'd try my hand at it.

Also I'm relatively new at this Internet stuff. On another board, because I was also a recent poster, most people assumed I was just an alias of another user.....

P.S. I really like your column - there's far too much waffle and pseudo-science elsewhere on the Internet.

NanoByte
08-08-1999, 11:53 PM
I don't know what all the answers given here were to the problem tubby originally posted, but nothing in his statement of the problem suggests that the original choice of envelope contents was anything other than 50/50. So I say that, whether or not you look at the contents of the first envelope (this is not quantum mechanics), you have no reason to believe there should be any preference, probability-wise, as to which envelope you should accept. Period.

Ray

december
08-09-1999, 03:14 PM
Cecil, my rapture that you're reading my posts is tempered by your endorsement of CKDextHavn's probabilistic naiveté.

CK started well on 5/18 by assuming that the two envelopes contain X and 2X. The contestant has two unknowns. He doesn't know whether he opened the larger or smaller envelope, and he doesn't know the value of X. A model is needed that reflects both unknown quantities. However, CK's 50-50 probability assumption focused only on whether the contestant has the larger or the smaller envelope. This limited model was insufficient to resolve the contradiction, so CK made up an excuse:

"The seeming paradox arises because you are applying the expected value to a single trial...you have to think of it as the expected results after many, many trials."

(See my 5/26 post for an explanation of the paradox and its proper resolution.)

On May 19, CK again started well by saying:

"The problem is that your perception of the probabilities and the actual probabilities are different."

By "Your perception of the probabilities" CK was referring to the contestant's subjective probability, I assume. This is good.

However, there are no "actual probabilities."
Probability is not an objective physical quantity like mass or temperature. Probability and expected value are defined relative to a particular model or set of assumptions. (An exception might be a special case where symmetry makes the probabilities clear, like dice. Even here, one makes the assumption that the dice are not loaded.)

In the two envelope problem, subjective probability is the only type of probability available.

CK later says:

"Now, suppose the contestant is allowed to open the envelope and switch if he wants. The fact is that the contestant gets no new information by having opened the first envelope."

Not necessarily. The contestant may gain information by opening the envelope. E.g., if the envelope contains a very large amount of money, the contestant is likely to have chosen the larger one. Or, if the amount of money is an odd number, then the contestant certainly has opened the smaller envelope.
Later, CK says:

"This is an erroneous calculation, because the chance of \$2 is in fact zero, but the contestant doesn't know that. "

In a trivial sense, there was never any randomness, since the donor had already decided how much to put in the envelopes,
and the contestant had already chosen one. However, both of these events are unknown to the contestant, so he can use probabilities to discuss and analyze them.

BTW, I am a former Chairman of the Examination Committee of an Actuarial Society, so my posts on this topic ought to have some validity.

December

glee
08-09-1999, 05:54 PM

Probability is the % chance of something happening. It's derived mathematically. In the case of something straightforward like coin-tossing or the dice game craps, you can work it out precisely. There are more complicated cases, such as predicting the result of something from a sample, where you express the chances to a level of confidence.

There's no mathematical concept called 'subjective probability'. You can watch gamblers saying 'Red's come up on the Roulette wheel 5 times - it must come up Black next spin'. Ignoring zeroes on the Wheel, the chances are still 50%, but the gamblers don't understand that. This mistaken belief could certainly be called 'subjective probability', but it's nothing to do with maths!

This envelope game turns out to be quite simple in probability terms, but is very easy to misunderstand initially.
I already agreed that if you're on a game show, there might be reason to change the probability. But with two envelopes containing x and 2x (OK, I also state x is not odd!), then there is exactly a 50% chance of getting the larger amount and 50% of the smaller. Looking in an envelope doesn't change these probabilities at all.

That's it, honest!

P.S. You mention your post of 26/5. It states:
"1. The assumption that switching is always 50-50 leads to a contradiction. It would imply that one would get a better result by always choosing to switch envelopes."

Switching is always 50-50; you can gain (2x-x) = x, or lose (2x-x) = x. The 'contradiction' is just in your thinking. WHY does this imply you get a better result by switching?

"2. The reason I said that this problem could not be tested is the same reason. In order to set up a test, you would need to know (or assume) what procedure the donor used to decide how much money to put into the envelopes originally."

OK, he used a RANDOM procedure. This means that your chances of having either x or 2x are equally likely i.e. 50%. This is mathematically acceptable (honest!).

"The problem is that your perception of the probabilities and the actual probabilities are different.

By 'Your perception of the probabilities' CK was referring to the contestant's subjective probability, I assume. This is good."

I'm sure CK can defend himself, but while I'm here let me say I'm 100% sure that CK means YOU, December, have a perception that is different from the actual one. Not good, I'm afraid.

Finally, although you held a distinguished post, it doesn't mean you're right. Even Cecil can make a mistake (am I allowed to say that?!). The key to this is reading the arguments carefully.

08-09-1999, 08:29 PM
Hey, December, I have a Ph.D. in math and I am an actuary (although I never wanted to be on one of those exam committees) so let's not bandy credentials, eh?

The major point is one of perspective. And I am excluding the idea that you can tell by size ("If you open the envelope and it has a zillion dollars, you know the other is less")... that's kind of like assuming that the coin falls on its edge. The problem assumes that the contestant doesn't know the rules for determining the amounts in the envelope, and that opening one envelope won't give him a clue about the other.

From the perspective of the person who has set up the envelopes, there is X in one and 2X in another, and the expected value is 1.5X. This is pretty obvious, and this is a reflection of reality as well.

The contestant however doesn't know this. He is forced to approximate probabilities, based on insufficient information. He opens an envelope; let's say it contains X (he has the smaller amount, but he doesn't know that.) He knows that the other envelope could contain 2X or could contain 1/2X. He estimates that his expected value is 50% (2X) + 50%(1/2 X) = 1.25X. This is incorrect, of course, but it's the best guess he can make with the information he has.

It's incorrect because the 50% is not correct. There is no play of the game that will give him 1/2 X.

The same logic applies if he has originally chosen the 2X (the larger envelope.) He estimates his expected value as 50% (X) + 50% (4X) = 2.5 X (which is actually the same estimate, it's still 1.25 times the contents of his envelope.)

Now imagine a large number of players, all facing X and 2X (but they don't know that). Whether a player switches or not, the group of players will (on average) come out with 1.5 X, the mean amount. They will have calculated an expected value of 1.25 X but that is (as noted) based on insufficient information.

It seems to me that the sensible analysis before picking an envelope is for the contestant to note that he has a 50% chance of winning (getting the high amount.) Opening one envelope does not alter those odds. The calculations of 1.25X are bogus.

... but what are the odds of two actuaries being on this Message Board?

[Note: This message has been edited by CKDextHavn]

glee
08-10-1999, 09:20 AM
I must try to stop doing this - but it's fun!

OK, I'll take on 2 actuaries (and as I approach retirement I am grateful for your hard work).

The envelopes contain x and 2x. There are no outside clues.
Your expected win is (x+2x) / 2 = 1.5x.
Here's where I think the misunderstanding creeps in:
If you have the 'x' envelope, you can win if the other envelope has '2x'. Total win = 'x'.
But if the other envelope contains 'x/2'....
stop! that's the incorrect assumption!

The only true possibilities are:
If you have the 2x envelope (50%), then the other one contains x.
If you have the x envelope (50%), then the other contains 2x.

There is no case of x in one and x/2 in the other.

I know CKDextHavn has said this already- I'm just trying to express it differently so December is satisfied.

P.S. My nephew is an accountant - and so is his girl-friend...

december
08-10-1999, 01:21 PM
CK -- thanks for identifying your background. You inspired me to find the relevant journal article, which is from The American Statistician, Volume 46, No. 4, November 1992:

Ronald Christensen and Jessica Utts: Bayesian Resolution of the ``Exchange Paradox'' .... 274--276

http://www.math.utah.edu/ftp/pub/tex/bib/toc/amstat.html#46(4):November:1992

The authors' view is similar to mine. Here are a few quotes:

"In this article we present a paradox that can be used to illustrate Bayesian principles in the classroom. The paradox is also resolved using a frequentist argument and illustrates how the misapplication of a symmetry argument causes problems."

"One of the arguments in favor of using Bayesian methods is that prior assumptions are made explicit rather than being incorporated implicitly into the solution of a problem."

"The paradox of this problem is that the rule indicating that one should always trade is intuitively unreasonable, while the method of arriving at the rule seems very reasonable."

"The conclusion that trading envelopes is always optimal is based on the assumption that there is no information obtained by observing the contents of the envelopes. From a Bayesian perspective, the key to a successful analysis is in recognizing the potential information to be gained from the observation.

Mr Thin Skin
08-10-1999, 01:37 PM
You probabilists always give me headaches. Below is Bayes's Theorem:

P(A|B) = P(A)P(B|A) / (P(A)P(B|A) + P(NOT A)P(B|NOT A))

To use it, you'd need to know P(A), P(B|A), and P(B|NOT A). Where do those come from?

For that matter, what's A and B?

december
08-10-1999, 03:13 PM
Mr Thin Skin:

Suppose you opened an envelope and found \$8. Then the other envelope had \$4 or \$16.

A = the event that the original pair of envelopes had \$4 and \$8.

Not A = the event that the original pair of envelopes had \$8 and \$16

B = the event that you found \$8 when you opened the envelope.

You're seeking P(A/B) -- that is, given that you found \$8 in your envelope, you want to know the probability that the other envelope has \$4 rather than \$16.

The use of P(A) and P(NOT A) in the equation explicitly addresses the original probabilities of the two envelopes having \$4 and \$8 or having \$8 and \$16.

Prob(B/A) = 1/2 That is, if the two envelopes contained \$4 and \$8, then your chance of opening the one with \$8 was 50%.

Prob(B/NOT A) = 1/2. That is, if the two envelopes contained \$8 and \$16, then your chance of opening the one with \$8 was 50%.

I'm having a little trouble sorting out all the parentheses in your expression. But, in this case, since P(B/A) = P(B/NOT A) = 1/2, Bayes Rule will simplify to

P(A/B) = P(A).
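December's arithmetic can be checked mechanically. Here's a small Python sketch of Bayes' rule with the prior left as a free parameter (the concrete events A and B are the ones defined above): since both likelihoods are 1/2, the posterior collapses to the prior, whatever the prior is.

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """Bayes' rule: P(A|B) = P(A)P(B|A) / (P(A)P(B|A) + P(not A)P(B|not A))."""
    numerator = prior_a * p_b_given_a
    return numerator / (numerator + (1 - prior_a) * p_b_given_not_a)

# A = "the pair was $4 and $8"; B = "the opened envelope held $8".
# Whichever pair was used, the chance of opening the $8 envelope was 1/2,
# so P(B|A) = P(B|not A) = 1/2 and opening the envelope teaches you nothing
# beyond what your prior already said.
for p in (0.1, 0.5, 0.9):
    print(p, posterior(p, 0.5, 0.5))  # posterior equals the prior in each case
```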

C K Dexter Haven
08-10-1999, 10:22 PM
BTW, I trust you all understand that this game could never arise. Suppose that we must use an integer number of dollars (the same argument works if we have to use an integer number of cents), and that we are in charge of putting the money in the envelopes.

Note that we could never put an odd number in an envelope. If a contestant opened an envelope with \$3, he would know that he must have the smaller amount, since the other envelope could not hold \$1.50.

Now, what's the smallest amount that we could put in an envelope?

(a) Well, obviously, we could not use the situation of \$1 and \$2; same logic. If the contestant got the envelope with \$1, he'd know that he had the smaller amount, and he'd know to switch to win the maximum.

(b) We could not use \$2 and \$4, either. Why? Because if the contestant opened the \$2 envelope, he'd know that the other envelope couldn't contain \$1 (see (a)) so it must contain \$4, so he would know he had the smaller amount, and he'd know to switch and win the maximum.

(c) We could not use \$4 and \$8, because if the contestant opened the \$4 envelope, he'd know the other envelope couldn't contain \$2 (see (b)), so he would know he had the smaller amount, and he'd know to switch and win the maximum.

(d) We could not use \$8 and \$16, because if the contestant opened the \$8 envelope, he'd know the other envelope couldn't contain \$4 (see (c)), so he would know...

So, in fact, the game itself is a paradox that could never be played.

December -- don't tell them! Let them figure it out for themselves!

tubby
08-11-1999, 12:33 AM
CKDextHavn, very interesting point.

Now let's assume they are checks and they will be rounded to the nearest cent.

TheIncredibleHolg
08-11-1999, 04:01 AM
CK, I like that reasoning. It reminds me of the story (that you certainly know, but I'm telling it for the others) of a prisoner who was sentenced to death. The judge said he would be executed in the following week (Mon - Fri). He wouldn't be told the exact day in advance (to make it less cruel), but the execution would take place at noon on the respective day. The prisoner thought:

(a) "They can't execute me on Friday, because if they don't come for me Thursday noon I'll know the day in advance which they said I wouldn't."

(b) "They can't execute me on Thursday, because if they don't come for me Wednesday noon, I'll know it's Thursday (Friday being impossible due to (a)), and they said I wouldn't know in advance."

(c) "They can't execute me on Wednesday due to (a) and (b)."

(d) "They can't execute me on Tuesday due to (a), (b), and (c)."

(e) "They can't execute me on Monday due to (a), (b), (c) and (d)."

(f) "So they can't execute me that week, but they said they would!"

With the verdict logically flawed, the prisoner thought he had found a chance to file an appeal and challenge the sentence. Maybe he could get a new trial, a better lawyer! Maybe he'd be free soon!!

Sadly, his appeal was dismissed. The prisoner was executed at noon on Tuesday, and he didn't know the day in advance, just like the judge said.

C K Dexter Haven
08-11-1999, 07:34 AM
Yes, Holg, exactly the same paradox (even though no actual doctors are involved in the statement of the problem.)

TheIncredibleHolg
08-11-1999, 07:57 AM
How would you know, CK? Actually, I'm working on my PhD in CS. Can I play too?

december
08-11-1999, 07:59 AM
CK -- good example. It shows that one cannot assume that all pairs of the form X, 2X were equally likely. Furthermore, it provides some sort of a basis to guess at the probabilities before an envelope was opened.

AuraSeer
08-13-1999, 03:32 AM
december, sorry, but you're wrong. Switching envelopes does not change the outcome in any way.

I just wrote a little computer program that simulates this problem. It uses the following algorithm:
1) Randomly pick a number of dollars, and put that amount into envelope one.
2) Put twice as much money into envelope two.
3) Randomly select one of those envelopes to be given to the (AI) player. Call this envelope A; call the other one B.
4) If the player is following the "always keep" rule, add the contents of A to his score. If he is following the "always switch" rule, add the contents of B to his score.
5) Repeat.
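For anyone who wants to replicate the experiment, here's a minimal Python version of the algorithm above (my reconstruction, since the actual code wasn't posted; the uniform whole-dollar draw up to \$200 is an assumption matching the run described below):

```python
import random

def simulate(always_switch, trials=100_000, max_dollars=200):
    """Total winnings under the 'always keep' or 'always switch' rule."""
    total = 0
    for _ in range(trials):
        low = random.randint(1, max_dollars)   # step 1: amount in envelope one
        high = 2 * low                         # step 2: envelope two holds double
        a, b = random.sample([low, high], 2)   # step 3: envelope A goes to the player
        total += b if always_switch else a     # step 4: score the chosen rule
    return total

keep = simulate(always_switch=False)
switch = simulate(always_switch=True)
print(keep / 100_000, switch / 100_000)  # per-trial averages come out essentially equal
```

Both strategies average about 1.5 times the mean "low" amount, and neither is consistently higher - matching the results reported below.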

When the two strategies are compared, the totals are very close. I've just done a few runs of several hundred thousand trials apiece, with a maximum dollar amount of \$200; in every case, the difference between the totals is less than \$2 per trial. Neither strategy is consistently higher.

Whether or not you ever switch envelopes, you'll end up with about the same total over the long term.

(And just think, I didn't even need a PhD to figure this one out.)

------------------
I'm not a warlock.
I'm a witch with a Y chromosome.

AuraSeer
08-13-1999, 03:39 AM
A correction of my last post:
In each run of more than 100,000 trials, each strategy's total was within \$100,000 of the other. This is true even for my most recent run, consisting of 1,000,000 trials. As the numbers get higher, the percentage difference between the totals is getting smaller, as is to be expected.

The "\$2 per trial" line that I wrote above doesn't make sense, sorry. Chalk it up to lack of sleep.

TheIncredibleHolg
08-13-1999, 03:57 AM
So, AuraSeer, what have you proved? That without opening the envelope, switching doesn't make any difference? I thought we had agreed on that.

Try changing your program so that the player opens the envelope. If he finds \$1, he switches; if he finds \$200 (or whatever the upper limit), he keeps the envelope; in between, he may always switch or always keep.

Average winnings will increase because if you have any knowledge (however vague) of the range of possible amounts, you can exclude some possible combinations. My rule refers to the limits of this range; CK's example concerned other limitations such as the amount always having full dollars (so that \$99 can never be the "2X" envelope).
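Holg's point is easy to test. Here's a rough Python sketch (the uniform 1..200 range for the smaller amount and the exact switching rule are my assumptions - they're one simple way of giving the player "knowledge of the range"): a peeker keeps only when the opened amount is large enough that it must be the bigger envelope, and switches otherwise.

```python
import random

def play(peek, trials=100_000, max_low=200):
    """Average winnings for a blind keeper vs. a peeker who knows the range."""
    total = 0
    for _ in range(trials):
        low = random.randint(1, max_low)
        opened, other = random.sample([low, 2 * low], 2)
        if peek and opened <= max_low:
            total += other   # the opened amount could be the smaller one: switch
        else:
            total += opened  # blind keeper, or an amount that must be the larger
    return total / trials

print(play(peek=False), play(peek=True))  # the peeker averages noticeably more
```

Under these assumptions the blind keeper averages roughly 1.5 times the mean low amount, while the peeker does measurably better, because every amount above \$200 is a guaranteed "large" envelope he refuses to give up.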

C K Dexter Haven
08-13-1999, 08:01 AM
My limitation to whole dollars was arbitrary; you'd get the same result using whole cents (if you open an envelope with \$.33, you know it must be the lower amount).

If you say everything is rounded to the nearest cent, you'd still know that if you open an envelope with \$0.01, you should switch (the other envelope would either also have \$0.01 or would have \$0.02), so the same idea works, but cent by cent rather than dollar by dollar.

Aura, the question in your program is partly whether the contestant KNOWS the upper and lower bounds. If you know the upper bound is \$200, and you open an envelope with \$102 in it, you know it must be the larger amount -- the other envelope must contain \$51, it can't contain \$204.

december
08-13-1999, 08:29 AM
AuraSeer -- Holg is right. Your example can be used to illustrate the Bayesian case. The recipient's probabilities and expected value depend on your chosen original probability distribution. When you "randomly" picked a number of dollars, you actually made that selection from your particular pre-determined set of possible values. Had you begun with a different set of values, the answer would have been different. The recipient can improve her odds to the degree that she can guess the shape of your original distribution. She can use her judgment and do better than 50-50.

Speaking of 50-50, here's a quote from GOTCHA by Martin Gardner:

"The "principle of insufficient reason" which the economist John Maynard Keynes renamed the "principle of indifference" in his famous Treatise of Probability, can be stated as follows: If we have no good reasons for supposing something to be true or false, we assign even odds to the probability of each truth value.

"The principle has had a long and notorious history... [Gardner gives several examples of mis-use of this principle.]

"The principle has legitimate application in probability, but only when the symmetries of a situation provide objective grounds for assuming probabilities to be equal. For example, [flipping] a penny..."

AuraSeer
08-14-1999, 03:14 AM
Okay, let me see if I have this right. Are you saying that the player knows the upper and lower limits of the money he could be offered? I didn't think this was part of the setup.

Perhaps I've missed something important in the discussion here. I thought the whole argument about expected values and so forth was on whether it was advantageous to always switch once you've seen the money in the envelope.

A player who knows the possible range of dollar amounts will of course end up with more money than an ignorant player. That much is so obvious that it's barely worth mentioning. If that's what this whole thread has been about up to this point, I'll be very disappointed.

------------------
I'm not a warlock.
I'm a witch with a Y chromosome.

C K Dexter Haven
08-14-1999, 07:01 AM
<< A player who knows the possible range of dollar amounts will of course end up with more money than an ignorant player. That much is so obvious that it's barely worth mentioning. >>

Yes, but I think that's really the question deep down, although it's more complex than that.

- If the game includes all possible X and 2X in the real numbers, then it's a probability theory game (50/50) and the contestant has no new information from opening the first envelope.

- If there is some slight restriction -- for example, if the number of dollars must be an integer -- then some combinations of X and 2X will give the contestant new information from opening the first envelope. For instance, as noted, if the first envelope contains an odd number of dollars, then it must be the smaller amount. Or if an envelope contains \$10,000, you know it must be the larger amount. Thus, there are some situations where opening the first envelope gives information that may improve the odds from 50/50.

I haven't read up on Bayesian probabilities in quite a while, but I have to think that in this situation the number of such modifications is minuscule.

In any expected value problem, one must look at a large number of trials (my earlier comment to this effect was not irrelevant, december). Most of the simulations have assumed one contestant plays the game many times, with X and 2X varying. My approach was different: I assumed that there were many, many participants playing independently, and that X and 2X were always fixed (say, always \$8 and \$4, for every contestant).

Viewing the problem this way eliminates the subjective aspect but leaves the essential "paradox" that the true expected value is 1.5X, but that the contestant on opening the first envelope must estimate the expected value at 1.25 * (content of first envelope).
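That fixed-amounts setup is easy to sanity-check with a quick simulation (a sketch only; the \$4/\$8 amounts and the trial count are arbitrary choices, not part of the original problem):

```python
import random

random.seed(0)  # reproducible sketch

def play(switch, low=4, high=8, trials=100_000):
    """Average winnings over many independent one-shot contestants."""
    total = 0
    for _ in range(trials):
        envelopes = [low, high]
        random.shuffle(envelopes)        # which envelope the contestant picks first
        first, other = envelopes
        total += other if switch else first
    return total / trials

keep = play(switch=False)
swap = play(switch=True)
# Both averages converge on 1.5X = $6: blind switching gains nothing.
```

Both strategies come out at about \$6, i.e. 1.5X with X = \$4, which is exactly the "true expected value" described above.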

TheIncredibleHolg
08-16-1999, 03:09 AM
AuraSeer wrote: << Okay, let me see if I have this right. Are you saying that the player knows the upper and lower limits of the money he could be offered? I didn't think this was part of the setup. >>

The setup is not "standardized". This seems to be part of the problem, because people on this thread have been talking about slightly different things. The favorite setup is a gameshow scenario, in which I think you would have some idea of the possible range of prizes.

<< Perhaps I've missed something important in the discussion here. I thought the whole argument about expected values and so forth was on whether it was advantageous to always switch once you've seen the money in the envelope. >>

Always switching after seeing the money would have the same effect as switching without ever seeing the money.

<< A player who knows the possible range of dollar amounts will of course end up with more money than an ignorant player. That much is so obvious that it's barely worth mentioning. >>

Personally, I think the opposite case is obvious: Always switching (with or without seeing the money) cannot give you any advantage, because it's just like having picked the other envelope in the first place; and assuming there is no bias in the first choice, the expected values must be the same.

I suppose both cases are pretty obvious to common sense. The point of the problem is that the calculation of expected value seems to show that switching is advantageous. In fact, you are led to believe that you should not just switch once for every pair of envelopes; you should switch the same pair of envelopes back and forth again and again (if you're allowed to) to increase your expected winnings each time! This is obviously wrong - the question is: Why? It's a bloody paradox!

<< If that's what this whole thread has been about up to this point, I'll be very disappointed. >>

So was Cecil (posted 08-08-1999). Me, I was at least surprised at the longevity of the thread. I had contributed my thoughts in May. I suppose this paradox is just very hard to see through.

C K Dexter Haven
08-16-1999, 08:02 AM
Said succinctly, Holg, the nature of the paradox is:
(a) the expected value is clearly the halfway point between the two amounts.
(b) the contestant believes the expected value to be 1.25 x (the amount in the first envelope that he selected and opened), because he has insufficient information about the contents of the envelope (that is, about the distribution of amounts in the envelopes.)

The whole point of the relevant bits of debate hinges on that question, and that leads to the question of what he should assume to be the possible amounts in the second envelope.
- The paradox hinges on the fact that the contestant, finding amount M in the first envelope, thinks that 0.5M and 2M are equally likely to be in the second envelope. The true distribution, of course, is that one of those has zero chance and the other has 100% chance of being in the second envelope.
- The question then becomes: what distribution should the contestant assume? And that's where knowing something about the limits will help eliminate some possibilities and help him get a better estimate of the true distribution. For example, if he knows that \$1000 is an upper bound, and he opens an envelope with \$700, then he has good reason to think that the other envelope contains \$350 and not \$1400... thus helping him assess the true probabilities.

So you are incorrect, Holg, in saying that opening the first envelope doesn't give information. It could give information, if there are limits on the payout amounts. That's the hinge of this debate. To the quizshow master, who knows the amounts in both envelopes, the expected outcome is clear. To the contestant, the expected outcome is not clear because he doesn't have enough information; he therefore assumes a distribution as the best estimate he can make, and proceeds from there.
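The bounded reasoning above (together with the earlier odd-dollar observation) can be captured in a few lines. This is only a sketch: the \$1000 cap, the \$1 floor, and whole-dollar amounts are assumptions taken from the examples in this thread, and the function name is mine:

```python
def other_envelope_possibilities(q, upper=1000):
    """Amounts the second envelope could hold, assuming whole-dollar prizes
    and a known upper bound on any single envelope (both assumptions
    taken from the examples discussed in this thread)."""
    candidates = []
    if 2 * q <= upper:                   # q could still be the smaller amount
        candidates.append(2 * q)
    if q % 2 == 0 and q // 2 >= 1:       # q can be the larger amount only if it's even
        candidates.append(q // 2)
    return candidates

# An odd amount, or one whose double would bust the cap, settles the question:
# other_envelope_possibilities(700) -> [350]
# other_envelope_possibilities(9)   -> [18]
# other_envelope_possibilities(400) -> [800, 200]  (still ambiguous)
```

When the list has one entry, opening the envelope has told the contestant everything; when it has two, he is back to estimating the distribution.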

TheIncredibleHolg
08-16-1999, 10:04 AM
CKDextHavn wrote: << Said succinctly, Holg, the nature of the paradox is: ... >>

Hey, don't tell me. I got it!

<< So you are incorrect, Holg, in saying that opening the first envelope doesn't give information. >>

I never said that. I just said that if you open the envelope and then switch in any case, no matter what you found (AuraSeer's approach), you wouldn't have had to peek in the first place, because you didn't let it influence your decision.

I think the two of us are pretty much in agreement. Just look back at the first postings from three months ago.

C K Dexter Haven
08-16-1999, 11:15 AM
Yeah, sorry, Holg... time is at a premium, and I didn't re-read the old posts. And sorry if I misread your recent comment.

Anyway, I think this one is finally put to bed?

glee
08-16-1999, 04:19 PM
I hope you don't mind if I explain why I posted a few times on this thread (and still am!).
I agree that if there is extra information to be gained by opening the envelope (e.g. you're on a quiz show called 'The \$64,000 question' and the envelope contains \$64,000!), then the probabilities change somewhat.
I did always specify that I was stating that there was no such background information to be gained.
I then had difficulty following December's logic e.g.

'3. If one assumes that the probability is always 50-50, then a mathematical contradiction occurs. The contradiction is precisely that you would both simultaneously be right. That is, you could prove that switching doesn't help and also that it does help.'

'"The conclusion that trading envelopes is always optimal is based on the assumption that there is no information obtained by observing the contents of the envelopes. From a Bayesian perspective, the key to a successful analysis is in recognizing the potential information to be gained from the observation."'

I don't follow either of the above - am I missing something?

TheIncredibleHolg
08-17-1999, 03:37 AM
CK, I'd love to put this one to bed, but I have my doubts. If only I were a moderator so I could abuse my power and close this thread!

glee, if you don't know the probabilities of different envelope contents, that doesn't mean they're equal. It may seem to you that .5X and 2X are equally likely, but they don't have to be. It's then a problem of incomplete information, and stochastics can't help you much.

Also, I have doubts whether it would even be possible to construct a uniform distribution of probabilities over an unlimited (albeit discrete and enumerable) range of values. What would be the expected value of that?!

glee
08-17-1999, 03:32 PM
Dear Holg,
I think you're being a bit hasty in saying you'd like to close the thread. After all, this is the 'Straight Dope' - founded to push back the barriers of ignorance (and a good thing too!). I'm unclear about something and I'd be grateful for help - why cut me off?

OK, first have I understood your last post? Are you saying you can't realistically construct a truly random distribution using money values?
In any case, I'd still appreciate an explanation of two of December's statements:

'If one assumes that the probability is always 50-50, then a mathematical contradiction occurs. The contradiction is precisely that you would both simultaneously be right. That is, you could prove that switching doesn't help and also that it does help.'
(How do you prove this contradiction?)

'"The conclusion that trading envelopes is always optimal is based on the assumption that there is no information obtained by observing the contents of the envelopes. From a Bayesian perspective, the key to a successful analysis is in recognizing the potential information to be gained from the observation."
(I don't get the first sentence - if you assume no information gained, why is switching optimal?)

sendos
08-17-1999, 04:41 PM
Here's an attempt at a formal description (and solution) of this problem:

Assume X is uniformly distributed between 0 and M, where M is some deterministic number.

Assume we have two envelopes: A and B.

Assume a fair coin is tossed. If we get heads, then X is put into envelope A and 2X is put into envelope B. If we get tails, 2X is put into envelope A and X is put into envelope B.

Assume we pick up envelope A, open it and see amount Q in it.

Should we switch or not?

First, let's assume that we know the value of M, i.e. the maximum value for X.

In this case, if Q > M, we know that Q=2X and we should not switch. On the other hand, if Q <= M, we know that Q=X and Q=2X are equally likely and thus must switch.

So, if we know the value of M, it doesn't matter how large M becomes: opening the envelope does give us some information about X, and we should act accordingly.

Now, what happens if we do NOT know M? Then, of course, we do not know whether Q > M or Q <= M, which means we cannot make a formal decision whether to switch or not.

So, if we don't know the value of M, opening the envelope does not give us any useful information, and does not enable us to make an informed decision based on the amount found in envelope A.

Therefore, whether seeing the amount in the envelope helps us decide to switch or not depends on how much we know about M.

The above analysis (like most of the discussions on this thread) assumed that X was random. What would happen if we assumed X to be deterministic but unknown?

I have heard that in Bayesian analysis there is a concept of an "improper prior", which basically enables Bayesian analysis of problems with deterministic quantities. So, if we take X as being deterministic, and not as random, maybe an "improper prior" analysis will tell us what the switching strategy should be.

Based on the results from the analysis in this message, I would venture to guess that if X is deterministic but unknown, opening the envelope will not aid us in deciding whether to switch or not. If anyone on this thread is familiar with the concept of "improper priors", maybe they can help answer this.

Sendos
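The known-M analysis above can be checked numerically. A sketch, with M = 100 chosen arbitrarily and X drawn uniformly as in the setup; the strategy names are mine:

```python
import random

random.seed(1)  # reproducible sketch

def trial(strategy, M=100):
    """One round of the setup above: X uniform on (0, M); a fair coin
    decides which envelope holds X and which holds 2X."""
    x = random.uniform(0, M)
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    q = a  # the amount we see on opening envelope A
    return b if strategy(q, M) else q

always_switch = lambda q, M: True
informed = lambda q, M: q <= M   # keep any q > M: that must be the 2X envelope

n = 200_000
avg_always = sum(trial(always_switch) for _ in range(n)) / n
avg_informed = sum(trial(informed) for _ in range(n)) / n
# With E[X] = 50, blind switching (like blind keeping) averages 1.5 * 50 = 75,
# while the informed rule averages 93.75: the known bound M is worth money.
```

The informed rule beats blind play by keeping every envelope that exceeds M, which supports the claim that knowing M makes the opened envelope informative.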

TheIncredibleHolg
08-18-1999, 03:00 AM
glee, I was kidding about closing the thread! I don't want to cut you off. But I have to say I can hardly think of any more ways of explaining what I mean.

<< OK, first have I understood your last post? Are you saying you can't realistically construct a truly random distribution using money values? >>

I think it might be impossible to construct a random distribution that is (a) uniform and (b) of unlimited range. I can easily think of a uniform one for a limited range (\$1 - \$n) or of a non-uniform one for an unlimited range (e.g. decreasing exponentially). These would all imply that there is some expected value that a contestant could aim for if he knew or could guess it.

To paraphrase: The conclusion that .5X and 2X are equally likely for the other envelope if you picked the one containing X could only be valid (in my opinion, anyway) with a uniform, unlimited distribution. If that doesn't exist, the paradox is resolved.

I should make clear that I'm not sure about the non-existence of that distribution. This may just be a lack of imagination on my part. And, of course, this has nothing to do with whether we're talking of money or anything else.

<< In any case, I'd still appreciate an explanation of two of December's statements:
'If one assumes that the probability is always 50-50, then a mathematical contradiction occurs. The contradiction is precisely that you would both simultaneously be right. That is, you could prove that switching doesn't help and also that it does help.'
(How do you prove this contradiction?) >>

The contradiction is obvious: One way of calculating shows that you can expect a gain from switching, the other shows you can't. I think you're overinterpreting this one.

<< '"The conclusion that trading envelopes is always optimal is based on the assumption that there is no information obtained by observing the contents of the envelopes. From a Bayesian perspective, the key to a successful analysis is in recognizing the potential information to be gained from the observation."
(I don't get the first sentence - if you assume no information gained, why is switching optimal?) >>

I'm not quite sure what december means here. My guess is the following: The conclusion that switching is always optimal means that your decision does not really depend on what you found in the envelope, i.e. opening the envelope did not give you any information to base that decision on. I've always stated that this can't be true (and I think december means the same): Given that the distribution of possible values is not uniform and unlimited (see above), seeing the amount in the envelope will always give you a clue as to whether you should switch or not. What may be true, however, is that you (as a contestant) have no idea of the distribution. In that case, you just can't make an informed decision, and switching or not doesn't matter.

<< Thanks for your forbearance >>

No sweat, that's what we're here for. As I said, I was kidding earlier. But I am running out of paraphrases...

glee
08-18-1999, 04:59 AM
Holg,
I've got it!
(Thanks)

sendos
08-18-1999, 06:28 AM
Quote:
> I think it might be impossible to construct a random distribution that is (a) uniform and (b) of unlimited range.

This is precisely what the term "improper prior" describes. This improper distribution has been used by statisticians for a while now. Unfortunately, that's about all I know.

Maybe it has some usefulness for the envelope problem.

sendos

C K Dexter Haven
08-18-1999, 08:07 AM
Sendos, your conclusion doesn't follow from your analysis. You say: << In this case, if Q > M, we know that Q=2X and we should not switch. >>

Agreed. Similarly, if there is a minimum value m (for instance, \$1), we would have information on the other side: if Q < 2m, then we know that we have opened the low value Q=X and we should switch.

But you go on: << On the other hand, if Q <= M, we know that Q=X and Q=2X are equally likely and thus must switch. >>

That's an incorrect conclusion. We have opened the envelope, found Q. We do NOT know whether Q = X or Q = 2X; we know they are equally likely outcomes. But why does that mean we must switch? If Q = 2X, then switching lowers our value; if Q = X, then switching raises our value. The expected value is 1.5X (but we don't know what X is and so we don't know the value for 1.5X; it could be more than Q or could be less than Q.)

TheIncredibleHolg
08-18-1999, 09:26 AM
sendos wrote:This is precisely what the term "improper prior" describes. This improper distribution has been used by statisticians for a while now.It is? Dang, I hadn't made that connection. I must admit I'm not familiar with the term.

So, if statisticians use it, what does that mean? Is it a useful tool? Or do they use it wrongly? In any case, I assume it's something you have to be very careful with. (Don't walk around pointing it at people :) )

liverTwist
08-19-1999, 02:47 PM
I think I have the best solution....

1 envelope (1x) is better than no envelopes (0x). So be happy with whatever you get - assume that the amount in the envelope you took is higher. By switching, you have as much chance to lose as to gain; since you have already gotten money for nothing, why risk it?

------------------
"Minds are like parachutes; they work best when open."

-Lord Thomas Dewar

glee
08-23-1999, 03:46 PM
December,

What is a countable infinite set?

december
08-23-1999, 04:03 PM
Glee --

A countably infinite set is a set like the positive integers, {1, 2, 3, 4, ...}.

An example of an uncountable infinite set is the set of real numbers.

december
08-24-1999, 12:25 AM
Holg -- You're right.
There is no probability distribution on a countably infinite set that puts equal probability on each element. (Proof: Note that the total probability must be 1. No matter how small the probability on each element, when you sum a sufficient number of them, the total exceeds 1. And if the probability of each element were zero, the total would be zero.)
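Written out compactly (with p the supposed common probability of each of the countably many values), the argument is:

```latex
\sum_{n=1}^{\infty} p \;=\;
\begin{cases}
\infty, & \text{if } p > 0,\\
0, & \text{if } p = 0,
\end{cases}
\qquad \text{but a probability distribution requires } \sum_{n=1}^{\infty} p_n = 1.
```

Neither case can equal 1, so no such uniform distribution exists.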

Therefore you cannot assume equal probability for all pairs of form X, 2X. As you point out, that disposes of the contradiction.

If you can't assume that all amounts were equally likely, then what SHOULD you assume?

Your assumption ought to be based on the amount of money you saw as well as any other information you have. In particular, you would know how you had been informed about the money in the other envelope.

There does not seem to be a single clear answer, although some assumptions seem more reasonable than others....

TheIncredibleHolg
08-24-1999, 03:01 AM
I can't believe it!! The contradiction... solved? I mean - what do I do now?! No endless discussion anymore, nothing to spend my time on at work! I'll be so... lonely...

I'll miss you all! (Sob!)