Monty Hall Problem

Hi PHB, while Manduck has indeed made his point 100000000 (or so) times, he is still mistaken !

The very nature of probabilities is that they deal in hypotheticals, but are still useful as predictors in the rea

oops !

PHB

…real world.

If you try the 2 Monty experiments as set out, you will find that Nightime’s probabilities are an accurate predictor and Manduck’s are not.

Experiment number 1: Monty knows which is the winning door and deliberately opens a losing one; switching improves your odds to 2/3.

Experiment number 2: Monty opens a door at random and it happens to be a losing one; switching does not improve your odds of winning (odds will be 1/2).

This may seem counter-intuitive to you, but experiment will bear it out. You are saying that Monty does choose a losing door - this means that either his selection was not random (experiment 1, switch away), or that you are already in the subset of cases where Monty’s random selection gives a losing door (experiment number 2), and thus already deep into the world of theoretical probabilities !

As don’t ask just pointed out, what makes experiment 2 different is that Monty doesn’t know where the prize is …

Test it with coins !

So… if the ticket says “winner”, which it will do slightly more than half of the time (fake tickets always say winner, and real ones say it 1/1000000 of the time), you have a 1/2 chance of having won the lottery.

Which means that… according to your logic, slightly more than 1/4 of the times you play this game, you will win the lottery.

Yes, more than 1/4 of the time you play you will win the lottery, despite the fact that a real ticket only has a 1/1000000 chance of winning.
Wow.
I may have just stumbled onto a get rich quick scheme.
All I have to do is learn to make realistic replicas of lottery tickets that I know will be “winners”. Then I can take a fake ticket and a real ticket, and let a probability-challenged person choose between them.

If it is a “winner”, they will exclaim “There is a 1/2 chance that this ticket is worth a million dollars!”
Then I can sell it to them for fifty bucks.

I think I know where Nightime was going with this. Let’s rephrase the question in a different form:

I have two standard decks of cards, one with a blue back and one with a red back. With you out of the room, I pick a random card from the blue deck and place it face-up on the table. From the red deck, I pick the ace of spades and place it face-up on the table. I cover each card with a sheet of paper. Now I call you into the room and ask you to choose one of the cards. You do, and I flip over the sheet of paper to reveal…the ace of spades. (The other card is not revealed.) Now, what is the chance that the card you chose, the ace of spades, comes from the blue deck? Is it 50/50? After all, you had a 50/50 shot at choosing the blue deck card, right? This is exactly equivalent to the lottery ticket question, only with a 1/52 chance of “winning” instead of a 1/1000000 chance. And, if you don’t believe that… can I interest you in a little sporting proposition?

This is somewhat related to the modified MH problem under discussion, in that the type of information you know beforehand (i.e., one of the cards is certainly the ace of spades, while the other is the ace of spades with only a 1/52 chance) dictates how you set the odds once the last piece of information is revealed (i.e., the card you’ve actually chosen turns out to be the ace of spades).

But that most certainly is what the problem asks. What else would it be? You know that you are one of hundreds of people to play “Let’s Make a Deal.” You know (in this variation) that Monte picks his doors randomly (i.e., stealing nomenclature from above, there are six possibilities, AB, AC, BA, BC, CA, and CB, where the prize is behind door A, and your choice and Monte’s are the first and second letter, respectively). You know that, in this case, he happens to pick an empty door, and thus you can discard the notion that you’re part of a trial where MH has chosen a prize door (a BA or CA trial, in other words). Most importantly, because Monte chooses randomly, his choice and the actual location of the prize are independent, and thus the relative probability of each of the remaining possible cases remains unchanged.

How, in your view, is this modified MH problem any different from the coin trial that Nightime proposed? If you believe that there is a fundamental difference, describe how we could run an experiment, using coins, that mirrors exactly the modified MH problem, and then let’s run it.

Let’s try this, then:

In Season One, Monty knows where the car is. The show runs for 24 episodes. In eight of these - one-third of the time - the contestant picks the right door first time and Monty has a choice of two doors he can open without ruining the game. In the remaining sixteen, the contestant does not pick the right door first time and so Monty does not have a choice. The optimum strategy is to switch.

In Season Two, Monty does not know where the car is. The show runs for 24 episodes. In eight of these, the right door is the one the contestant first picks. In another eight, the right door is the one Monty picks. In the remaining eight, the right door is neither of these. Now, the terms of the question state that Monty has not opened the right door, so we are clearly in the position of the contestant in one of the sixteen episodes in which Monty doesn’t ruin the game. In half of these the contestant had the right door in the first place and so switching doesn’t improve his chances (nor worsen them).
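
If it helps to see those eights fall out mechanically, here is a tiny Ruby sketch of my own (not part of the argument above) that just enumerates the equally likely cases, with the contestant’s first pick fixed at door 0 (which loses no generality):

# My own sketch: enumerate the six equally likely (prize door, Monty's door)
# combinations in Season Two, with the contestant's first pick fixed at door 0.
stay_wins = switch_wins = spoiled = 0
[0, 1, 2].each do |prize|        # where the car is
  [1, 2].each do |monty|         # Monty opens one of the two doors the contestant didn't pick
    if monty == prize
      spoiled += 1               # Monty reveals the car and ruins the game
    elsif prize == 0
      stay_wins += 1             # the contestant had the car all along
    else
      switch_wins += 1           # the car is behind the remaining closed door
    end
  end
end
# Each of the six cases corresponds to four of the 24 episodes:
puts "spoiled: #{spoiled * 4}, win by staying: #{stay_wins * 4}, win by switching: #{switch_wins * 4}"
# => spoiled: 8, win by staying: 8, win by switching: 8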

Supposing that you are taking part in an episode of Season Three and don’t know if the rules have changed… you should switch, as it wouldn’t have hurt under Season Two rules and helps under Season One rules.

Yes … just so long as you do know in advance that the Season Three rules will be the same as either the Season One rules or the Season Two rules.

But suppose Monty has (unknown to you of course) cooked up the following new rule for Season Three: if the contestant has picked the winning door, then open a losing door; but if the contestant has picked a losing door, then open the winning door. Two-thirds of the time, of course, Monty will open the winning door and you’ll lose - but when you are presented with the opportunity to switch, you shouldn’t.
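
To make that concrete, here is a quick Ruby sketch of my own simulating that hypothetical Season Three rule; nothing in it is anything Monty has actually announced:

# Sketch of the hypothetical Season Three rule: Monty opens the winning door
# whenever the contestant's first pick is a loser, and a losing door only
# when the first pick is the winner.
stay_wins = switch_wins = offers = 0
30_000.times do
  prize  = rand(3)
  choice = rand(3)
  if choice == prize
    opened = ([0, 1, 2] - [prize]).sample    # open one of the two losing doors
  else
    opened = prize                           # open the winning door: game over
  end
  next if opened == prize                    # no switch is offered in the spoiled games
  offers += 1
  switch_door = ([0, 1, 2] - [choice, opened]).first
  stay_wins   += 1 if choice == prize
  switch_wins += 1 if switch_door == prize
end
puts "#{offers} switch offers: staying won #{stay_wins}, switching won #{switch_wins}."
# Roughly a third of the games produce an offer, and switching never wins.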

Since my earlier posts I’ve read up a bit of background about the Monty Hall problem. The key point to my mind, which I’m trying to make here, is that it’s critically dependent on what you know in advance about the rules of the game.

If you know in advance that Monty will open a losing door, then obviously his selection is not random and your tactic is to switch. If you know in advance that Monty will open a random door, and that he might therefore open the winning door, then it doesn’t make any difference whether you switch. If you don’t know anything about Monty’s motivation or method, then all you can fall back on is that your initial pick had a 1/3 chance of being right. If you don’t have any more information, you can’t improve on those odds.

I’m off to do some more research and to find a probability-challenged person for Nightime to play with.

Zut, my understanding now of Nightime’s lottery ticket example is that one of the rules was that all fake tickets are winners, and only one real ticket is a winner. This wasn’t stated first time round.

In your example with the cards however, I think I must be missing something. You seem to be describing a one-off event, where all the parts are determined by human agency - more of a psychology problem than a probability question.

The player walks into the room - points at one of the cards - you uncover that card, it’s the A of spades (not sure why that’s relevant) - there’s another card next to it, which will never be uncovered (how’s that relevant ?) - assuming the player is aware that there are two decks, he now has to guess which deck you chose the A of spades from - surely this would be entirely at your discretion, and not subject to any probability calculations ?

Asteroide: I’m not sure why you think this is a psychology experiment; it seemed clear to me. Nonetheless, let me try again.

I’ve set up some rules for a game. You know exactly what the rules are:

  1. I have two decks of cards, red-backed and blue-backed.
  2. I pick a random card from the blue deck.
  3. I obtain the ace of spades from the red deck.
  4. I place these two cards on the table in such a way that you can’t see either of them.
  5. You walk into the room, knowing full well exactly what the rules are as listed above. You don’t know which blue-deck card I chose (randomly, remember!), and you don’t know which card on the table is which, but you know the process by which I chose those cards.
  6. You pick one of the two cards on the table, and I reveal (this particular time) that that card is the ace of spades. You still don’t know what the other card is, nor do you know which is from the red deck and which from the blue deck.
  7. Now, knowing that the card you picked is the ace of spades, and knowing the rules I set up above, what do you suppose the chances are that the card you picked has a blue back? Is it 50/50?

No psychology here. We could run it once, or run it a million times; the question is: assuming you happen to pick up an ace of spades, what are the chances that it’s a blue card?

Note that this card example and the lottery ticket example are analogous: blue deck = real ticket, red deck = fake ticket, ace of spades = “you’re a winner!”
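
If anyone would rather check this by brute force than by argument, here is a little simulation sketch (mine, with made-up variable names) of the game exactly as specified in the rules above; if I’ve set it up right, it should print something close to 1/53 rather than 1/2:

# My own sketch of the two-deck game: the red card is always the ace of
# spades, the blue card is a uniformly random card, and you point at one
# of the two face-down cards at random.  Count only the runs where the
# card you reveal turns out to be the ace of spades.
aces_revealed = blue_aces = 0
1_000_000.times do
  blue_is_ace = rand(52) == 0        # 1/52 chance the blue-deck card is the ace of spades
  picked_blue = rand(2) == 0         # 50/50 which face-down card you point at
  revealed_is_ace = picked_blue ? blue_is_ace : true
  next unless revealed_is_ace
  aces_revealed += 1
  blue_aces     += 1 if picked_blue
end
puts "P(blue back | you revealed the ace of spades) ~ #{blue_aces.to_f / aces_revealed}"
# Comes out near 1/53 (about 0.019), not 1/2.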

OK, I get it :

  • The red card is necessarily the Ace, and we both know it (not a random pick)

  • The blue card is a random pick - 1/52 that it’s the Ace of spades

I guess what I was missing was that the Ace was not a random draw, and that the player had all this info.

Agreed that it’s the same concept as the lottery, assuming all fake lottery tickets are winners.

The psychology aspect would come in if you were selecting cards at your discretion - which is apparently not the case !

Back to the experiment. Imagine if every time you ran that experiment a different person was the one playing.

After the experiment you would find that half of the people playing won by switching. That is what the experiment says, right?

EVERY SINGLE ONE of those people was in the exact same situation as you are. What advice would you give them? To switch or not? Half the people will win if they switch. That means it’s a 50% probability, by definition.

But if you did the experiment with Monty not opening a door at random, 2/3 would win by switching. Of course, then you would advise them to switch.

This is also the situation you describe. Without knowing whether Monty chooses a door at random or not, you can’t know which situation you are in. Yes, he does exactly the same thing in both situations. But it’s what he does over time that matters. When dealing with chance you HAVE TO consider what will happen over time, because that’s in the definition of probability.

Basically, you’re saying that “random” doesn’t mean anything in this context - that it doesn’t change anything - but we know that that is not true. If Monty knows what he’s doing, he can always pick the prize door - if you don’t first. That would destroy any chance of you switching to it, which would mean that your chances of having the prize when Monty opens an empty door would increase to 100%.

Here is a simple Ruby program to demonstrate the whole thing.


# Define the doors.
# Their only property is that they are either open or closed
class Door
  # Open the door
  def open; @open = true; end
  # Close the door at the beginning of the game
  def close; @open = false; end
  # Is this door open?
  def open?; @open; end
end

# Define the player
# This is a simple player
class Poor_player
  # Our first choice is random
  def initial_choice; @initial = rand 3; end
  # Our second choice is to stand pat
  def second_choice; @initial; end
end

# Define Monty
# Monty knows where the prize is, and will tell you if you won
class Monty
  # Tell Monty at the beginning which one wins
  def whisper(prize); @prize = prize; end
  # Ask Monty whether the door is the right one
  def winner?(choice); choice == @prize; end
  # Monty chooses a door to open
  def open(choice)
    # If the player's choice is the winner, Monty can choose one of two doors
    # Otherwise, he has only one choice
    which = choice == @prize ? rand(2) : 0
    # Scan the doors
    $doors.each_with_index do |door, i|
      case i
      # If this is the door the player chose, or the winning door, don't open
      when choice, @prize
      else
        # Otherwise, possibly open this door
        if (which == 0)
          door.open
          return i
        # If we didn't open, we will have to open the next 
        else
          which -= 1          
        end
      end
    end
  end
end

# Simulate a game
def simulate
  # Start by closing all the doors
  $doors.each {|door| door.close}
  # Tell Monty where the prize is
  $monty.whisper(rand(3))
  # Tell the player to make his first choice
  choice = $player.initial_choice
  # Tell Monty to open a door
  $monty.open(choice)
  # Tell the player to make his second choice
  choice = $player.second_choice
  # Ask Monty whether the player won
  $monty.winner?(choice)
end

# Now we are ready to play
# Create Monty
$monty = Monty.new
# Create the player
$player = Poor_player.new
# Create three doors
$doors = []; 3.times {$doors << Door.new}

# Run 10000 simulations
sum = 0; 10000.times {sum += 1 if simulate}
puts "Poor player won #{sum} times."

# Give the player an improved rule for the second choice
def $player.second_choice
  # Scan the doors until we find one that isn't open, and that we didn't choose
  $doors.each_with_index {|door, i| return i unless i == @initial or door.open?}
end

# Run 10000 more simulations
sum = 0; 10000.times {sum += 1 if simulate}
puts "Smarter player won #{sum} times."


It shows the smarter player winning two-thirds of the time, consistently.

And if Monty’s choice is random?
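
A minimal tweak to the program above should answer that. The following is only a sketch of mine, meant to be appended after the code as posted so that it reuses the same $doors and the switching second_choice:

# Sketch only: a Monty who does NOT know where the prize is.  He opens one
# of the two doors the player didn't pick, at random, and may accidentally
# reveal the prize; those spoiled games are thrown out before counting.
class Random_monty < Monty
  def open(choice)
    others = [0, 1, 2] - [choice]   # the two doors the player didn't choose
    i = others.sample               # pick one of them blindly
    $doors[i].open
    i
  end
  # Did Monty accidentally show the prize?
  def spoiled?; $doors[@prize].open?; end
end

$monty = Random_monty.new
stay_wins = switch_wins = kept = 0
100_000.times do
  $doors.each {|door| door.close}
  $monty.whisper(rand(3))
  choice = $player.initial_choice
  $monty.open(choice)
  next if $monty.spoiled?           # discard the games Monty ruins (about 1/3)
  kept += 1
  stay_wins   += 1 if $monty.winner?(choice)
  switch_wins += 1 if $monty.winner?($player.second_choice)
end
puts "Of #{kept} unspoiled games, staying won #{stay_wins} and switching won #{switch_wins}."
# Both come out at around half of the unspoiled games.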

Hi everybody, I’m back!

I haven’t posted since the weekend what with the real job and everything, but I have been continuing to think about this thread; the coins; the lottery tickets; the playing cards - and, uh, well, I changed my mind.

It was bothering me that the coin game came up 50-50, but I thought there must be some subtlety that would reconcile it with my analysis; I just had to figure it out. But I didn’t come up with anything. So that planted a seed of doubt in my mind.

Then I looked at the lottery ticket argument (yes, I was lurking when I should have been working :)). The point of the lottery ticket example, I think, is that adding information during the game can affect your assessment of the probabilities of earlier events. I.e., when you make your random pick, there is a 50% probability that it’s a fake ticket, and 50% that it’s genuine. But when you scratch it off, you either find that it’s a loser or a winner. If it’s a loser, you know it must be genuine, so the probability that it’s fake is 0. If it’s a winner, the probability that it’s fake becomes 1000000/1000001 (if my calculations are right).

Okay, so how does that apply to the Monty Hall problem? I wasn’t sure that it did, but to try out the idea, I used the old trick of considering a 100-door variant of the game. What would happen if MH opened 98 empty doors after I made my pick?

Consider for a moment a situation where there are 99 doors, and one of them may or may not conceal a prize. You don’t know whether a prize is behind one of them or not. Now suppose you opened 98 of the doors at random, and none of them had the prize. What is the probability that 98 tries would come up empty if the prize was behind one of the doors? It’s 1/99. Of course, if the prize wasn’t there, the probability of getting 98 empty doors is 1.

So I’m playing 100-door Monty, and Monty opens 98 doors at random without finding the prize. I have to believe that there is very little chance that the prize is among those 99 doors. That gives me new information about the probability that I picked the prize in the first place. If I had picked one of the 99 losers with my first guess, there is only a 1/99 chance that he could have opened those 98 empty doors. That 1/99 is just the right amount to balance the 99/100 chance that the guess was wrong.

Now if Monty knew where the prize was and deliberately showed 98 empty doors, that doesn’t provide any new information about the 99th one, because he can always open 98 empty doors whether the prize is among the 99 or not. So the probability of my first guess being correct is not altered by that information, and I should switch.

In the random case, Monty’s door-opening does provide new, if incomplete, information about my original guess which has the effect of creating a situation where it doesn’t matter whether I switch.
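
To check that I hadn’t fooled myself a second time, I also tried the 100-door version by brute force. This is just a quick sketch in the same spirit as the Ruby program earlier in the thread, not part of my original reasoning:

# 100-door game with a Monty who opens 98 of the 99 unchosen doors at
# random.  Keep only the games where none of the opened doors hid the prize.
doors = 100
stay_wins = switch_wins = kept = 0
500_000.times do
  prize = rand(doors)
  pick  = rand(doors)
  unchosen = (0...doors).to_a - [pick]
  left_closed = unchosen.sample        # the one unchosen door Monty leaves shut
  opened = unchosen - [left_closed]    # equivalent to opening 98 of them at random
  next if opened.include?(prize)       # about 98% of games get spoiled and thrown out
  kept += 1
  stay_wins   += 1 if pick == prize
  switch_wins += 1 if left_closed == prize
end
puts "Kept #{kept} games: staying won #{stay_wins}, switching won #{switch_wins}."
# They come out about equal, as the 1/99 argument above predicts.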

… which is what Nightime and others have been saying all weekend. Congratulations, boys, you won me over. And please ignore all my posts from the previous two days :smiley:

[it was the lottery tickets that did it]

Hiya Manduck,

It was just such an approach that convinced me of the solution to the original Monty Hall problem when I first heard about it. I remember running it through my head one afternoon while I was playing cricket.

Now what? :smiley:

I can’t believe how stupid I’ve been

I mean, I’ve got a degree in maths from Cambridge University, and I took all the probability and statistics courses offered, and I only just missed getting a First. I’m not boasting - I’m trying to underline the extent of my stupidity here. Incidentally I saw the light yesterday, before seeing a similar recantation from Manduck. Looks like the penny dropped for both of us around the same time…

Here’s my new take on the lottery game advanced by Nightime. I originally said:

In my enthusiasm to apply intuition and “common sense” instead of proper analysis, I think I overlooked the fact that the chosen ticket is not a sort of 50/50 hybrid of a genuine ticket and a fake ticket - it is either definitely genuine or definitely fake. Before I scratch off the ticket, the only information I have to help me decide whether it’s genuine is the fact that I picked it randomly from a choice of two - i.e. a 1/2 chance that it’s genuine. But scratching off the ticket introduces new information. And that’s what changes the probability.

If there’s anybody out there who sees all this intuitively (unlike me), but doesn’t know how to do the mathematical calculations (which I do, at least when I’m not being stupid), then this might be interesting. The standard approach to calculating conditional probabilities (i.e. the probability of something happening, given that some non-independent other thing has happened) is Bayes’ Theorem. This states that

P ( A | B ) = P ( A ) * P ( B | A ) / P ( B )

where P ( A | B ) means the probability of event A happening, given that event B has happened.

Applying Bayes’ Theorem to the lottery ticket game, let’s suppose there are 1,000 genuine tickets (I could say 1,000,000 but I’m sure I’d get bored typing all the zeroes repeatedly); then let’s say W denotes the event of me picking a winning ticket and G denotes the event of me picking a genuine ticket. So we want to know P ( G | W ), i.e. the probability that my ticket is genuine, given that it is a winning ticket.

We know that

  • P ( G ) = 0.5, because I had a 50/50 pick between the fake ticket and the genuine ticket;

  • P ( W | G ) = 0.001, because there are 1,000 genuine tickets and only one is a winner; and

  • P ( W ) = P ( W | G ) * P ( G ) + P ( W | notG ) * P ( notG ) = (0.001 * 0.5) + (1 * 0.5) = 0.5005, because I had a 50/50 chance of picking a fake ticket which is definitely a winner, and a 50/50 chance of picking a genuine ticket which has probability 0.001 of being a winner.

So plugging this into Bayes’ Theorem gives P ( G | W ) = 0.000999. Not 0.5.
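
And for anyone who prefers brute force to formulas, a quick simulation sketch of my own (not Nightime’s game verbatim, but the 1,000-ticket version described above) gives the same answer:

# Sketch of the lottery game: a 50/50 pick between the fake ticket (which
# always says "winner") and a genuine ticket (which wins 1 time in 1,000).
# Look only at the cases where the chosen ticket says "winner".
winners = genuine_winners = 0
1_000_000.times do
  genuine = rand(2) == 0
  says_winner = genuine ? rand(1000) == 0 : true
  next unless says_winner
  winners += 1
  genuine_winners += 1 if genuine
end
puts "P(genuine | winner) ~ #{genuine_winners.to_f / winners}"
# Prints something close to 0.001, in line with the 0.000999 above.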

I’d like to apologise to Nightime, Asteroide, Malacandra, GreyWanderer and everybody else whose time I’ve wasted, and I’d like to thank you all for your patience in the face of my stupidity.

But a final comment, if you’ll allow… what does this actually say about the Monty Hall problem? Not very much, is what I think. Obviously the mindset of the Bayesian approach is useful, and the value of identifying how much new information is made available at various stages of the process is clear. But at the end of the day, it comes down to what you know about the rules of the game. If you know Monty will open a losing door, you should switch. If you know he’ll open a random door, switching and not switching are equally good. But how much do you know about what Monty will do? Why not ask Monty Hall himself…

The following extracts are from “Behind Monty Hall’s Doors: Puzzle, Debate and Answer” in The New York Times, Sunday, July 21, 1991, which I found referenced here. The entire New York Times article is available in the Internet Archive here.

I have to admit that this argument started when I proclaimed, many posts ago, that it doesn’t matter whether Monty’s choice is random or not.

If Monty’s choice is random then, as expected, there is no profit to switching, and Monty spoils 1/3 of the games.

For those who maintain that Monty’s motivations don’t come into play, and that all you need to know is that in this instance he did show you an empty door and offer the switch, I have an example.

Let’s say you’re walking down a city street and come upon a game of three-card monte. The street hustler moves the three cards around very quickly, and you select which card might be the ace. The street hustler then turns over one of the cards that you didn’t pick, which is not the ace, and offers to let you switch your guess.

Should you switch? You’d be a fool to switch. I can calculate that your probability of winning would approach zero if you did. Can you see why? Can you see that the only difference between this and the Monty Hall problem is what assumption we make about the host’s motivations?

This would depend, of course, on whether Monty has free will :slight_smile: