Deadly Sixes Game: Explain the apparent probability paradox to me

I just watched a video on this. The video presented the game and the apparent probability paradox but didn’t attempt to resolve it.

Here’s how the game works. A group of people enters a room (starting with a single person). A pair of dice is rolled.

If anything other than a pair of sixes is rolled, everyone in the room is given a million dollars and leaves. A new group is brought into the room and the game is played again. An important point is that each new group is ten times as large as the previous group.

If the result is a pair of sixes, poison gas is released into the room and everyone dies. The game is then stopped.

For purposes of this scenario, assume there is an infinite supply of people, money, poison gas, and space in the room. Also nobody plays the game more than once. Surviving and getting a million dollars is winning; dying is losing.

So what are the odds if you’re playing?

One way is to figure there are thirty-six possible results from rolling two dice. And only one of them will kill you. So you have a 97% chance of winning and a 3% chance of losing (rounded to the nearest percentage point). That seems like pretty straightforward math.
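If you want to sanity-check that arithmetic, a quick Python sketch enumerating all 36 outcomes gives the same numbers:

```python
# enumerate the 36 equally likely outcomes of rolling two dice
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
losing = sum(1 for o in outcomes if o == (6, 6))

print(losing / len(outcomes))      # 1/36 ≈ 0.028 -> about a 3% chance of losing
print(1 - losing / len(outcomes))  # 35/36 ≈ 0.972 -> about a 97% chance of winning
```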

But now look at some possibilities. Here's how many people enter the room in each round:

1: 1
2: 10
3: 100
4: 1000
5: 10000
6: 100000
7: 1000000

Let’s say the double sixes are rolled in the first round. That means one person played the game and lost. 100% of the people who played were killed.

If the double sixes were rolled in the second round, then a total of eleven people played and ten of them died. Which means 91% of the people who played were killed (rounded to the nearest percentage point).

If the double sixes were rolled in round seven, then a total of 1,111,111 people played and 1,000,000 of them died. Which means 90% of the people who played were killed.

No matter how many rounds are played, more than 90% of everyone who played will have been in the final group, which died. Which means your chance of winning is less than 10%.
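You can verify that pattern with a few lines of Python; the formulas just restate the group sizes listed above:

```python
# if double sixes first appear in round n, the final (dead) group is 10**(n-1)
# and the total number of players is 1 + 10 + ... + 10**(n-1) = (10**n - 1) // 9
for n in (1, 2, 7, 20):
    total = (10**n - 1) // 9
    dead = 10 ** (n - 1)
    print(f"round {n}: {dead / total:.4f} of all players died")
# prints 1.0000, 0.9091, 0.9000, 0.9000 -> always more than 90%
```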

It’s not really a paradox. It’s comparing different probabilities and implying they’re the same thing, but it’s an apples-and-oranges comparison.

  • Scenario 1: you are playing the “game” once, so your odds of winning are 97%.
  • Scenario 2: you are a member of a population where everyone played the game and the final group lost, so the odds you were a winner are <10%. Note that this is possible to calculate only at the conclusion of the game when the population size is known.
  • Scenario 3: you are a member of a population where people will play the game until it ends. Because you don’t know how many rounds will be needed, the population size is infinite and probabilities don’t make sense.

Those are different things, and you can’t compare them. It’s a fun problem - it reminds me of the tipping riddle with the extra $1.
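One way to see that the scenarios really are different questions is to simulate the game. Here’s a rough Monte Carlo sketch (the seed and the cap on rounds are arbitrary choices of mine, since a real run could in principle go on forever):

```python
import random

random.seed(1)                     # arbitrary seed for reproducibility
TRIALS, MAX_ROUNDS = 100_000, 50   # cap is arbitrary; a real run could go on forever

fatal_rounds = 0
total_rounds = 0
kill_fractions = []                # per completed game: dead players / total players

for _ in range(TRIALS):
    group, total_players = 1, 0
    for _ in range(MAX_ROUNDS):
        total_players += group
        total_rounds += 1
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            fatal_rounds += 1
            kill_fractions.append(group / total_players)
            break
        group *= 10
    # trials that hit the cap never "ended," so they contribute no kill fraction

print(fatal_rounds / total_rounds)                # ≈ 1/36 ≈ 0.028 per room
print(sum(kill_fractions) / len(kill_fractions))  # > 0.90 in every completed game
```

The first number is Scenario 1 (a given room’s chance of dying, about 2.8%); the second is Scenario 2 (the share of players killed once a game has actually ended, always over 90%). Both are “the odds,” just of different things.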

I’m not following your point. You are a person playing the game once and a person who is a member of the population playing the game. You are both of these things simultaneously. And you are looking at a single question in both roles: what are your odds of winning or losing?

I don’t understand how you can have one answer to this problem if you ask it in one role and a different answer if you ask it in the other role. It’s still a single answer applying to a single situation, whichever role you assume when you ask the question.

It’s not that you are in two different roles and get different answers depending on the role. It is two different questions.

What are the odds that someone who enters the room will emerge a winner? 97%.

Once the game is complete (meaning a group has lost), what are the odds that Person A was in a winning group? Less than 10%.

The riddle is trying to blend those two questions into a different one: what are your odds of winning if you are in a population that will all be playing the game until it ends? This doesn’t have an answer. The odds are unknowable, because depending on how many rounds it takes, you might win, you might lose, or the game might end before you get a chance to go in the room. You don’t know how big the population is - that’s only known after the game is complete.

I’m still not seeing it.

Every player is facing the same situation and therefore the same probabilities. So how can you reconcile saying that each player has a 97% chance of winning but less than 10% of the players will win?

I feel that the resolution is that one of the two answers isn’t correct but I don’t see why either one is incorrect.

One is a question about an independent roll of the dice, and one is about the final result of multiple rolls chained together.

You can simplify it by just looking at the dice rolls.

What are the chances that the first roll is something other than double sixes?

35/36

What are the chances that the second roll is something other than double sixes?

35/36

What are the chances that the fifth roll is something other than double sixes?

35/36 or about 97%

What are the chances that there will be 5 non-double-six rolls in a row?

About 52.5 million/60.5 million, or around 87% (I did some rounding).

Those are different probabilities because you are looking at different things. It could be the thousandth roll, and the chances it will be a non-double-six roll are still 35/36. The chance of getting 1000 non-double-six rolls in a row is about 0.00000000006%.
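Here are those numbers in Python, if you want to check my rounding:

```python
p = 35 / 36                # chance a single roll avoids double sixes
print(f"{p:.4f}")          # 0.9722 -> about 97%
print(f"{p ** 5:.4f}")     # 0.8686 -> about 87% for five safe rolls in a row
print(f"{p ** 1000:.2e}")  # 5.77e-13 -> about 0.00000000006% for a thousand in a row
```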

Each round is entirely independent of every other round. A person participates in only one round, so their individual risk is 1/36.

The population participates in a series of related rounds, and they are not independent of each other. If a round doesn’t result in 6/6 then another round occurs. Looking at the probabilities for a population subjected to a series of interconnected tests is different.

Also, suppose your population is 1 million people. If the 3rd round results in 6/6 then 100 die and 999,900 survive (the 11 winners from the first two rounds plus the 999,889 who never play). Do you consider those odds?

Your starting population has to be unlimited since you keep playing until 6/6 comes up. It’s kind of a Martingale problem. You’re likely to run out of people on the planet before you hit.
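To put a rough number on “run out of people”: taking the whole planet as the pool (call it 8 billion, which is my assumption, not part of the original setup), you can’t even stage round 11:

```python
population = 8_000_000_000  # rough world population; an assumption for scale
group, total, rounds = 1, 0, 0
while total + group <= population:
    total += group   # this group plays
    group *= 10      # next group is ten times larger
    rounds += 1

print(rounds, total)    # 10 rounds, using 1,111,111,111 people; round 11 needs 10 billion more
print((35 / 36) ** 10)  # ≈ 0.75 -> three times out of four the dice never hit in those 10 rounds
```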

“Chance of winning,” without any other qualification, is ill-defined.

Suppose I ask you, “What is a person’s chance of winning the lottery?” The answer will be different depending on whether you interpret it to mean “What is the chance that a person who buys a lottery ticket will win?” vs. “What is the chance that a randomly selected person will win, without knowing ahead of time whether that person did or did not buy a ticket?”

The problem is that the expected value of the number of people who will die, and the expected value of the total winnings, are both infinite. Trying to treat infinite quantities as ordinary numbers often leads to paradoxes.
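To spell out why the expectation diverges: the game ends in round n with probability (35/36)^(n-1) × (1/36), and round n kills 10^(n-1) people, so the nth term of the expected-death sum is (1/36) × (350/36)^(n-1), and 350/36 > 1. A quick sketch of the partial sums:

```python
# partial sums of the expected number of deaths:
# sum over n of P(game ends in round n) * deaths in round n
partial = 0.0
for n in range(1, 21):
    partial += (35 / 36) ** (n - 1) * (1 / 36) * 10 ** (n - 1)
    # each term equals (1/36) * (350/36) ** (n - 1), and 350/36 > 1

print(f"{partial:.3e}")  # already astronomically large and still growing without bound
```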

That’s why I’ve focused the question to the odds that apply to a player. A player’s outcome is only affected by one dice roll.

I thought I had described the situation enough to allow people to determine the relevant odds. What details do you feel need to be added before the odds can be determined?

Then you have to disregard all other rolls and results. The player has the same chance as their own group, and all groups have the same chance.

Looking at all players together is chaining multiple results.

You did specify the situation fully. And as long as the supplies of money and victims are both infinite, the odds cannot be determined. This shouldn’t be surprising, because in any real-world situation, neither of those quantities can be infinite.

The problem is that your two different probabilities are answers to two different probability questions. You have to be very clear about what question you’re asking.

In what I’ve quoted above, the second sentence does not follow from the first. Or, the only way it does is if you interpret the second sentence to mean “If you are one of the players chosen at random from all the people who play the game before it ends, what is the probability that you will be one of the winners?” Which is a different question than “If you are a player, what is the probability that you win the particular round that you are playing in?”

To be clear, the problem is not that you did not sufficiently describe the situation, but that you did not sufficiently clarify exactly what probability question you were asking about the situation.

You are playing this game. What are your odds of winning?

How do you define playing the game and winning? Is it walking into the room and coming out alive? That’s 97%. Is it being a part of the population that will maybe walk into the room, depending on what happens to the earlier groups? That can’t be determined.

To be clear, you have defined the situation well. You have not defined what question you are asking.

I don’t feel this is true. You can, for example, determine the odds of dealing a straight flush even though theoretically you could deal yourself an infinite number of hands and never have a straight flush. Determining the probability of something happening is not the same as determining what will happen.

Chance of winning, given that you are currently playing the game and the dice have not yet been rolled: 35/36.

Chance that you won, given that you were in the game and the game is still ongoing: 1.

Chance that you won, given that you were in the game and the game has ended: limit is 1/10.
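That 1/10 limit is just the geometric group sizes at work; a quick check, taking n as the round in which the game ended:

```python
# winners are everyone who played except the final group of 10**(n-1)
for n in (1, 2, 3, 7, 15):
    total = (10**n - 1) // 9
    winners = total - 10 ** (n - 1)
    print(f"ended round {n}: P(you won) = {winners / total:.8f}")
# 0.00000000, 0.09090909, 0.09909910, 0.09999991, ... -> approaches 1/10 from below
```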

A well-defined probability question typically isn’t just “What is the probability that A will happen?” but “What is the probability that, if you do A, then B will happen?”

A probability experiment has to have well-defined outcomes, so that whenever the experiment is performed, the result is exactly one of the possible outcomes.

In one case, your probability experiment is, essentially, rolling a pair of dice, and you are interested in whether or not double sixes come up.

In the other case, the experiment is selecting one person at random out of all the people who “play the game,” and the outcomes involve the person being chosen from the first group, or the second group, and so on.