Deadly Sixes Game: Explain the apparent probability paradox to me

If I’m one of the people who have played the game and I’m asking about it, then by the rules that means I won.

And if I’m somebody who is planning on playing the game, then my probabilities may be affected by the possibility that the game might end before it’s my turn.

So let’s say I’m currently in the middle of playing the game. (The room has good wifi.) I’ve entered the room with some unknown number of other people (I can’t tell how many, if any, because the room is infinitely large and we’re spread out, so I might be alone and in the first group, or there might be trillions of other people in here with me). The door has been closed and locked so I can’t leave, but the dice haven’t been rolled yet. And I’m belatedly wondering if this was a bad idea and what my odds are of winning or losing, which are now the only two possibilities I face.

I’m going by the title, which is “explain the paradox”. So I was focused on the 97% vs 10% difference the OP called out.

The answer to this is unequivocally 97%.

If I understood your game correctly, before the dice are rolled you have a 35/36 chance of survival (better odds than Russian roulette…)

One general problem that comes up with hypothetical games where the expectation is infinite is that in the real world nobody would host such a game, nor would anyone have an infinite amount of money to pay punters off if they did. Conversely, a game may be to your advantage, but only if you have billions of dollars (and the time) to keep playing and exploit that edge.

In that case, you have a 35/36 probability of winning. It’s irrelevant what happened before you entered the room or what will happen after you leave.

And here’s the paradox. There’s nothing special about me. The odds for every other player in the game are identical to mine.

So how do 90% of the players get killed in a game in which the players have a 97% chance of surviving?
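
A quick Monte Carlo sketch makes that tension concrete (hypothetical Python; play_one_game is my own helper, and I’m assuming the rules as stated, i.e. each round’s group is ten times the size of the previous one and a double six ends the game by killing the current group):

```python
import random

# Rough Monte Carlo sketch of the game as I understand it: each round's group is
# ten times the previous one, and the game ends (killing that group) on double sixes.
def play_one_game(rng):
    group_size, survivors, rolls = 1, 0, 0
    while True:
        rolls += 1
        d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
        if d1 == 6 and d2 == 6:
            return survivors, group_size, rolls      # winners, victims, total rolls
        survivors += group_size
        group_size *= 10

rng = random.Random(0)
won = killed = rolls = 0
for _ in range(10_000):
    w, k, r = play_one_game(rng)
    won, killed, rolls = won + w, killed + k, rolls + r

print("fraction of rolls that were safe:", 1 - 10_000 / rolls)       # ~35/36, about 0.972
print("fraction of all players killed:  ", killed / (won + killed))  # about 0.9
```

Every individual roll is safe about 97% of the time, and yet roughly 90% of everyone who ever walks into a room walks into the room that gets the double six.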

I am going to have to recommend a nice game of bassette or baccarat instead of your various Most Dangerous Games. Playing with the right crowd you might win more money, too.

You claim there is a paradox or contradiction, but there is no law of large numbers or anything else applicable that would make those two numbers equal to each other. For that matter, say you play a single flip of a coin and lose. Then 100% of the players lost a game they had a 50% chance of winning. (Something similar can happen in your game.)

Good catch. I think you’re right, and I likely made an error on my scratchpad.

Different priors, which is why I made them explicit.

Chance of winning, given that you are currently playing the game and the dice have not yet been rolled: 35/36.

Chance that you won, given that you were in the game and the game has ended: limit is 1/10.

The “given that” clauses are critically important. In the first, you’re still in the room and looking at the chance you’ll win. In the second, your fate has already been determined and we’re computing, after the fact, the chance that you were in a winning room.

To be blunt: different situations are different.
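
A back-of-envelope check of that 1/10 limit (my own arithmetic, assuming group sizes grow tenfold each round as in the puzzle): if the game ends on round n, the winners are the first n−1 groups and the losers are the final group of 10^(n−1).

```python
# Hypothetical check: fractions of losers and winners when the game ends on round n,
# with group sizes 1, 10, 100, ..., 10^(n-1).
for n in (2, 5, 10, 20):
    total = sum(10**k for k in range(n))    # everyone who ever played
    losers = 10**(n - 1)                    # the final group
    print(n, losers / total, 1 - losers / total)   # approaches 0.9 and 0.1 almost immediately
```

So, conditioned on the game having ended, the fraction of players who won converges to 1/10 no matter which round it ended on (apart from the degenerate case where the very first roll is a double six).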

I think you just used the wrong term. The expected number of rolls is 36, but the maximum number of rolls is infinite.

Yeah, that’s it! (But no, I was working it out by hand, easy to make mistakes while summing power series.)

The death bit is a distraction. Let’s say instead that the losers just lost 1 million dollars each, so they’re still available to be interviewed afterwards.

Which, I think, clarifies the problem considerably, because it turns it into one that most people (at least, most people here) are familiar with: the organizers of the game are running a Martingale scheme. They’re OK with most likely losing in any given round, because eventually they’ll win, collect a million each from everyone in the last round of players, and more than make up their losses.

But that only works if there’s always a next round. Which there isn’t, because eventually, one way or another, you’ll run out of suckers. And if the organizers of the game get to that point before they get lucky, they lose very big.

The increasing number of people is a red herring for your question. It’s just a way of making the probability that is based on multiple dice rolls look extremely counterintuitive.

Think about this:

Imagine only you are playing the game.

For each round, you get to choose whether to play again.

Each separate time you make that choice, your odds of winning/surviving are 35/36, or about 97%.

But, if you ask a different question, like, what is the chance that you’ll still be alive after 100 rolls, the chance is going to be infinitesimal. But if you’ve already survived 99 rolls, your chance of surviving roll 100 is 97%.

Doctor Strange? Loki? Everything Everywhere All At Once? The multiverse is hot. And it means you never run out of people.

Another way to think about it: by specifying that you always enter the room at some point in the game and the final round kills everyone in the room, it’s the same as saying there is no final roll. The last group is automatically killed.

So take the simple case of 11 people. If you are offered the option to play the game as the first person, your odds are good. If you are offered the option to play but who goes in group 1 will be chosen at random, you obviously wouldn’t do it. Extend that to any number of groups you want, and as long as the rule is that the last group gets killed, it’s always a bad choice if you might be in the last group.

But in the question as posed, it’s an infinite number of groups, so there’s no predefined sure death. Any one round has good odds. Yes, a bunch of people will die at the end, but there are still an infinite number of survivors so you have infinite odds of being one of the survivors of that population. Obviously, that doesn’t make sense because probabilities are meaningless when you’re dealing with infinite populations. But maybe it helps show why the problem as stated doesn’t lend itself to thinking in common sense terms.

No, your chance of being alive after one hundred rolls is just under six percent. Which is slim but hardly infinitesimal.
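
A quick sanity check of that figure, just raising 35/36 to the 100th power:

```python
print((35 / 36) ** 100)   # about 0.0599, i.e. just under six percent
```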

And you’re dealing with infinities, which are going to make your probabilities all wonky.

I rather like DPRK’s answer here:

Do you see a similar paradox in this example?

Yes, you’re right.

Nevertheless, it shows why it can be true that you have a 97% chance of winning any one round, but when you start looking at multiple rounds together, something different happens.

When you compare your single round odds to all of the people playing in any round, you are looking at multiple rounds together, not a single independent round.

Probability is pretty funky… and you can run into paradoxes like this depending on your stopping point in playing such a game. Another example is a method a friend of mine convinced me to use when we went to play blackjack in Atlantic City in college: his technique was to double the bet each time you lost, so that when you eventually won, you would end up ahead. This, of course, seems to contradict the fact that the house edge at the blackjack table means you should lose in the long run.

The fallacy in the logic is that, although you would eventually come out (slightly) ahead if you had an infinite amount of money, the reality is that with any finite amount of money, you will at some point suffer a catastrophic loss. (Alas, in our case, that catastrophic loss came pretty quickly.)
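
A toy simulation of the doubling scheme shows the same thing (this is deliberately not real blackjack; I’m modelling each hand as an even-money bet that wins with probability 0.49, and martingale_session is a made-up helper):

```python
import random

# Toy model of the doubling ("martingale") system: not real blackjack, just an
# even-money bet that wins with probability 0.49 (a hypothetical house edge).
def martingale_session(bankroll, base_bet, max_rounds, rng):
    bet = base_bet
    for _ in range(max_rounds):
        if bet > bankroll:               # a losing streak we can no longer double out of
            return bankroll, True        # return whatever is left, flagged as busted
        if rng.random() < 0.49:          # win: pocket the bet, reset to the base stake
            bankroll += bet
            bet = base_bet
        else:                            # lose: double the bet to try to win it all back
            bankroll -= bet
            bet *= 2
    return bankroll, False

rng = random.Random(1)
sessions = [martingale_session(1000, 10, 500, rng) for _ in range(2000)]
print("average final bankroll:", sum(b for b, _ in sessions) / len(sessions))  # below the $1000 you started with
print("sessions that busted out:", sum(busted for _, busted in sessions))      # the large majority
```

The occasional long losing streak drains the bankroll faster than all the small wins can rebuild it, which is exactly the catastrophic loss described above.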

I should add that this is related to some really important problems in probability and statistics, and it gets to why some mathematicians argue that you should use a Bayesian approach, which directly acknowledges the subjectivity, rather than traditional statistics, which hides it. Here is a simple example: let’s say a drug company wants to test their drug to show that it is better than the placebo at a 95% confidence level, i.e., they go for the generally accepted standard that there is only a 5% probability that the drug could have done this well by chance. You would think that this is a straightforward, objective thing… but herein comes the subtlety: imagine that the drug company’s strategy, if the drug fails this first trial, is to do a 2nd trial and then a 3rd trial and so on until they get the desired result. Then what is the probability that they will be able to show their drug is good? The answer, of course, is that this probability is 1, i.e., they will always be able to show this.

You might say, “Well, sure, just have the drug company tell me how many trials they had to do before they found a positive result and then all is good.” However, it really isn’t quite so simple. Even if they happen to “get lucky” and show the desired result on the 1st trial, if their plan was to do another trial if the first one didn’t work out, then their claimed result at the 95% confidence level for that one trial isn’t really at a 95% confidence level once you consider this plan. That is, the confidence level is based not only on events that happened but also on events that didn’t happen but could have, and that can be a pretty ill-defined concept without knowing what the space of things that could have happened actually is. Traditional statistics hides this ambiguity and subjectivity, whereas Bayesian statistics (which I must admit I don’t understand in detail) puts that subjectivity right out in the open.
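
Here’s a toy illustration of the “keep running trials until one works” strategy (hypothetical Python, not real clinical-trial statistics: the “drug” is a pure placebo, each trial enrolls 100 coin-flip patients, and we declare success whenever 59 or more of them “improve”, which happens a bit under 5% of the time by chance alone):

```python
import random

# Toy setup: a useless drug, 100 coin-flip patients per trial, and a naive
# success rule (59+ patients "improve") that fires just under 5% of the time by chance.
def one_trial(rng, n=100, threshold=59):
    improved = sum(rng.random() < 0.5 for _ in range(n))
    return improved >= threshold

rng = random.Random(2)
eventual_successes = 0
for _ in range(1000):            # 1000 drug companies, each with a useless drug...
    for _ in range(100):         # ...each willing to run up to 100 trials
        if one_trial(rng):
            eventual_successes += 1
            break

print(eventual_successes)        # close to 1000: nearly every company "shows" an effect
```

Nearly every company eventually “demonstrates” an effect that isn’t there, which is why the plan to keep trying has to be part of the analysis.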

Okay, I’ve been thinking about this.

I feel the problem has been in thinking of the players as rolling individually. That isn’t how it works; a single roll of the dice applies to a whole group of players.

And the way new groups are added in tenfold increments means the final group (the one that gets the double-six roll) will be the largest one, regardless of which group number that is.

So it’s not true that a player has only a 3% chance of “rolling” a double six (and a 97% chance of “rolling” some other number). A player actually has a 90% chance of being in the group that gets the double-six roll.