Deadly Sixes Game: Explain the apparent probability paradox to me

My simulation gives two kinds of results. If I compute the mean number of losers divided by the mean of (winners + losers), I get 1/36 regardless of the (finite) limit on the number of rounds. The mean numbers of winners for 1-, 2-, 3-, and 4-round games are 0.9718, 10.4356, 102.3750, and 996.1660, while the mean numbers of losers are 0.0282, 0.2896, 2.9552, and 27.7942.

But if I compute the mean in another way (first calculating the ratio of losers to losers + winners for each game, then averaging that ratio), I get a ratio that increases with the limit on the number of rounds: 0.0282 for one round, 0.0513 for two, 0.0756 for three, and 0.0986 for four.
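For anyone who wants to reproduce this, here's a minimal sketch of the simulation as I understand the setup. The group sizes (1, 10, 100, …) and the double-sixes ending rule are my assumptions, but they're consistent with the numbers above:

```python
import random

def play(max_rounds):
    """One game: groups of size 1, 10, 100, ... enter in turn.
    Each round two dice are rolled; double sixes (prob 1/36) makes the
    current group the losers and ends the game.  The game also stops
    after max_rounds, in which case the last group still wins."""
    winners = 0
    for k in range(max_rounds):
        group = 10 ** k
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            return winners, group          # current group loses
        winners += group                   # group survives and wins
    return winners, 0                      # round limit hit, nobody loses

def summarize(max_rounds, trials=200_000):
    """Compare the two ways of averaging over many games."""
    results = [play(max_rounds) for _ in range(trials)]
    mean_w = sum(w for w, _ in results) / trials
    mean_l = sum(l for _, l in results) / trials
    ratio_of_means = mean_l / (mean_w + mean_l)
    mean_of_ratios = sum(l / (w + l) for w, l in results) / trials
    return mean_w, mean_l, ratio_of_means, mean_of_ratios
```

The ratio of means stays pinned near 1/36 for every round limit, while the mean of per-game ratios climbs as the limit grows, exactly the discrepancy described above.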

I think the discrepancy we’re seeing between the two ways of calculating is a variant of the St. Crispin’s Day paradox. “Everyone knows” that the statements “A or B”, “If A then C”, and “If B then C”, taken together, imply C… except that, as Shakespeare pointed out, they don’t, if you take A to be “we will win the battle”, B to be “we will lose the battle”, and C to be “it is better to have fewer soldiers”. The problem is a lack of independence: C helps determine which of A or B you get.

Likewise, in this problem, the number of rounds (within some finite limit) that are actually played and the luck of the players are not independent.

Agreed

Actually, I think the answer is that:

a) The probability that a person who enters the room dies is 1/36.
b) A game that goes on until a group loses will end with about 90% of all the people who entered the room dying.
c) There is no discrepancy between the two, because games are not people.

It’s just another infinity problem. In the two-envelope problem, if you set an upper bound on the amounts, say $1000, then when you open an $800 envelope you already know it’s better to hold. That bias counteracts the fact that below half the limit, it’s better to switch. No paradox.

If there is no limit, there’s no way to have a flat distribution of envelope values anyway. And if you do manage to come up with a non-flat, no-limit distribution, there’s still no paradox, since the expected value of a switch will depend on the value you see.
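To make the bounded case concrete, here's a rough sketch. The $1000 cap comes from the example above; the uniform prior on the smaller amount is my own assumption for illustration:

```python
import random

LIMIT = 1000  # cap on the larger envelope, per the $1000 example

def expected_switch_gain(observed, trials=100_000):
    """Monte Carlo estimate of E[other - observed | observed], where the
    envelopes hold (x, 2x), x drawn uniformly from 1..LIMIT//2, and you
    open one of the two at random.  Uses simple rejection sampling."""
    gains = []
    for _ in range(trials):
        small = random.randint(1, LIMIT // 2)
        pair = (small, 2 * small)
        seen = random.choice(pair)
        if seen == observed:
            other = pair[0] if seen == pair[1] else pair[1]
            gains.append(other - seen)
    return sum(gains) / len(gains) if gains else None
```

Above half the cap (e.g. $800) the other envelope must be the smaller one, so switching always loses; below half the cap switching gains on average, and the two effects cancel overall.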

In a way, this is like a Martingale roulette system. You put your bets on X, and each round when you lose, you increase the bet such that when X finally comes up, you win an arbitrary multiple of the total amount of money you bet.

That multiple is unrelated to the probability of winning any spin, it’s related only to the payoff rate (in terms of maximum multiple) and the amount you increase the bet each round.

Yeah, I already mentioned that. In the variant where the losers pay money instead of dying, it’s exactly a Martingale, for the people running the game.

Yes, this is the answer. I’m not sure why people find it confusing.

You don’t? I mean, it’s VERY confusing. I have an undergraduate degree in math (although I’m sure others in this thread are far more expert), I’ve been fascinated by probability problems all my life, and I’ve spent a TON of time thinking and reading about things like the two-envelope problem. While I think I understand the dice-rolling problem pretty well, I sure as hell think it’s confusing.

If we rephrase it slightly so that instead of killing people, the last group just doesn’t win, then the following questions all sound very very similar, but are not identical:
(1) You have just entered the room, the dice are about to be rolled, what’s the probability you will be a winner?
(2) The entire game has not yet started, but you are on the list of players who will get to play, at position (n). What’s the probability you will win / lose / not get to play because the game ends first?
(3) You’re on the list to play this game. Then suddenly you wake up in a bed, and people tell you that you got as far as step (1), were overcome by excitement, fainted dead away, and completely lost your memory of what happened next, but even though you fainted, your play-through still “counted”. What’s the probability you won?
(4) Same as (3), but they give you the additional information that the game was in fact played through to conclusion and has ended. Does that change the probability?

I’m very confident that (1) is 35/36. I’m very confident that (2) is just some math, although I haven’t done it, with no ambiguity. And I’m reasonably certain that (4) is 1/10, although I could be convinced otherwise. (3) is the trickiest one, and may not be meaningfully answerable without additional clarification about what information would or would not have been volunteered, etc.

Let’s approach it from a slightly different angle. At the point when you are in the room, dice about to be rolled, you are certainly playing; someone calls you up on your phone and offers you a side bet… if you win, they will pay you $1000. If you lose, you will pay them $1000. Should you take that bet? Yes, obviously, right? You’re winning $1000 35/36 of the time and coming way out ahead on EV.

But… if you’re a very rich observer watching the game from the outside, with a hefty bankroll, should you offer that bet to all players? Because if they all take it and the game ends, you will come out WAY ahead. And you don’t even need an infinite bankroll, because you can pay the winners out of what you win from the losers.

How can that be? How can it make perfect rational sense for both parties to take a bet like that? The answer is: it can’t. So where’s the error in the math or the assumptions? It’s hidden in the assumption of an infinite population, the assumption that the game will always complete, and so on.

People who have been discussing Martingales are right. But let me try to make it a bit more clear…

If I say to you “hey, I’m going to the casino tomorrow and am going to run a Martingale, starting with $100 bets, doubling each time, etc… want to go in with me on it, 50/50, sharing wins and losses?”, then clearly you should not take that offer. Martingales do not win in the long term. If, however, I say to you “hey, yesterday I went to the casino and Martingaled, and the Martingale completed… want to retroactively go in on it 50/50?”, then you should say yes, because any completed $100 Martingale nets exactly $100.
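Here's a sketch of both offers. The even-money European-wheel bet and the cap of ten doublings (i.e. a finite bankroll) are my assumptions:

```python
import random

P_WIN = 18 / 37  # even-money bet on a European roulette wheel

def martingale(base=100, max_doublings=10):
    """One session: bet `base`, double after each loss, stop on the first
    win or after max_doublings straight losses (the bankroll runs out).
    Returns (net_profit, completed)."""
    bet, spent = base, 0
    for _ in range(max_doublings):
        spent += bet
        if random.random() < P_WIN:
            return 2 * bet - spent, True   # a win pays back 2x the last bet
        bet *= 2
    return -spent, False                   # busted: lost every bet
```

Every completed session nets exactly $100, so retroactively buying into a session you already know completed is free money. But averaged over all sessions, the rare bust (here losing $102,300) drags the expected value below zero, so buying in before the session runs is a bad deal.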

The difference between those two questions is approximately the difference between the original two questions in this thread.

To look at it yet one more way, let’s say there is a fixed population of people who are signed up to play the game. Could be 100 people. Or 1,000,000. Or a googolplex. But some fixed finite number. The moment you restate the problem like that, the contradictions vanish, and the apparent odds differences between the viewpoints (“you know you will play the game” vs. “you know you played the game but fainted”, and so on) go away. Which just shows that the reason there’s an apparent contradiction is blithely saying “there’s an infinite population”. The moment there’s an infinite population, a lot of the seemingly obvious math goes out the window.

I thought so, too. But it turns out that even once you do that, there’s still weirdness there. Try it yourself, for the case of 11 potential players (so, a maximum of two possible rounds): You’ll still find a discrepancy.

Which does mean that I definitely agree with this:

Not sure what you mean by “try it yourself”? What discrepancy?

Calculate the probability that any given player is a winner, and calculate the expected number of winners.

Seems like it works out to me. To make the math easier, assume a D10 and we lose on a 10. You get $1000 on a win, otherwise nothing. 11 people in the pool.

Let’s look at it from a payout perspective.
10% of the time, no one wins.
9% of the time, the first person wins and the rest get nothing (90% * 10%).
81% of the time, everyone wins and the first person gets paid double (90% * 90%).
Total payout is 9%*$1000 + 81%*($1000 + 11*$1000) = $9810.

Then, from a player perspective:
9.1% chance of being picked for the first round (1/11)
81.8% chance of being picked in the second round (10/11 * 90%)
9.1% chance of never being called
Expected payout is 9.1%*(90%*$1000 + 90%*90%*$1000) + 81.8%*(90%*$1000) = $891.82.
Total expected payout for all 11 players is then $9810, the same as before.
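The arithmetic above can be checked with exact fractions instead of rounded percentages (same assumed rules: a pool of 11, rounds of size 1 and 10, $1000 per round survived):

```python
from fractions import Fraction

p = Fraction(9, 10)   # a round survives (no 10 rolled on the D10)
PRIZE = 1000

# Game's-eye view: 10% nobody wins; 9% only player 1 wins;
# 81% everyone wins and player 1 is paid for both rounds.
total_payout = p * (1 - p) * PRIZE + p * p * (PRIZE + 11 * PRIZE)

# Player's-eye view: picked first with prob 1/11, or picked for the
# second round with prob 10/11 * 90% (round 1 must survive first).
per_player = (Fraction(1, 11) * (p * PRIZE + p * p * PRIZE)
              + Fraction(10, 11) * p * (p * PRIZE))
```

The exact per-player value is 9810/11 dollars, about $891.82, and 11 of those is exactly the $9810 game-level payout, so the two perspectives agree.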

The problem is people are focused too much on the dice rolls and not enough on the grouping.

It’s true that for any group, there’s only a 1-in-36 chance that it will be the losing group, the one that rolls the double six. But the groups are not equal in size. The game is set up so that the final (losing) group is roughly nine times bigger than all of the other groups combined. Which means that for any individual playing the game, the odds say there is a ninety percent chance that they will be in the losing group.
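The ninety percent falls straight out of the geometric growth. A tiny sketch, assuming group sizes that grow 10x each round:

```python
def losing_share(n):
    """If each group is 10x the previous (sizes 1, 10, 100, ...) and the
    game ends at round n, return the losing group's share of everyone
    who played."""
    groups = [10 ** k for k in range(n + 1)]
    return groups[-1] / sum(groups)
```

For a game that ends in round 1 the losing share is 10/11, and it falls toward 0.9 from above as the game runs longer, whenever it ends at all.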

IF the game ends. Which there is no guarantee it will, for any fixed population of potential players.

Which is why I specifically said in the OP that you should assume an infinite supply of people exists.

But that means the final two groups could both contain an infinite number of people. Those two infinities are of the same kind (as opposed to, say, all integers versus all reals), and multiplication breaks down for infinities of the same kind: specifically, 10 times infinity = infinity. So it’s not guaranteed that the last group will have 10 times as many people.

EDIT: Or 9 times, or whatever.

No. There’s an infinite supply of people. But every group contains a finite number.

The expected number of people in the game is infinite, though. There’s no way to even pick out a random player in advance to track how the game plays out for them.

Why does every group contain a finite number?