Well, I don’t know how to say this. You’re wrong. This is math. You don’t get to have opinions about whether you agree with something.
Sure I get to have an opinion. In this case, I can back it up with math.
We’ll have a cutoff of 11 people. What are the odds of dying?
As an individual, it’s \frac{1}{36}, as expected.
Looking globally, we have to do a bit more work. The expected total number of deaths is:
\frac{1}{36} \cdot 1 + \frac{35}{36} \cdot \frac{1}{36} \cdot 10 = \frac{386}{36 \cdot 36}
To get the individual odds, we have to divide by the average number of players. That’s:
\frac{1}{36} \cdot 1 + \frac{35}{36} \cdot 11 = \frac{386}{36}
Divide one by the other and get:
\frac{\frac{386}{36 \cdot 36}}{\frac{386}{36}} = \frac{1}{36}
Same result. If group-size weighting were actually the problem, this result would come out differently and the paradox would show up even with a finite cutoff. The numbers work out the same if you have a cutoff of 111, 1111, etc. people. Just not infinity.
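For anyone who wants to check other cutoffs, here's a quick Python sketch of the same calculation done with exact fractions. The round cap is the only input; the group sizes 1, 10, 100, … are as in the game, and the function name is just my own label.

```python
from fractions import Fraction

def death_odds(rounds):
    """E[deaths] / E[players] for the game capped at `rounds` rounds,
    with group sizes 1, 10, 100, ... and a 1/36 chance that any given
    round ends in double sixes."""
    p_loss = Fraction(1, 36)
    p_reach = Fraction(1)              # chance the game gets this far
    exp_deaths = Fraction(0)
    exp_players = Fraction(0)
    for k in range(rounds):
        group = 10 ** k
        exp_players += p_reach * group
        exp_deaths += p_reach * p_loss * group
        p_reach *= 1 - p_loss
    return exp_deaths / exp_players

for rounds in (2, 3, 4, 10):           # cutoffs of 11, 111, 1111, ... people
    print(rounds, death_odds(rounds))  # prints 1/36 every time
```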
The question I asked was why is there an apparent difference in the probabilities in the game. The answer to that question is because the dice rolls are applied to groups and not individuals. That answered the question with no need to invoke infinity.
Other people have asked other questions and those may have different answers. Some of those answers may be correct for those questions. But they shouldn’t take a different answer to a different question and then argue that it answers the original question.
Again, there is no difference in the probabilities if there is a finite cutoff. Thus, the fact that it’s taken to infinity is the source of the apparent paradox.
The different group sizes are not an issue at all as long as you are careful. If you have a group of 1000 people, it doesn’t matter if you roll the dice once for the whole group, twice for two subgroups of 500, twice for one subgroup of 1 and another of 999, or any other way. It’s totally irrelevant. The odds for an individual end up exactly the same.
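Here's a rough Monte Carlo sketch of that claim, if anyone wants to play with it; the particular splits and the trial count are just my choices.

```python
import random

def death_rate(group_sizes, trials=100_000):
    """Average fraction of the pool that dies when each subgroup gets its
    own independent roll of two dice (double sixes kills the whole subgroup).
    Since every subgroup faces the same 1/36 chance, this also equals any
    one individual's chance of dying."""
    pool = sum(group_sizes)
    deaths = 0
    for _ in range(trials):
        for size in group_sizes:
            if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
                deaths += size
    return deaths / (pool * trials)

# Same 1000 people, three different ways of splitting them:
for split in ([1000], [500, 500], [1, 999]):
    print(split, round(death_rate(split), 4))   # all land near 1/36 ~= 0.0278
```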
Let me try to tighten up the argument in part 2 of the OP to better illustrate the error.
Assume we have an a_n that is an individual’s odds of being in case n, and a b_n that is the probability of dying in that case. The total number of cases N may or may not be finite. The total odds of dying are then:
P = a_1 b_1 + a_2 b_2 + a_3 b_3 + ...
So far so good. We know from basic probability that:
a_1 + a_2 + a_3 + ... = 1
Also fine. And finally, the claim that:
b_n >= 0.9 for all n
Again, uncontroversial. From those, we argue that P >= 0.9, since P is a weighted sum of numbers that are each >= 0.9, with weights that sum to 1.
And as it happens, this argument works a lot of the time! For instance, suppose we have:
a_n = \frac{1}{2^n}
It’s then perfectly reasonable to conclude that our sum P is also >= 0.9.
But it falls apart when the pool size diverges to infinity. Because we then have:
a_n = 0
If I’m an individual in an infinite pool, then my odds of being selected for the first game are zero. They’re zero for all subsequent games as well. We know that, in principle, we should have \sum a_n = 1, but each individual a_n is zero. So you can’t draw the same conclusion as before.
You avoid the problem if you cap N. Then we don’t get into the weird state where a_n = 0 for every n and yet the a_n are still supposed to sum to 1. At least one of the a_n must be greater than zero if the total count N is finite.
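Spelling the finite-pool version out in symbols (nothing new, just the argument above written as one chain):

P = a_1 b_1 + a_2 b_2 + a_3 b_3 + ... >= 0.9 (a_1 + a_2 + a_3 + ...) = 0.9 \cdot 1 = 0.9

With an infinite pool, the step that uses a_1 + a_2 + a_3 + ... = 1 has nothing to stand on: every individual a_n = 0, so no partial sum of the a_n ever gets off zero, and the termwise bound never delivers P >= 0.9.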
I have come up with a new game: Deadly Bob.
There is a group of 1000 people – 1 person is named Jim and 999 people are named Bob. They all put their (first) names into a hat and whoever’s name is drawn out is killed. Names are drawn out until 999 people have been killed.
What is Jim’s chance of dying? 99.9% of the players die in every game, right? (Hint: that fact has nothing to do with Jim’s chance of dying.)
I’m not completely clear on the rules of Deadly Bob.
Everyone puts their name into the (presumably very large) hat. So there are 999 tags with Bob on them and one tag with Jim on it.
If the first tag that’s pulled out has Bob on it, does that mean everyone named Bob is killed or just the specific Bob that wrote his name on that specific tag?
Assuming everyone with the drawn name is killed, what happens in the unlikely event that the Jim tag is pulled out in the first round? Jim gets killed, obviously. But what happens next? The next tag is guaranteed to be a Bob tag. Does that mean you kill all the Bobs, in which case you’ve killed all 1000 players rather than just the required 999? Or do you stop, by some undetermined means, when there’s one Bob remaining?
I feel we have reached an impasse.
As a couple posts above suggested, modify the game:
Instead of the regular dice, roll a single 100-sided die. If it’s 91 or greater, the players live and get the prize and the pool expands to 10x for the next round. If it’s 90 or less, they are all executed.
90% chance of death in both cases now, right? No more paradox, right? Think on how that works, which is the basis of the impasse.
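Running the modified game through the same finite-cutoff arithmetic as earlier in the thread (my choice of a 111-person cutoff, so possible rounds of 1, 10, and 100 players, purely to have concrete numbers):

\frac{9}{10} \cdot 1 + \frac{1}{10} \cdot \frac{9}{10} \cdot 10 + \frac{1}{100} \cdot \frac{9}{10} \cdot 100 = \frac{27}{10} expected deaths

\frac{9}{10} \cdot 1 + \frac{1}{10} \cdot \frac{9}{10} \cdot 11 + \frac{1}{100} \cdot 111 = 3 expected players

\frac{\frac{27}{10}}{3} = \frac{9}{10}

Both views now give \frac{9}{10}.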
I still don’t get the paradox. Anyone entering the room has a 35/36 chance of winning. It doesn’t matter if it’s the first roll or the nth.
Once the game is complete, if you look at a list of all the players, then for any random person on the list there is about a 90% chance that they lost.
These are two different things.
The apparent paradox is that they don’t SEEM like two different things.
“Hey, do you want to play a game? The way the game works, is that the only thing that determines whether or not you win the game is a die roll. You lose on double sixes and win on anything else. That is literally the ONLY thing that determines if you win or lose”
“Hey, do you want to play a game? The game is entirely random, neither you nor any other player in the game has any decision making input whatsoever, and 9/10 players who play the game lose”
How can those both be the same game? That’s the apparent paradox, and I think the answer is much more subtle than just “counting by dice” vs “counting by groups” or anything like that… I think it can only be resolved by realizing that probabilities get wonky when infinities are involved.
You don’t see the paradox between having a 97% chance of winning and a 90% chance of losing a game? When it’s the same game?
You say they’re two different things, but how can there be a separation between winning and losing a game? It’s only one game, and there are only two mutually exclusive outcomes. You either win or lose; you always do one and you never do both. That means your chances of winning and losing should always add up to exactly 100%. Not 187%.
No, that really is the answer. It has nothing to do with infinity or probabilities getting wonky.
I can’t emphasize this enough. The answer was posted on the second day of this thread, back in April. And it’s math; once the answer was found, it’s the only possible answer. There are no alternative answers in math. So I don’t understand why people are still arguing about this three months later.
But those two probabilities are measuring different things, so there’s no reason to expect that they should add up to 100%.
The “97% chance of winning” is the probability that, if you enter the room, then you will win.
The “90% chance of losing” is the probability that, if you are chosen at random from all the people who play the game, then you will be one of the losers.
You can look up the projected probability, at this point in the season, that your favorite baseball team will win the World Series. For the Atlanta Braves, that currently stands at 24.6%. But there are 30 MLB teams, and 29 of them will not win the World Series, so the probability of losing is 29/30 = 96.7%. Is it a paradox that 24.6% and 96.7% don’t add up to 100%?
There is only a 90% chance of losing if there are infinitely many players. Then the final game will end in a loss, and 90% of all players will be losers.
(Even then, I wonder whether the vanishing chance of the game lasting forever, and the number of winners tending towards infinity, somehow counteract this so that the chance of winning is still 35/36.)
If there is a finite number of players, the last game that could be played has a 35/36 chance of winning.
It doesn’t matter how unlikely that last game is to happen; that’s countered by the humongous number of winners it could add.
Yes, everyone named Bob is killed. So, like in the Deadly Sixes game, the last round has a huge death toll. But whether the last round has a huge death toll or a small death toll doesn’t matter to Jim.
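If a sketch helps, here's roughly how I'd simulate it. The edge case where Jim's tag comes out first is handled by my own assumption that the drawing simply stops at 999 dead; the function name and trial count are mine too.

```python
import random

def jim_dies():
    """One game of Deadly Bob: 999 'Bob' tags and 1 'Jim' tag in the hat.
    If a Bob tag comes out first, every Bob is killed (999 dead) and Jim
    lives. If the Jim tag comes out first (my assumed edge case), Jim is
    killed and Bobs are then killed until the count reaches 999."""
    return random.randrange(1000) == 0   # Jim dies only if his tag is drawn first

trials = 1_000_000
print(sum(jim_dies() for _ in range(trials)) / trials)
# about 0.001, even though at least 99.9% of the players die in every single game
```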
Do you claim there’s a finite number of players such that a player has a 90% chance of losing the game, should they choose to play and assuming they are actually among the players chosen to play? What finite number?
My assertion is:
(1) For any finite player pool, a player who plays the game will have a 35/36 chance of winning. Period.
(2) For an infinite pool of players, there is a very convincing-seeming argument that makes it very much sound like a player who plays the game will have 9/10 chance of losing, leading to an apparent paradox.
(3) But the problem with (2) is that the convincing-seeming argument doesn’t actually work with infinities involved.
Possibly because it’s not in fact that simple?
I have no idea what you are saying here, but I’m pretty sure it’s wrong.
There is only one probability that matters: the probability that, if you play the game, you will win. And, for any finite pool of players, it is 35/36. And it’s 35/36 for all potential players.
Suppose we have 1,111,111 potential players. And we run the entire game, from start to finish, a gajillion times. But of course we only have a finite pool of players, so if no one has lost by the 7th round, the game will end with 1,111,111 winners and zero losers. But it could end on the third round, with 11 winners and 100 losers. It could end on the fifth round, with 1,111 winners and 10,000 losers, etc.
Then over all gajillion runs, we add up the number of winners and the number of losers, and keep a running total. There will be, on average, 35 times more winners than losers. And in fact, this will be true of any game of the form “choose some people at random, of any group size, then roll two dice, and if it’s 66 they lose, otherwise they win; and then something else happens to determine the next group of players”. No amount of futzing with group sizes will ever change that… still assuming there’s a fixed finite population of potential players.
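Here's a rough Python version of that repeated-run tally, for anyone who'd rather not take it on faith; 200,000 runs stands in for the gajillion, and the pool size and function name are just my choices.

```python
import random

def run_game(pool=1_111_111):
    """Run one full game over a finite pool: groups of 1, 10, 100, ...
    play until either double sixes come up (that whole group loses) or
    the pool runs out (everyone who played is a winner)."""
    winners = losers = 0
    group, used = 1, 0
    while used + group <= pool:
        used += group
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:
            losers += group
            return winners, losers
        winners += group
        group *= 10
    return winners, losers

total_w = total_l = 0
for _ in range(200_000):          # a (small) gajillion runs
    w, l = run_game()
    total_w += w
    total_l += l
print(total_w / total_l)          # should hover around 35
```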
Yes, the last round is the key. With any finite pool, if you hit the limit, you have a game with a large number of winners and no losers. And due to the scaling in game size, it totally overcomes the fact that in all games that ended before hitting the limit, almost everyone died. In the end, the numbers come out to be the expected 1/36.
So of course the numbers will come out wrong if you only count those games that ended before you reach the limit. And those are the only ones that it’s possible to count in the infinite version, so the numbers come out wrong there, too.
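To put exact numbers on that, here's a sketch using the 1,111,111-player pool from a few posts up, in exact fractions so rounding can't hide anything. It tallies the losers and the full rosters two ways: once counting only the runs that end in a loss, and once counting every run including the capped no-loss ones.

```python
from fractions import Fraction

p = Fraction(1, 36)
rounds = 7                                   # the 1,111,111-player pool from above
reach = Fraction(1)                          # chance the game gets to this round
loss_players = loss_losers = Fraction(0)     # tallies over runs that end in a loss
for k in range(rounds):
    group = 10 ** k
    roster = (10 ** (k + 1) - 1) // 9        # 1 + 10 + ... + 10^k players so far
    loss_players += reach * p * roster       # this run ends in a loss here
    loss_losers += reach * p * group
    reach *= 1 - p

all_losers = loss_losers                     # the no-loss runs add no losers
all_players = loss_players + reach * ((10 ** rounds - 1) // 9)

print(float(loss_losers / loss_players))   # ~0.90: counting only runs that end in a loss
print(all_losers / all_players)            # exactly 1/36 once the capped no-loss runs count too
```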
So, just for fun, I threw together an Excel spreadsheet that does this problem for 20 iterations of the game. I ran that 30,000 times and the number of winners divided by the number of players is indeed very close to 35/36, as expected from the math. Math works!
So, I agree with others here that the key to this seeming paradox is that there are infinite players to choose from. If you stop at any finite number (even 10^19, in my 20 game sequence), you end up with no losses many times, exactly the right number of times so that your chances of winning are 35/36.
Infinity is the key to this paradox.