No, this is not correct.
It’s a given that you play the game. I’ve never asked about the odds that affect people who aren’t playing the game because their odds are obvious.
If I ask what the odds are of you rolling snake eyes when you roll two dice, I don’t need to know what the odds are of you rolling snake eyes if you don’t roll any dice.
No, I have been consistently asking a single question. Other people have wandered off the subject.
My question was setting up a situation and asking what the odds were for a person in that situation. I originally felt there were two different methods of answering that question and the two methods were producing two different answers. I was asking why.
And I’ve solved it. One of the methods wasn’t applicable to the problem. The other method was and the answer produced by that method was the correct answer.
That solution had nothing to do with infinity or people who weren’t playing the game or whether it was before or after the game.
@Little_Nemo, I’ll try to put this delicately, but my patience is wearing thin, as I suspect everyone else’s is. In your drive to argue with everyone, you aren’t seeking to understand the responses we are all giving. You are wrong on this, plain and simple.
If your definition of “playing the game” means entering the room, then you cannot tell us how many people are playing the game before it starts, and you cannot calculate that 90% of the people playing will die. If you’d like to understand why this is, we can probably help. If you just want to argue, then you’re on your own.
Put another way, you have solved the question you were looking to solve (given that a person plays the game, what is the probability that they are in the losing group).
But you also asked a different question in the subject line (explain the paradox) and then argued incorrectly with a lot of people who tried to do that.
I agree you can’t tell how many people are playing the game before it starts. I’ve never questioned that.
But you’re wrong if you think you can’t calculate the probability of the outcome before the game starts. You’re not arguing with me. You’re arguing with John Graunt, Blaise Pascal, Pierre de Fermat, Christiaan Huygens, James Bernoulli, Abraham de Moivre, and Pierre-Simon Laplace.
I do agree there doesn’t seem to be a point in continuing. I’m done here.
I think this is correct, but it’s not immediately obvious. The tricky thing is that both of the probabilities presented in the OP are calculated from finite numbers, so it doesn’t seem like infinity is involved. Yet I don’t think I can come up with a similar situation that presents the same “paradox” the OP is seeing without involving an unlimited (i.e., infinite) pool of potential players. (I may be saying the same thing here as what @Thing.Fish said above in Post #100.)
I haven’t said much in this thread lately, because I’ve been trying to think about it more, and I haven’t yet completely gotten my head around it. I’ll say this, though: it’s a devilishly tricky problem, in that it has at least two tricks to it. One of the tricks is what I’ve been saying from the start, that it has that impossible infinity built into it, but I don’t think that’s actually all of it.
The interesting question, IMO, is: given a group size of X, what’s the probability you will be called into the room?
If we limit the question to people being called into the room (which we can’t know a priori, but we can understand), then it’s always 90%, by definition. As mentioned above, what happens in the room is irrelevant, as long as you increase the group size by 10× each round and the game ends by killing the entire last group. There’s no conflict with the idea that any person in the room has a 1-in-36 (about 3%) chance of dying; you could make it a single coin flip and the results would be exactly the same.
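To see how the two numbers coexist, here’s a minimal Monte Carlo sketch in Python (the 10-round cap and the trial count are my own assumptions, since the uncapped game needs an unbounded pool of players). Across all players the death rate comes out near 1/36, but restricted to games that actually end in a kill it comes out near 90%:

```python
import random

def play_game(max_rounds=10):
    """One capped snake-eyes game: groups of 1, 10, 100, ... enter in turn.
    Returns (winners, losers) for that game."""
    group, winners = 1, 0
    for _ in range(max_rounds):
        if random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
            return winners, group        # snake eyes: the current group dies
        winners += group                 # the current group survives and wins
        group *= 10                      # next group is ten times bigger
    return winners, 0                    # cap reached, nobody dies

random.seed(0)
total_w = total_l = 0                    # totals over every game
kill_w = kill_l = 0                      # totals over games that ended in a kill
for _ in range(200_000):
    w, l = play_game()
    total_w += w
    total_l += l
    if l:
        kill_w += w
        kill_l += l

print("death rate over all players:        ", total_l / (total_w + total_l))
print("death rate within games with a kill:", kill_l / (kill_w + kill_l))
```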
The probabilities are being calculated on different events; one is a single roll of the dice, the other is a series of dice rolls with specific rules placed on the outcomes.
Now if you want a real probability paradox, I recommend the Two Envelopes Problem.
I barely know 8th grade math, but the main trick surely is the multiplier part, which depends on the impossible infinity, right? You could do anything with probabilities if you could guarantee that when an unlikely event happened, the circumstances would be such that they would apply to almost everyone in the “game” overall.
When the 00 comes up, you’re going to have 90% of your chips on it.
I think this is an apt analogy. I’m still working on the algebra, but if I’ve not made a mistake, the expected numbers of winners and losers are each non-converging series (as the number of rounds increases), so it’s not surprising that the answers for any finite number of rounds are very different from the answer for a hypothetical infinite number of rounds, just as the number of balls in the bin increases with time but becomes zero at infinity.
I don’t even understand that one. Let’s imagine that one envelope has $10, and another has $20. (Meaning each envelope has one bill to keep things as identical as possible.)
When randomly picking an envelope, they both have an expected value of $15. Switching does not change that.
I think the two envelopes premise is trying to go for something that is different than the given premise:
I give you a $10 bill. I then present two identical security envelopes, one with a $5 bill, the other with a $20 bill. You can pay that $10 bill I just gave you to take one of the envelopes, or just keep the $10.
The expected value of both envelopes is $12.50, so it makes sense to hand over the sawbuck and take your chances. But then, of course, the paradox evaporates because whatever envelope you chose has an expected value of $12.50, which means you have no incentive to then trade back your envelope for the original $10 bill.
Once you have chosen an envelope, the other has either 2x (if you’ve chosen the lower value) or .5x (if you’ve chosen the higher value), meaning that the OTHER envelope always has an expected payout of 2.5x/2, or 1.25x. It always makes sense to switch, forever, if given the opportunity.
Once you’ve chosen, the other envelope seems to be worth more than yours even though you know there’s a 50% chance it is worth less. The paradox is how can switching always be better for you?
It’s not. Let’s call the envelope you picked A. The paradox is that envelope B has an expected value of (2A + A/2)/2 = 1.25A, just like the problem says.
However, that exact same logic also tells us that envelope A (which we picked) has an expected value of (2B + B/2)/2 = 1.25B.
So do you want to keep your envelope with an expected value of 1.25B, or switch to the envelope with an expected value of 1.25A?
Therefore my answer to the paradox is: Why does the premise assume that 1.25A > 1.25B? That appears to be a flaw in the premise, at least to me.
EDIT: I guess the answer to the quoted question is “because you’re calculating them one at a time, after each switch.” Calculate them both at the same time before your first switch and the incentive to switch disappears.
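For what it’s worth, a quick simulation sketch bears this out (the uniform distribution for the smaller amount is an arbitrary assumption, just for illustration): keeping and switching have the same average payout.

```python
import random

random.seed(0)
trials = 100_000
keep_total = switch_total = 0.0
for _ in range(trials):
    small = random.uniform(1, 100)       # smaller amount (arbitrary choice)
    envelopes = [small, 2 * small]
    pick = random.randrange(2)           # grab one envelope at random
    keep_total += envelopes[pick]        # payout if we keep it
    switch_total += envelopes[1 - pick]  # payout if we always switch

print("average if we keep:  ", keep_total / trials)
print("average if we switch:", switch_total / trials)
```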
I suppose you could drop the first calculation, and instead of calculating the value of the envelope not chosen, only calculate the value of the one you did choose.
You picked envelope A, which has an expected value of 1.25B. Do you want to swap it for envelope B? Of course not. This still has the same flawed premise as the original paradox, but because no switching is called for it’s less interesting. But it’s equally (in)valid.
I did a bit more work. If you calculate the expected numbers of winners and dead for any finite number of rounds, the ratio of winners to dead is 35:1 (I’ll put in the equations tomorrow when I have a bit more time). But if you look only at games that end in deaths, the ratio is 90% dead to 10% winners, and the more rounds you have humans and money to play, the more likely it is that the game will end in death (again, numbers to follow). For the entire population of the Earth you can get only 10 rounds, which is not very likely to end in death (so you’d better have 8 million billion dollars around for the winners).
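The “more rounds, more likely to end in death” part is easy to check, since a game capped at n rounds ends in a kill with probability 1 − (35/36)^n. A quick sketch, with the caps chosen arbitrarily:

```python
# P(a game capped at n rounds ends with a group dying) = 1 - (35/36)**n
for n in (2, 10, 50, 100):
    print(f"{n:3d} rounds: {1 - (35/36)**n:.3f}")   # 10 rounds -> ~0.245
```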
Hm, that’s not what I got. I decided to start simple, with a cap of two rounds. Here’s what I had typed up before it started going differently than I expected:
There must be a limit on the number of rounds possible, because people and money are finite. Pick any limit on the number of rounds, and then calculate the probabilities.
For instance, suppose the limit is 2 rounds. There is a 36/1296 chance that the game ends after the first round with a loss (case A), a 35/1296 chance that the game ends after the second round with a loss (case B), and a 1225/1296 chance that the game ends after the second round with a win (case C). Now, let’s pick a random person out of the set of people who played, and see what the odds are that they were one of the winners.
If we’re in case A, then only one person played, they must have been in round 1, and they must have lost.
If we’re in case B, then 11 people played, and there’s a 1 in 11 chance that the person we pick was in round 1 and won, and a 10 in 11 chance that they were in round 2 and lost.
If we’re in case C, then 11 people played, and all of them won. Our randomly-selected player won with probability 1.
Add this all up, and the probability that our randomly-selected player was a winner is (36/1296)*(0) + (35/1296)*(1/11) + (1225/1296)*(1) = (35/14256) + (13475/14256) = 13510/14256 = 0.9477…
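If anyone wants to check the arithmetic, here’s the same case analysis with exact fractions (just a sketch of the calculation above):

```python
from fractions import Fraction

pA = Fraction(36, 1296)    # lose in round 1: 1 player, 0 winners
pB = Fraction(35, 1296)    # lose in round 2: 11 players, 1 winner
pC = Fraction(1225, 1296)  # win both rounds: 11 players, all winners

p_win = pA * 0 + pB * Fraction(1, 11) + pC * 1
print(p_win, "=", float(p_win))   # 6755/7128 (= 13510/14256) ≈ 0.9477
```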
I think you’re answering a slightly different question than I am, but I’ll have to think a bit more. Thanks.
I get the expected number of winners in a two-round game as
(35/36)(1) + 10(35/36)(35/36) = (35/36)(1 + 10(35/36)) = (35/36)(386/36) = 13510/1296
and the expected number of losers as
(1/36)(1) + 10(35/36)(1/36) = (1/36)(1 + 10(35/36)) = (1/36)(386/36) = 386/1296
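Those two check out, and generalizing the same bookkeeping to any cap (my own generalization of the two-round formulas above) shows the 35:1 ratio exactly, since every round contributes winners and losers in the ratio (35/36):(1/36):

```python
from fractions import Fraction

P = Fraction(1, 36)                  # chance of snake eyes on one roll

def expectations(max_rounds):
    """Exact expected numbers of winners and losers with a round cap."""
    ew = el = Fraction(0)
    reach = Fraction(1)              # probability the game reaches this round
    group = 1
    for _ in range(max_rounds):
        el += reach * P * group      # this group dies with probability P
        ew += reach * (1 - P) * group
        reach *= 1 - P
        group *= 10
    return ew, el

ew, el = expectations(2)
print(ew, el)      # 6755/648 and 193/648, i.e. 13510/1296 and 386/1296
print(ew / el)     # exactly 35, for any cap
```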
But the expected number of winners versus expected number of losers isn’t the same thing as the chance that a random player will win or lose, right?