I know I’m severely simplifying it, but looking up the math online, it seems like it’s basically a 28% chance, rounded down. My friend is arguing with me about the gambler’s fallacy and how the actual odds are significantly lower. And now he’s arguing that flipping a coin 10 times doesn’t automatically mean the coin will land heads 50% of the time. His entire argument is that instead of a 30% chance, I should be calculating it as if it’s a 10% chance.
I don’t follow. You’re saying that something has a 1% chance of happening, then you’re asking if there’s a 30% chance of it happening 30 times in a row?
This is certainly true. There’s a small chance (0.09765625%) that you’ll get all heads and there’s an equally small chance that you’ll get all tails. The rest of your OP is incomprehensible.
Well, there are two options here: the initial “1% chance of happening” assessment was wrong, or you happened to hit the infinitesimally small chance of having it happen 30 times. It’s possible for something with a 1% chance of happening to happen 30 times in a row (or just 30 times out of 100); it’s just really unlikely.
I think the OP is saying that he thinks that, in 30 occurrences of an event with only a 1% chance of success, there is about a 28% chance of at least one success. Draw a marble from a bag of marbles numbered 1 to 100 thirty times, and about 26% of the time you will draw, say, a 7 at least once. So his reasoning is correct, and his estimate is close.
His friend is using coin tossing as his counterargument, saying that if you toss a coin 10 times you don’t necessarily get 5 heads and 5 tails. So, he concludes, there is only a 10% chance of drawing a 7 in your 30 draws from the bag of marbles.
I would find a way to select a random number between 1 and 100 (in Excel, say) and let your buddy bet you at 5-to-1 odds on each group of 30 picks. He will soon work out that he is wrong.
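For anyone who’d rather simulate than set up the spreadsheet, here’s a minimal Python sketch of the same bet (the marble number and trial counts are just illustrative):

```
# Estimate the chance of drawing marble #7 at least once in 30 draws
# (with replacement) from a bag numbered 1-100. Exact answer: 1 - 0.99^30.
import random

GROUPS = 100_000
hits = 0
for _ in range(GROUPS):
    if 7 in (random.randint(1, 100) for _ in range(30)):
        hits += 1

print(f"simulated: {hits / GROUPS:.4f}")   # ~0.26
print(f"exact:     {1 - 0.99 ** 30:.4f}")  # 0.2603
```

At 5-to-1 odds against a roughly 26% event, the friend’s expected loss per group is about 0.26 × 5 − 0.74 × 1 ≈ 0.56 units, so he’ll feel it quickly.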
If you roll a fair hundred-sided die 30 times, then on average, 1 will come up 0.3 times exactly. That’s not quite the same thing as having a 0.3 probability of coming up once, because it’s also possible that it will land on 1 multiple times. When the number of rolls is small compared to the number of sides on the die, this is a small effect, and so the probability will be close to the expected count, but still a bit less.
By contrast, when we’re rolling a two-sided die (i.e., a coin) ten times, the expected number of heads is 5, but clearly, the probability of getting at least one head is not 5; it’s less than 1 (because probabilities are always at most 1). The number of sides on the die is significantly less than the number of rolls, and so the probability is much less than the expected count.
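To put numbers on that contrast, a quick sketch (assuming independent trials):

```
# Expected number of successes vs. probability of at least one success.
def expected_count(p, n):
    return n * p

def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

# d100, 30 rolls: the two values are close
print(expected_count(1/100, 30), p_at_least_one(1/100, 30))  # 0.3   0.2603
# coin, 10 flips: the expected count isn't even a valid probability
print(expected_count(1/2, 10), p_at_least_one(1/2, 10))      # 5.0   0.9990
```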
I think the OP is right: if you’re running a set of 30 trials with a 1% chance of success per trial, you’d expect 30 successes per 100 sets. But, since some of those sets will have two or more successes, the number of sets you’d expect to have at least one success is a little lower than 30 per 100. I haven’t done the math myself, but GreenWyvern’s 26 sets per 100 seems about right.
I have no idea what the OP’s friend is saying or how he came up with 10 sets per 100.
As others have noted, it’s important to be clear about exactly what you’re calculating.
Assuming independent trials, with the same probability of the thing happening on each trial, this is a case of binomial probability, and there are binomial probability calculators online that can do the calculations for you.
With 30 independent trials, and a 1/100 probability of success on each trial, there’s a 22.4% probability of exactly one occurrence in those 30 trials, and a 26% chance of at least one occurrence.
And yeah, if you flip a coin 10 times, it’s not guaranteed to come up heads on exactly 5 of those 10 flips. The probability that it will is 24.6%.
The Gambler’s Fallacy basically involves treating independent trials as if they were not independent. Which is arguably what you’re doing if you’re expecting exactly 50% of the coin flips to come up heads: once you’d seen the first nine flips, that expectation would make the 10th flip deterministic.
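If you want to check those figures without an online calculator, Python’s standard-library math.comb is enough; here’s a quick sketch of the same binomial calculations:

```
from math import comb

# 30 independent trials, 1/100 chance of success each
n, p = 30, 0.01
print(comb(n, 1) * p * (1 - p) ** (n - 1))  # 0.224  exactly one success
print(1 - (1 - p) ** n)                     # 0.260  at least one success

# 10 coin flips: exactly 5 heads
print(comb(10, 5) / 2 ** 10)                # 0.246
```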
Presumably, the idea behind your estimate of this probability being around 30% was a straightforward one, namely to multiply the 1% chance for each trial by 30. As we have seen, the exact probability is indeed somewhat close to 30%, but the closeness of this result to the result you’d get from the “straightforward” calculation is coincidental. Take, for instance, 50 rather than 30 as the number of trials. If the probability of the coin landing on heads is 1% and you do 50 flips, then the probability of it landing on heads at least once is 1 - 0.99^50 = 39.5 percent, which is quite significantly different from 50 percent. For 70 flips you get 50.5 percent, and for 100 flips you will, of course, not get 100 percent, but only 63 percent. So don’t be fooled by what seems to be, at first glance, a straightforward calculation; the maths is more complicated than that, and the fact that the straightforward multiplication works tolerably for 30 is a coincidence.
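Here’s that comparison laid out for a few values of n (a small sketch; p = 1% throughout):

```
# Naive n*p vs. the exact 1 - (1-p)^n for p = 1%.
p = 0.01
for n in (30, 50, 70, 100):
    print(f"n={n:3d}  naive={n * p:.3f}  exact={1 - (1 - p) ** n:.3f}")
# n= 30  naive=0.300  exact=0.260
# n= 50  naive=0.500  exact=0.395
# n= 70  naive=0.700  exact=0.505
# n=100  naive=1.000  exact=0.634
```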
I am no mathematician, but it seems perfectly possible to me that if the odds of winning the Powerball were one in 200 million, you could play the lottery a trillion or quadrillion times in a row and most likely never win once. Each time the odds would simply be so astronomically against you as to be tantamount to zero.
If you played the Powerball a trillion times, it would be absolutely astounding if you didn’t win many, many times. Yes, each time the odds are stacked against you, but a trillion plays is a very large number.
Look at it this way: Each week, there are a lot less than a trillion players. And every one of those players has the odds heavily against them. But there are still winners.
@Schnitte, it’s not a coincidence that the naive calculation comes out close. It’s a straightforward mathematical consequence of the fact that the number of trials is small compared to the reciprocal of the probability for each trial. As long as that condition holds true, the approximation is good. And when it’s not true, the approximation isn’t good.
Yes, but that’s because with so many players, a good chunk of Powerball numbers are snapped up such that it is likely that at least one would win. But for John Smith living in Anytown, Tennessee, his chances are astronomically low.
If you were the sole player of the Powerball - you and no one else - and the odds of winning were one in 200 million, you could play an almost-infinite number of times and not win. Each time you played would be a fresh new start, a fresh new try independent of the previous one.
You could, in the sense that it is not impossible. But if the odds of winning are one in 200 million, you could expect to win, on average, once every 200 million times you play.
You seem to be saying, “This number is so small that no matter how large a number you multiply it by, the product is still small.” But nonzero numbers don’t work like that.
And that is a consequence of the fact that (1 - p)^n ≈ 1 - np for small np. The next term in the expansion is n(n-1)p^2/2 and will start to dominate if either n or p gets too large. But the results are good for np < 0.1 and OK for np < 0.3.
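A quick sketch of how that rule of thumb plays out numerically:

```
# How good is 1 - np as an approximation to (1-p)^n at various np?
p = 0.01
for n in (5, 10, 30, 100):
    exact = (1 - p) ** n
    approx = 1 - n * p
    print(f"np={n * p:.2f}  exact={exact:.4f}  approx={approx:.4f}")
# np=0.05  exact=0.9510  approx=0.9500
# np=0.10  exact=0.9044  approx=0.9000
# np=0.30  exact=0.7397  approx=0.7000
# np=1.00  exact=0.3660  approx=0.0000
```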
If I’m doing my math right, the probability of NOT winning after 200 million plays of a game with a 1-in-200-million chance of winning is 36.8% by my calculation: (199,999,999/200,000,000)^200,000,000. You take the probability of the event not happening and raise it to the power of the number of trials.
So at 200 million plays, the probability that you have won is 63.2%. The 50% point is approximately 140 million. So at 140 million plays, you have about a 50-50 shot. (Actually, very slightly better.)
At one billion plays, the probability that you have won is 99.33%.
At one trillion plays, the probability that you have won shows up simply as 1 in Wolfram Alpha, but it is ever so slightly less. It’s 1 - 10^(-2171.47…).
That’s 1/e, BTW. Also not a coincidence. It’ll be the same for any calculation where the number of trials equals the inverse probability, for a large number of trials.
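For anyone who wants to reproduce those figures in Python rather than Wolfram Alpha, a quick sketch; it works in logs, since the direct power underflows for the trillion-play case:

```
from math import exp, log, log1p

N = 200_000_000              # 1-in-200-million odds
log_miss = log1p(-1 / N)     # log of P(miss) on a single play, kept precise

for plays in (140_000_000, 200_000_000, 1_000_000_000):
    p_won = 1 - exp(plays * log_miss)
    print(f"{plays:>13,} plays: P(won at least once) = {p_won:.4f}")
#   140,000,000 plays: P(won at least once) = 0.5034
#   200,000,000 plays: P(won at least once) = 0.6321
# 1,000,000,000 plays: P(won at least once) = 0.9933

# One trillion plays: P(never winning) = 10^(plays * log_miss / log(10))
print(10 ** 12 * log_miss / log(10))  # ≈ -2171.47, i.e. about 10^-2171.47
```

(The 0.6321 at 200 million plays is 1 − 1/e, matching the point above.)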