Settle an argument: if something has a 1 in 100 chance of happening, and you try 30 times, is it basically around a 30% chance of occurring?

It doesn’t stop being 50/50 if you look at the individual trials. It’s 50/50 for the second flip, taken in isolation. It’s 50/50 for the third flip, taken in isolation. It’s 50/50 for the 11th flip. It’s when you start combining flips, and talking about something happening a certain number of times out of a certain number (more than one) of flips, that things get tricky.

Let’s go back to the OP’s example. Imagine 100 marbles in a bag: 1 red, 99 blue. If you pull out a marble at random, there’s a 1 in 100 chance of getting the red marble.

If you do this 30 times in a row without replacing the marbles you pull out, there is indeed a 30% (30 in 100) chance of getting the red marble. If you pull out n marbles, there’s an n in 100 chance you’ll get the red one. But the probability does not remain 1 in 100 on each individual pick. Once you’ve taken out some of the marbles, that changes the probability of picking the red marble on a particular pick.

Now, suppose you select with replacement: you pull out a marble and then put it back in the bag before picking again. This time, the probability of picking the red marble is 1 in 100 on any one individual pick. But it’s not n in 100 after n picks. Because you keep replacing the blue marbles you’ve been pulling out, that keeps the chance of getting the red marble lower than it was in the without replacement case.
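The gap between the two schemes is easy to see with a quick simulation (a Python sketch, not from the thread; the seed and the choice of marble 0 as "red" are arbitrary):

```python
import random

random.seed(1)  # arbitrary seed, just to make the run repeatable
TRIALS = 100_000
RED = 0  # call marble 0 the red one; the other 99 are blue

# Without replacement: draw 30 distinct marbles from the bag of 100.
hits_without = sum(
    RED in random.sample(range(100), 30) for _ in range(TRIALS)
)

# With replacement: 30 independent draws, each from the full bag.
hits_with = sum(
    any(random.randrange(100) == RED for _ in range(30)) for _ in range(TRIALS)
)

print(hits_without / TRIALS)  # close to 0.30
print(hits_with / TRIALS)     # close to 0.26
```

With 100,000 simulated games the two frequencies separate cleanly: about 30% without replacement, about 26% with replacement.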

No. We are told in this thread that it is approximately 26%. See Post 25. I agree with the math and the idea that there is no guarantee that with 30 trials you will pick the right marble. I just don’t understand why trial one has a probability of 1/100 but that “degrades” with the second and subsequent trials.

That’s the correct probability if you replace the marble after each pick. It’s important to distinguish between choosing with vs. without replacement.

The answer would work like this, the long way:

First, you have a 99/100 probability of NOT picking the marble.
On the next draw, it’s 98/99 probability of NOT picking the marble.
The next draw it’s 97/98 probability of NOT picking the marble.
etc.

If you multiply these probabilities all together, you get exactly a 70% chance of not picking the marble, meaning a 30% chance of picking the marble.
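That product can be checked exactly with Python’s `fractions` module (a quick sketch, not part of the original posts):

```python
from fractions import Fraction

# 99/100 * 98/99 * ... * 70/71: each numerator cancels the previous denominator.
p_miss = Fraction(1)
for k in range(30):
    p_miss *= Fraction(99 - k, 100 - k)

print(p_miss)      # 7/10
print(1 - p_miss)  # 3/10
```

Because each numerator cancels the denominator before it, the whole thing telescopes down to 70/100.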

I just fed in (99!/69!) / (100!/70!) into Wolfram Alpha to do the above math, and it exactly equalled 0.7.

I’m sure there must be a cleaner way to work that expression and simplify it, but my math is rusty.

99!/69! is 99*98*97*…*70 (everything less than 70 cancels) and 100!/70! is 100*99*98*…*71

So (99!/69!) / (100!/70!) = (99*98*…*70) / (100*99*98*…*71).
The numbers from 71 to 99 cancel, leaving 70/100.

Thanks! I knew there was some cancellation involved, but I knew someone would simplify the work for me. :slight_smile: Probably something I could have done/realized with ease back in high school.

Oh, good God, looking at it again, I feel like an idiot for missing that.

The easiest way to deal with that is to calculate the odds of NOT having success, which is the chance of failing on each trial (99%) raised to the power of the number of trials.

So…

.99^30 = .7397, or ~74%. That’s the probability of failing to draw a single ‘winning’ number. The odds of succeeding therefore = 1 - .74, or 26%.
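The same arithmetic in a couple of lines of Python, as a quick check:

```python
# With replacement, each of the 30 independent trials misses with probability 0.99.
p_all_miss = 0.99 ** 30
print(round(p_all_miss, 4))      # 0.7397
print(round(1 - p_all_miss, 4))  # 0.2603
```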

Actually, I feel foolish, since I did the work the hard way. The easy way is this:

(99!/69!) / (100!/70!) = 99!/69! * (70!/100!) = 99!/100! * (70!/69!) = 70/100
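That identity can be verified with exact integer arithmetic via `math.factorial` and `fractions.Fraction` (a one-off check, nothing rounded):

```python
from fractions import Fraction
from math import factorial

# (99!/69!) / (100!/70!) should reduce exactly to 70/100 = 7/10.
lhs = Fraction(factorial(99), factorial(69)) / Fraction(factorial(100), factorial(70))
print(lhs)  # 7/10
```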

A fair coin flip has 50% chance of hitting either outcome. If you throw the coin and get heads 30 times, your odds on the 31st throw are still 50%. Now, if that happened to me in real life, I’d want to check that coin and make sure it’s fair. But the point is that past outcomes have no bearing on future outcomes.
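You can see that independence in a simulation, though 30 heads in a row is far too rare to wait for, so this sketch conditions on a 3-head streak instead (seed and sample size are arbitrary):

```python
import random

random.seed(0)  # arbitrary seed, just to make the run repeatable
flips = [random.randrange(2) for _ in range(1_000_000)]  # 1 = heads

# Look at every flip that immediately follows three heads in a row.
after_streak = [
    flips[i] for i in range(3, len(flips))
    if flips[i - 3] == flips[i - 2] == flips[i - 1] == 1
]
print(sum(after_streak) / len(after_streak))  # close to 0.5
```

Even after a streak of heads, the next flip still comes up heads about half the time.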

If you were to bet, at even money, on the outcome of a fair coin toss coming up heads exactly 5 times out of 10 trials, done repeatedly, you would lose money on that bet. But you’d lose more money betting on any other specific count of heads.
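The exact chance of 5 heads in 10 flips comes from a binomial coefficient; a quick check with Python’s `math.comb`:

```python
from math import comb

# Exactly 5 heads in 10 fair flips: C(10, 5) favorable outcomes out of 2**10.
p_exact_5 = comb(10, 5) / 2 ** 10
print(p_exact_5)  # 0.24609375

# 5 is still the single most likely count of heads.
best = max(range(11), key=lambda k: comb(10, k))
print(best)  # 5
```

So exactly 5-of-10 only happens about a quarter of the time, but every other specific count is rarer still.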

Here’s a question I was pondering a few days ago that’s appropriate for this thread.

Back in the day when the Texas lottery first started, it was a simple game: 50 balls, of which you would choose six numbers. If I understand probability correctly, that would mean the odds are (1/50)(1/49)(1/48)(1/47)(1/46)(1/45). If I extrapolate that up to selecting 49 balls, however, and continue multiplying (1/44)(1/43) and so on, then the result no longer makes sense. Selecting 49 out of 50 seems to me like the equivalent of selecting which ball will be the one that isn’t selected, meaning a 1/50 chance of winning a lottery where 49 balls are picked from a 50 ball pool.

Where have I gone wrong in this line of thinking? As an add on question, in the scenario of picking balls from a pool of 50, which total number of balls selected would result in the lowest probability of getting them all correct? Is it the six balls that the game originally had, or some other number?

I think this Wikipedia article answers your first question, if I’m understanding your question correctly:

This would be correct if order mattered: if you had to guess not just which 6 balls were chosen, but which one was chosen first, which second, etc.

As you correctly reasoned, matching 49 out of 50 (if order doesn’t matter) is equivalent to selecting the 1 out of 50 that doesn’t match. Matching 48 is equivalent to selecting 2. Matching 25 is the hardest: it’s equivalent to selecting the 25 that don’t get chosen.
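All of this is just binomial coefficients, which `math.comb` can confirm directly (a quick sketch; the 6-ball figure is for the original Texas game):

```python
from math import comb

print(comb(50, 6))   # 15890700 possible 6-ball tickets
print(comb(50, 49))  # 50: matching 49 balls = picking the 1 ball left out

# The hardest case: which k maximizes the number of possible tickets?
hardest = max(range(51), key=lambda k: comb(50, k))
print(hardest)       # 25
```

C(50, k) is symmetric, C(50, k) = C(50, 50 − k), which is exactly why matching 49 is as easy as matching 1, and why k = 25 sits at the peak.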