Math: N trials of 1/N

As a Geek of Many Colors in grade school, I was shocked to realize that most adults seemed to think that N independent trials of probability 1/N would give you a reasonable assurance of achieving any possible state. They didn’t say it in so many words, but it was a common intuition – after all, N * (1/N) = 1, right? When my age was in the single digits, I spent many a painful hour trying to explain the very practical problems with this assumption to adults, which did nothing to bolster my already shaky opinion of grown-ups.

Some math principles become fixtures or recurring themes in your intellectual life, and this was one of mine, so I was horrified to realize this evening that I’d forgotten the elegant formulation of the outcome of N trials of p=1/N that so many of us worked out as bored math students. I’m sure it has a well-known name, but since I’d worked it out for myself, the name never stuck in my head – making it rather hard to look up. (I did try!)

I found myself having to explain the principle to someone this evening (for the first time in many years, thank god), but the central equation had gone AWOL from my skull, so I had to talk around it quite a bit. The closest I could come to a general solution was the Generalized Birthday Problem (X items randomly placed in Y bins), but that equation isn’t what math-disdainers consider either simple or elegant.

This isn’t homework (if it were, I’d be able to better describe the equation I’m seeking), it’s just an excruciating senior moment. Can anyone recognize the equation I’m talking about and refresh my memory? [Even if you get the wrong one, it’ll probably remind me of some other useful escaped tidbit I should review.]

The number of successes has a binomial distribution, with the restriction that np=1. The chance of zero successes is simply (1-1/N)^N. For large N the distribution approaches a Poisson, and the chance of zero successes approaches exp(-1), or approximately 0.368. I won’t write out the distributions; web searches for “binomial distribution” and “Poisson distribution” should get you them.

The probability of k successes in N trials with probability 1/N of success is N!/(k!*(N - k)!) * (1/N)[sup]k[/sup] * (1 - 1/N)[sup]N - k[/sup].
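
For anyone who’d rather see numbers than formulas, here’s a quick Python sketch of the two formulas above (just an illustration; N = 100 is an arbitrary choice). It computes the exact binomial probabilities for p = 1/N and sets them next to the Poisson(1) values, with the k = 0 entry landing near 1/e ≈ 0.368:

```python
import math

N = 100  # arbitrary example: N trials, each with success probability 1/N

def binom_pmf(k, N):
    """Exact probability of k successes: C(N, k) * (1/N)^k * (1 - 1/N)^(N - k)."""
    return math.comb(N, k) * (1 / N) ** k * (1 - 1 / N) ** (N - k)

def poisson_pmf(k, lam=1.0):
    """Poisson approximation with mean lam: lam^k * exp(-lam) / k!."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

for k in range(5):
    print(f"k={k}: binomial {binom_pmf(k, N):.5f}   Poisson {poisson_pmf(k):.5f}")
# For k = 0 both values sit close to 1/e ≈ 0.36788.
```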

Josh_dePlume:
I do recall my basic probability, and the Poisson approach is another simple useful tool that I find myself constantly explaining to people. It’s one of those topics that only gets passing mention in a couple of courses before college, but it struck me as so elegant and simple that I’m amazed so few people retain it.

You did, however, put your finger on what confused me. I recalled the “bottom line” that the limit for large N was 1/e, but for some reason (the lateness of the hour and a couple of glasses of wine may have factored into it) I decided the equation must involve logs. In the light of day, it’s obvious that e actually entered the equation through limits, and the log (I still vaguely recall that the equation I wanted was a fairly simple log equation) would only appear in the final stage.

ultrafilter:
That’s a good response for me (and in fact is closely related to the equation of the Generalized Birthday Problem I mentioned in my OP), but it’s a little hairy to explain to someone who’d have the misconception in the first place. To apply it to the specific question at hand, I’d have to either sum it from 1 to N (which is just simple algebra, but a fair amount of it, and would definitely lose the interest of the few people who’d stick around after I wrote that equation down in the first place) or solve it for k=0 (which involves dividing by zero).

To all:
The question is still open, in my eyes (again, my apologies for being so vague). I’m sure lots of you out there have derived the summarizing equation I’m thinking of, though perhaps, like me, you simply forgot it over the years.

After fiddling with the math for a bit, I’m actually beginning to wonder if my original fairly simple log equation wasn’t a slightly inaccurate approximation or the result of a modest error (though I was probably more meticulous about math as a child than I am today – remember those boring days in school when we had nothing but vast expanses of time to kill, a pencil, and a blank sheet of paper?)

Though the math snob in me would be slightly disappointed if it were only an approximation, it’d probably be equally valuable to me in real life.

The hunt goes on!

I’m not totally sure that this is what you mean, but . . .

The adults’ mistake is in adding the chances on each individual attempt together, which of course overcounts because it’s possible to succeed on multiple attempts. What they should do is take the chance of failing on every attempt, (1-1/N)^N, and subtract that from 1 to get the chance of succeeding on at least one attempt.

Thus, the total chance of success is:
1- (1-1/N)^N

If we multiply (1-1/N)^N by (1+1/N)^N, we get (1-1/N[sup]2[/sup])^N, whose limit (as N -> Infinity) is 1.

Thus, the limit of (1-1/N)^N as N-> Infinity is just 1 / lim((1+1/N)^N, N->Infinity)

But lim((1+1/N)^N, N->Infinity) is e = 2.71828…
so lim((1-1/N)^N, N->Infinity) is 1/e

Thus, our total probability of success goes to 1 - 1/e as N goes to infinity,
and our total probability of failure goes to 1/e.
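
Here’s a quick numerical sanity check of that limit in Python (just a sketch; the N values are arbitrary): the chance of failing every one of N trials, (1 - 1/N)^N, creeps toward 1/e ≈ 0.368, and the chance of at least one success creeps toward 1 - 1/e ≈ 0.632.

```python
import math

for N in (2, 10, 100, 10_000, 1_000_000):
    fail_all = (1 - 1 / N) ** N  # probability of failing all N trials
    print(f"N = {N:>9}: fail all = {fail_all:.6f}   at least one success = {1 - fail_all:.6f}")

print(f"limits:        1/e = {math.exp(-1):.6f}   1 - 1/e = {1 - math.exp(-1):.6f}")
```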

Thank you for the summary. I should have provided it myself. Your explanation might enlighten those who still labor under the seductive misconception.

For those who are interested: the method of multiplying together the individual chances of several events NOT happening, then subtracting the product from 100% to find the chance that ANY of the events will happen, is the complement rule for independent events; the Poisson distribution mentioned above is the large-N approximation to the resulting binomial. I feel bad for repeating an eponym (named for mathematician Siméon Denis Poisson; not the French word for “fish”) without explaining it for readers.

BTW, I think I found the equation I was looking for. The real problem was that I couldn’t remember exactly what the equation was for. I only remembered that it was useful for explanations. I apologize if it turns out to be a letdown. As I suspected, any one of us could have figured it out, if I could have described it.

Resolution
The real problem with applying the misconception in real life is that we OFTEN need to guess how many attempts will almost certainly assure us a success, because absolute certainty of success with independent trials is impossible. (e.g., if you have a 90% chance of success per attempt, five tries will give you a 99.999% chance of success, but you’ll never reach 100%)

When you’re dealing with relatively small chances of success, like 1/N, where N is any positive integer (except 1: 1/1 is a guaranteed success in one try, which is less common in Real Life than we might like), it takes a surprising number of tries to reach any common “fair certainty” threshold. We’re increasingly demanding very tight guarantees of success from our medical and other systems – 95% isn’t very good when a technology will be used millions of times.

Using the complement method so thoughtfully outlined by tim314, we know that if:

p is the chance of success in one trial;
T is the Threshold percentage for ultimate success; and
n is the required number of trials

(1-p)^n = 1-T
(multiply the chances of failure together n times to find the probability of n consecutive failures)

This can be rearranged for the equation I wanted:

ln(1-T)
---------- = n
ln(1-p)
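
To make the rearranged equation concrete, here’s a minimal Python helper (just a sketch) that rounds n up to the next whole trial; it reproduces the bingo-cage and die counts quoted below, and the 90%-per-attempt aside from earlier:

```python
import math

def trials_needed(p, T):
    """Smallest whole number of independent trials n with 1 - (1 - p)^n >= T."""
    return math.ceil(math.log(1 - T) / math.log(1 - p))

print(trials_needed(1/75, 0.50))    # 52  (bingo ball, with replacement, 50% threshold)
print(trials_needed(1/75, 0.95))    # 224 (95% threshold)
print(trials_needed(1/75, 0.99))    # 344 (99% threshold)
print(trials_needed(1/6, 0.99))     # 26  (rolling at least one six)
print(f"{1 - (1 - 0.9) ** 5:.5f}")  # 0.99999 (the 90%-per-attempt example)
```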

If you had a bingo cage (75 balls) and you needed to fish out #13, you’d have a >50% chance of success in 38 tries, and 100% in 75 tries, if you exercised common sense, and didn’t throw the old balls back in.

However, when dealing with independent trials (where the chances of pulling the #13 ball are the same, regardless of what you pulled out before), you’d need 52 tries to have a 50% chance of success, and it would take a whopping 224 tries to achieve a 95% chance of success. A 99% chance of success requires 344 tries.

That kind of result tends to get people screaming “But there are only 75 balls!”

They get even angrier if you try to explain that it’d take 26 tries to guarantee a 99% chance of rolling a “6” on a standard die. It’s pretty easy to roll a “6” in one try (16.66%), but it’s really hard to GUARANTEE a “6”.

So if we’re rolling a die and we want to know the odds of a six coming up at least once in 26 rolls, we say:

  1. On each roll, the odds of a six not coming up are 5/6, or about 0.833.
  2. We roll 26 times, so the chance of no six at all is (5/6)^26, which is about 0.0087.
  3. Subtract this from 1 to find the chance of at least one six, which is about 0.9913 (99.13%).

That is the probability of rolling at least one six in 26 trials?
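
For anyone who wants to run those three steps, a quick Python check:

```python
p_no_six_per_roll = 5 / 6                 # step 1: chance of not rolling a six
p_no_six_in_26 = p_no_six_per_roll ** 26  # step 2: about 0.0087
p_at_least_one_six = 1 - p_no_six_in_26   # step 3: about 0.9913
print(f"{p_no_six_in_26:.4f}  {p_at_least_one_six:.4f}")
```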

What you’re talking about is a geometric distribution.

This conversation actually happened last summer.

Me: Why are you buying two Cokes?

Friend: The bottles say ‘1 in 6 wins free Coke’. The last four I bought didn’t win.

Me: :dubious: …um, are you joking? You do know that makes no sense, yes?

Friend: Well, I mean, I know it’s not 100 percent that I’ll win, but it has to be pretty close, right?

Me: Do you actually want me to answer that? Because the answer is no. Trust me on this.

Friend: Why?

Me: [explains meaning of “1 in 6” probability taken over multiple tries, runs over “past results increase probability of future events” fallacy]

Friend: :confused: Well one of these should win.

Me: :smack:

I think if I had one wish I would relieve everyone in the world of this confusion.

If I had two wishes, eh, there’s always world hunger.