Quick stat prob Q

If I have a 100-sided die and roll it once a day, how long can I, on average, avoid rolling a 1? I am not a stat/prob whiz and have to explain it to someone less math-savvy than I am, so please try to keep it simple. I’d prefer an answer that does not use much outside the realm of basic algebra in the way of a formula. If I can feed it to MS Excel to do a demonstration over multiple examples, even better.

Thank you,

This may or may not be what you’re looking for, but it’s the best I can do off the top of my head…

Each day, there’s a 99/100 probability that you don’t roll a 1 that day.
If you do this n days in a row, there’s a (99/100)^n probability that no 1s have come up in that time, and hence a 1 - (99/100)^n probability that at least one 1 has appeared.

The first n for which this is > 0.5 is n=69, so you’d have to go 69 days before you’d have a better-than-even chance of seeing a 1.
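
If you want to verify that number yourself, the same formula drops straight into Excel as =1-0.99^A1 against a column of day numbers in A. Here's the equivalent check in Python (just a sketch of the (99/100)^n formula above, nothing more):

```python
# Find the first day n on which P(at least one 1 so far) = 1 - (99/100)**n
# climbs above one half.
n = 1
while 1 - (99 / 100) ** n <= 0.5:
    n += 1
print(n)                    # 69
print(1 - (99 / 100) ** n)  # ~0.5002
```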

Let m be the number of times you expect to have to roll to get a 1. If you get a 1 the first time, m = 1. If you don’t get a 1 the first time, you still expect to have to roll m times (as these are independent trials), but you’ll have to add 1 to the final count.

Write this as a conditional expectation and you’ll get m = 1/100 + 99/100 * (m + 1). Solve that to get m = 100. So you expect to have to roll the die 100 times to see a 1.
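
A quick simulation backs this up. This is only a rough sketch (the 100,000-trial count is an arbitrary choice), but the average comes out right around 100:

```python
import random

def days_until_one(sides=100):
    """Roll a fair die once per 'day' until a 1 comes up; return how many days it took."""
    days = 1
    while random.randint(1, sides) != 1:
        days += 1
    return days

trials = 100_000
print(sum(days_until_one() for _ in range(trials)) / trials)  # ~100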

Ack, the two conflicting answers you contributed are the exact disagreement I am trying to resolve. If it has to go to more advanced math, that’s fine, but I would prefer to find a definitive answer that a couple of math-savvy Dopers can agree on.

Maybe I should clarify in case my understanding is incorrect.

The actual detailed problem is this:
Person #1 claims Item A has an equal 1% chance of failure per day and should on average last 100 days.

Person #2 claims that Item A on average will only last 69 days and that the reliability of the part is being overstated by over 50%.

This is for part of a game, so wear is not an issue; it’s just about the average survival of the part based on random probability.

There were mentions of actual life vs. mathematical expectation so maybe this comes down to a terminology issue.

Well, hold on just a second. Thudlow didn’t say that you’d expect it to wear out in 69 days, just that that’s how long you’d have to wait before you have a better-than-even chance of its having worn out.

You can model the problem as a geometric distribution and get the same expectation. It is the right answer.
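
If you’d rather not do the algebra, both numbers fall out of a stats package directly. A sketch using scipy (assuming you have it installed; p = 0.01 is the per-day failure chance):

```python
from scipy import stats

part_life = stats.geom(0.01)  # geometric distribution: trials until first success
print(part_life.mean())       # 100.0 -- ultrafilter's expected lifetime
print(part_life.median())     # 69.0  -- Thudlow's better-than-even-chance day
```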

The OP asked “How long can I on average avoid rolling a 1”. Ultrafilter has correctly answered that question. You would expect it to take 100 rolls to see the first ‘1’. Expectation is a term from statistics that (under ordinary circumstances) essentially means “on average”.

Thudlow answered the very different question of (essentially) “how many rolls do you have to wait before having a better than 50% chance of seeing at least one ‘1’”. The difference is subtle, but it does make it a different question.

The first 1, rather than a 1, is where I made the error in my statement, since the “usable life” of the object in question would end with that die roll of a 1, and the next 1 would be applied to the replacement part when it came up.

Thank you for clarifying my question for me. That is what I am asking. So it looks like Thudlow hit it right off.

Actually, I’m liking ultrafilter’s answer.

If you had a huge number of these items, and kept track of how long each one lasted, the average of those lifetimes should be 100 days.

I computed the expectation thusly:
The probability that the first 1 appears on the mth roll is (.99)^(m-1)*(.01).
So the expected value of m = the sum of the infinite series [i * (.99)^(i-1)*(.01)] as i goes from 1 to infinity.
Computer calculations suggest that this is, in fact, 100; so I agree with ultrafilter.
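
For the record, the series has a closed form, so the computer isn’t strictly needed. Differentiating the geometric series term by term gives a standard identity, and plugging in x = 0.99:

```latex
\sum_{i=1}^{\infty} i x^{i-1} = \frac{1}{(1-x)^2} \quad (|x| < 1)
\qquad\Longrightarrow\qquad
\sum_{i=1}^{\infty} i\,(0.99)^{i-1}(0.01) = \frac{0.01}{(1-0.99)^2} = 100.
```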

To add some geeky stats terms:

Thudlow Boink calculated the mode of the distribution: half the samples will fail before 69 days, half will last longer than that. However, as some samples will last practically forever, this is not the same as the average (mean, expected value), which is what ultrafilter calculated: if you take the lifetimes of a million samples, add them up, and divide by a million, you’ll get 100 days.

Shouldn’t that be the median?

Absolutely.

D’Oh! :smack: :smack: :smack: :smack:

The mode, of course, would be the most commonly occurring time-to-failure, which in this case would be one day: 0.01 of the sample fails on the first day, and 0.99^(n-1)*0.01 on the nth day, which is smaller than 0.01 for all n > 1.
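
To tie all three numbers together, here’s a sketch that walks the pmf P(n) = 0.99^(n-1)*0.01 directly rather than simulating (the truncation at 100,000 terms is an arbitrary cutoff; the leftover tail is astronomically small):

```python
# Compute mean, median, and mode of the time-to-failure distribution
# by summing the pmf P(n) = 0.99**(n-1) * 0.01, truncated at a large N.
p = 0.01
mean = 0.0
cdf = 0.0
median = None
mode_day, mode_prob = 1, 0.0
for n in range(1, 100_000):
    prob = (1 - p) ** (n - 1) * p
    mean += n * prob                      # accumulate E[n]
    cdf += prob
    if median is None and cdf >= 0.5:     # first n with CDF >= 1/2
        median = n
    if prob > mode_prob:                  # most probable single day
        mode_day, mode_prob = n, prob

print("mean:  ", mean)      # ~100 -- ultrafilter's answer
print("median:", median)    # 69   -- Thudlow's answer
print("mode:  ", mode_day)  # 1    -- most common time-to-failure
```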