if probability = 1, does that mean the event must occur?

i was just trying to teach a friend some probability today… it’s been a while since i last touched probability…

got me wondering… if the probability that an event might occur is 1, does that mean that the event must occur ?

take these examples:

  1. the probability that when a normal die is thrown, it will roll 1 or 2 or 3 or 4 or 5 or 6. The probability of this event is 1, and i know it must occur.

  2. the probability that in 6 throws of a die, i will roll a 6. the probability of rolling a 6, per throw, is 1/6. therefore for 6 throws it’s 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6. therefore the probability that this event might occur is 1. however, i know that it is not necessary that i get a 6 in any of the 6 throws, even though the probability of the event is 1. this is where i’m kinda confused about the meaning of the probability of an event being 1.

  3. the probability that Opal will get pissed about a list of 2 items.

so can anyone explain to me what the statement:

the probability that an event might occur equals 1

means ? does it mean that the event has to occur ?

or is there a fundamental flaw in my reasoning somewhere ?

Your flaw lies here! You can only add probabilities when the events are mutually exclusive. For instance, you can roll a 2 on your first throw, or you can roll a 3, but you can’t do both. Thus the probability of rolling a 2 or a 3 on your first throw is 1/6 + 1/6 = 1/3. Now, in this example, you can roll a 6 on your first throw, or you can roll a 6 on your second throw. But you can do both! The events are not mutually exclusive. Thus your formula is in error.

The simplest way to do this problem is to consider not rolling 6’s. The probability of not rolling a 6 on your first roll is 5/6. Ditto for the other throws. Since these are independent events (e.g., what you throw the first time does not affect the probabilities for the second time) you can multiply them. The probability that you do not roll a 6 all six times is:

5/6 × 5/6 × 5/6 × 5/6 × 5/6 × 5/6 = 15625/46656

Now, this is the probability that you don’t roll any 6’s at all. The probability that you roll at least one 6 is the negation of this:

1 - 15625/46656 = 31031/46656 = 66.51%

(Assuming, of course, I did my arithmetic right, an event with probability significantly less than 1.)

This probability is not 1. To get the probability of a composite event like this, you need to multiply, not add. In this particular case, it’s easier to calculate the probability that you do not get a 6 in six rolls of the die. Then the event you’re interested in is “no six on the first roll AND no six on the second roll AND no six on the third roll…” etc. This is equal to (5/6)^6, or 15625/46656, approximately 33%. The probability of getting at least one six is then approximately 67%.
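For anyone who wants to check this empirically, here’s a quick sketch in Python (the values it prints should match the exact figure 1 - (5/6)^6 above; the seed and trial count are just illustrative choices):

```python
import random
from fractions import Fraction

# Exact probability of at least one 6 in six rolls: 1 - (5/6)^6
exact = 1 - Fraction(5, 6) ** 6
print(exact)         # 31031/46656
print(float(exact))  # ~0.6651

# Monte Carlo check: roll a fair die six times per trial
random.seed(0)
trials = 100_000
hits = sum(
    any(random.randint(1, 6) == 6 for _ in range(6))
    for _ in range(trials)
)
print(hits / trials)  # should land close to 0.6651
```

Note that the simulated frequency hovers around two thirds, nowhere near 1, even though the naive sum 1/6 × 6 equals 1.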

You can’t add probabilities like that.

You can multiply them, though. But 1/6 * 1/6 * 1/6 * 1/6 * 1/6 * 1/6 would equal the chance of rolling six sixes in a row. For your question, I would work backwards:

  1. Chance of NOT rolling a 6 in one try is 5/6

  2. Chance of NOT rolling a 6 in two tries is 5/6 * 5/6 = 25/36 (logically this makes sense, if you count all the possibilities for rolling two dice.)

  3. Chance of rolling at least one 6 after two tries = 1 - (5/6 * 5/6) = 11/36

Note that 11/36 is less than 2/6
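The “count all the possibilities” check for two throws can be done by brute force (a small Python sketch; it just enumerates the 36 equally likely pairs):

```python
from itertools import product

# All 36 equally likely outcomes of two throws of a fair die
outcomes = list(product(range(1, 7), repeat=2))

# Outcomes containing at least one 6: six pairs (6, x) plus six
# pairs (x, 6), counting (6, 6) only once -- 6 + 6 - 1 = 11
at_least_one_six = [o for o in outcomes if 6 in o]
print(len(outcomes), len(at_least_one_six))  # 36 11
```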

  1. Chance of rolling at least one 6 after three tries = 1 - (5/6 * 5/6 * 5/6) = 125/216

  2. Chance of rolling at least one six after n tries = 1 - (5/6)^n

  3. after 6 tries = 0.6651, just a little less than 2/3

  4. After twelve tries = .8878

  5. After twenty-four tries = .9874
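The general formula 1 - (5/6)^n is easy to tabulate exactly (a sketch using Python’s exact rational arithmetic, so there’s no rounding in the fractions):

```python
from fractions import Fraction

def p_at_least_one_six(n):
    """Probability of rolling at least one 6 in n throws of a fair die."""
    return 1 - Fraction(5, 6) ** n

# Reproduce the values quoted above
for n in (1, 2, 6, 12, 24):
    p = p_at_least_one_six(n)
    print(f"{n:2d} tries: {p} ~= {float(p):.4f}")
```

The probability climbs toward 1 as n grows, but never reaches it for any finite n.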

I love it when the math geeks get all hot and bothered!

:smiley:

Scooped! phooey :wink:

And I made a math error too.

  1. Chance of rolling at least one 6 after three tries = 1 - (5/6 * 5/6 * 5/6) = 91/216

Probability 1 means the event must occur.

  1. The probability of rolling at least one 6 in six rolls is obviously not 1. It’s 1 - (5/6)^6. (Pure probability)

The average number of 6’s rolled in six rolls is 1 = 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6. (Probabilistic average)

So you are confusing a pure probability with a probabilistic average. Probabilistic averages are not extremely useful – really they are more confusing than helpful. The most illuminating way to represent the outcome of rolling a die six times and counting the 6’s is a graph with 7 points plotted – the probability of having 0, 1, 2, 3, 4, 5, or 6 6’s rolled after six rolls – all these numbers are between 0 and 1.

When you calculate the probabilistic average you are adding up these seven important values into one number, so you are losing information.

Consider the problem, what is the average number of 6’s rolled in seven rolls? Obviously the answer is 7/6. Here it is clear that the answer has nothing to do with a probability, because probabilities are always between 0 and 1.

Probability usually equals 1 when the event is so defined, as in your first example: when you throw a die, something will come up.

Your second example is screwed up; you can’t just add together the probabilities that way (if you throw the die seven times, is p = 1.17?)

The probability that you won’t throw a six on any single throw is 5/6; the probability that you won’t throw a six in six throws is multiplicative, so that p = (5/6)^6 = 0.335; the probability that you will roll one or more sixes is 1 - 0.335 = 0.665

And, in general, even having a probability of one that it will occur does not mean it’s guaranteed to occur.

For example, imagine we pick a real number at random from the interval [0,1], such that no number is more likely to be picked than any other number.

(Such a probability function does exist–Lebesgue measure, for example, where the probability of our random number x being in some (Lebesgue measurable) subset A of the reals is equal to the Lebesgue measure of A.)

The probability for any particular number to be picked is zero. However, we know some number b will be picked, so here we have an example of an event with probability zero that actually does happen. Correspondingly, the probability that b will not be picked is one, and that didn’t happen.
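This setup can be poked at numerically, with the caveat that floating-point numbers are only a finite stand-in for the reals (a rough sketch; the empirical frequency of landing in a subinterval approaches the subinterval’s length, i.e. its Lebesgue measure, while hitting any one pre-chosen point essentially never happens):

```python
import random

random.seed(1)
trials = 100_000
a, b = 0.2, 0.5  # subinterval of [0, 1] with Lebesgue measure 0.3

draws = [random.random() for _ in range(trials)]

# Frequency of landing in [a, b] approximates its length (measure)
in_interval = sum(a <= x <= b for x in draws) / trials
print(in_interval)  # close to 0.3

# Frequency of hitting one fixed point is (essentially) zero
target = 0.5
exact_hits = sum(x == target for x in draws)
print(exact_hits)  # almost certainly 0
```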

I may be playing out of my league here, but doesn’t the probability just approach zero? Or is it the reciprocal of some type of infinity, or some such thing? I have a hard time accepting that an event with a probability of exactly zero can still happen. After all, isn’t that the very definition of a probability of zero?

No, in Cabbage’s example, the probabilities really are 1 and 0. If there’s a finite sample space, then an event of probability 1 must occur, and therefore an event of probability 0 can’t.

Once you start dealing with infinite sample spaces, you’re dealing with measure theory, and that’s weird, and all kinds of strange stuff can happen. For instance, the odds of picking a rational number from [0, 1] are zero using the Lebesgue measure, and (if you accept the axiom of choice) there is a subset B of [0, 1] such that there is no way of assigning the probability of picking an element of B from [0, 1]. Don’t ask; it’s really weird.

i bow to you. all. not only do you guys make it so darn clear, you even respond in like minutes.

though i still have some confusion over Cabbage’s post…Cabbage, could you respond to CookingWithGas’s post ?

Thank you, Achernar, heresiarch, jawdirk, Nametag, Cabbage, CookingWithGas, even KneadToKnow :wink: on preview, thanks ultrafilter

i wish i had studied in the Straight Dope School of Mathematics with you guys as my teachers…

do keep the discussion going… it’s really interesting…

and i forgot to mention our beloved Chronos… it’s good to see that you’re still roaming the boards…

Perhaps this has nothing to do with math, but I’d argue the probability of rolling a number 1 through 6 on a single die roll is NOT 1. It is slightly less.

Imagine an uneven surface, or a wall against which the die may “lean”. It’s even conceivable that the die may simply come to rest on an edge, though extremely unlikely.

Yes, but that isn’t math. The mathematical die is really a random variable with a particular distribution over {1, 2, 3, 4, 5, 6}. The physical die is just an imperfect model of that variable.

That’s true, yojimboguy. When doing probability on paper, you assume that the die is “perfect” or “fair”. That is, nothing out of the ordinary like what you’re describing happens, and that all sides are equally likely to come up. In real life, with something as simple as a die, this is a really good approximation. And anyway, if it did come to rest on an edge, what would you do? You’d throw that roll out and redo it.

Cabbage, is it safe to say that your whole thing about events with zero probability occuring is only an issue when dealing with infinite sample spaces?

Yes, though I suspect that not all infinite sample spaces have that property.

Lebesgue measure is a generalization of “segment length”. For example, the Lebesgue measure of the interval [2,28] is 28-2 = 26, which is just what you should expect, since that line segment is 26 units long. Since it’s a generalization of segment length, there are some pretty bizarre sets that you can also assign a Lebesgue measure to, but the basic idea is still segment length. Also, as ultrafilter mentioned, there are even some really bizarre sets that can’t be assigned Lebesgue measure (assuming the axiom of choice).

Anyway, while the definition of Lebesgue measure does involve limits, the actual Lebesgue measure of a set cannot “approach zero” or anything like that; the Lebesgue measure of a set is just some real number (or infinity), either zero or not.

It makes a lot of intuitive sense that the Lebesgue measure of a single point is just zero, period, since the “length” of a single point is just zero. That is why, with the probability function I gave above, the probability of picking a single point is just zero; not approaching zero, but just zero. And the probability of not picking a particular point is exactly one.

Also, speaking more generally here, it’s important to remember the context of what we’re talking about. On one level, probability is just a certain kind of mathematical function satisfying certain properties. There’s absolutely nothing in the axioms that say a probability of zero means it can’t happen, or a probability of 1 means it’s bound to happen.

What those probabilities mean and how to interpret them depends on the actual model you’re using the probability function for. Earlier I gave the example of an infinite sample space in which probability one does not mean it necessarily happens. What about a probability function with a finite sample space?

Building on yojimboguy’s comment, I see nothing wrong with the following probability function for rolling a single six sided die:

The sample space will be: {1, 2, 3, 4, 5, 6, something else happens}.

I’ll choose to define the probabilities of 1,2,…,6 as being 1/6 each, and the probability of “something else happens” as being zero (and extend it additively to assign the probability of other subsets of the sample space). I’ll set up my model that way because I’m only interested in the behavior of the die working the way it’s intended to. I don’t care about the times the die lands on its edge, or, if the die may be particularly fragile, cracks into two pieces when it lands. Either of these events can certainly happen, however; just because I assigned them probability zero doesn’t prohibit them from happening, it’s just the fact that I’m not interested in them.

So here I have a perfectly valid probability model; it may be a little unconventional, but it certainly satisfies all the probability axioms. It’s even an “accurate” probability model, in that it describes the probabilities of a fair die (working the way it’s intended to). However, the way I’ve chosen to assign the probabilities demonstrates that probability one does not mean “guaranteed to happen,” e.g., the probability that a 1, 2, 3, 4, 5, or 6 is rolled is one, but that doesn’t exclude the possibility that it may land on its edge, or break when it hits the table.

In other words, there doesn’t seem to be any mathematically objective part of probability theory defining what probability one or probability zero means; it seems to be all about how you set up your model and how you interpret it.
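The unconventional model described above is easy to write down explicitly (a sketch; the outcome labels are my own, and the point is only that the axioms are satisfied while an outcome of probability zero remains in the sample space):

```python
from fractions import Fraction

# Sample space including an outcome we choose to assign probability zero
model = {face: Fraction(1, 6) for face in (1, 2, 3, 4, 5, 6)}
model["something else happens"] = Fraction(0)

# The probability axioms hold: nonnegative values, total mass 1
print(all(p >= 0 for p in model.values()))  # True
print(sum(model.values()))                  # 1

# P(roll a 1 through 6) = 1, yet "something else" is still a possible
# outcome -- probability 1 here doesn't mean "guaranteed to happen"
p_normal = sum(model[face] for face in (1, 2, 3, 4, 5, 6))
print(p_normal)  # 1
```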

no, something with a probability of one need not occur.

if you flip a coin an infinite number of times, the odds that a tails will come up get closer and closer to one. since you flipped it an infinite number of times, the probability IS one (a number infinitely close to one IS one… that’s the basis of calculus… so it’s gotta be true)

however, flip it the first time… heads comes up. flip it the second time… heads comes up… flip it the billionth time, still heads. no law of physics ever says it HAS to be tails. every time you flip it, it’s got no memory of what happened last. so although it’s magically unlikely… you could flip it infinitely and have it come up heads every time, even with tails having a probability of one of coming up.

owl, your example is that of the probability approaching 1 as a limit; that’s different than being exactly 1.

I do add the minor note that you have a probability of 1 of getting {1, 2, 3, 4, 5, 6} when you roll a mathematical die. However, that means that IF you roll the die, THEN you will get one of those outcomes. It does NOT mean that you have to roll the die. Semantics, perhaps, but an important distinction.