The type where you throw a die N times and sum the results. For example, two six-sided dice are most likely to roll a seven and least likely to roll a two or a twelve. In general it seems that the more times the die is thrown, the more likely the resulting sum will be close to the mean of possible values. A coin (d2) with heads counted as “1” and tails as “0” when flipped 100 times will most likely be very close to a sum of 50. I don’t think this is a normal “bell curve” distribution since it has a much steeper curve.
I think it’s a bell-curve.
This may also be of help:
The “bell curve” is the layman’s term for a normal distribution. (Note that “normal” in this context has a special statistical meaning.) The graphs on the wiki page will show you that a normal or bell curve distribution can be very peaked or very flat, and still be considered “normal”. The standard deviation of the distribution will determine the exact shape.
You may also find this useful, as it actually talks about coin tosses:
Rather than do a half-baked job of explaining further, I will defer to wiki…
The distribution for rolling a single die is uniform – an equal probability of each number. The distribution for two dice is triangular – you can impose a normal curve on it, but it’s a poor fit and useless besides, as the true probabilities are easier to calculate, being discrete fractions of the 36 combinations. As the number of dice rolled at once grows larger, the distribution becomes closer to the normal distribution, but remains discrete (“stepwise”), and so is never truly described by the Gaussian function.
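The triangular two-dice shape is easy to verify by brute-force enumeration of the 36 combinations; a quick Python sketch:

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of two six-sided dice
# and tally each possible sum.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {total: Fraction(n, 36) for total, n in counts.items()}

# Print the triangular distribution: peak at 7, minimum at 2 and 12.
for total in range(2, 13):
    print(total, probs[total])
```

The probabilities rise linearly from 1/36 at 2 up to 6/36 at 7, then fall back to 1/36 at 12 – a triangle, not a bell.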
Oh, so it is a normal distribution; just not the typical “about as high as it is wide” bell curve you usually see. Thanks.
It’s not a normal distribution, but for a large number of dice, the difference is small enough that it’s not worth worrying about.
Another name for “normal distribution” that does not try to imply whether it is normal in a given situation or not is “Gaussian distribution”.
>Oh, so it is a normal distribution; just not the typical “about as high as it is wide” bell curve you usually see.
Since the bell curve is a graph relating two quantities that have different units, its height-to-width ratio is completely arbitrary, depending on how you scale the two axes. You can’t scale them the same, because they show different things.
How wide and high the curve looks is just a matter of how you choose to draw it. If yours looks narrow, just zoom in on the x-axis by an appropriate amount, and it will look as wide as you want it to.
ETA: Ah, didn’t see Napier’s post saying the same thing…
Wouldn’t this be a binomial distribution?
Let’s be clear here since the OP is not an expert. To make UltraFilter’s point more explicit …
“For a large number of dice” is not the same thing as a “large number of throws”
If somebody were to throw 2 dice many times & plot the results, the resulting curve would have a peak in the middle at 7 with fewer 2s and 12s at the ends. But the curve would not be “normal” in the formal sense. It would not be a Gaussian distribution. It wouldn’t even be particularly bell-shaped.
Now try throwing a bucket of 100 dice & plotting the outcome. 100 would be the lowest number, 600 the highest, and the middle value would be 350. The curve would be smoother than the two-dice case, and would be closer to a “normal”, Gaussian function. Most folks who looked at the plot would see a bell shape.
Now try throwing a railroad boxcar full of dice. Per CSX.com (CSX rail, intermodal and rail-to-truck transload services), that’s very roughly 6000 cubic feet, & if we use 1/2" dice we get about 83 million dice per roll.
So now the minimum roll is 83 million, the max is 498 million, and the peak is at about 290.5 million. That curve would be pretty damn close to normal / Gaussian.
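For what it’s worth, the boxcar arithmetic checks out; a quick Python sketch, assuming the same 6000 cubic feet and 1/2-inch dice (the round numbers in the post come from rounding 82,944,000 up to 83 million):

```python
# Rough check of the boxcar numbers: 6000 cubic feet of 1/2-inch dice
# (both figures taken from the post above).
boxcar_cubic_inches = 6000 * 12**3   # 6000 ft^3 expressed in cubic inches
die_cubic_inches = 0.5**3            # volume of one 1/2" die
num_dice = int(boxcar_cubic_inches / die_cubic_inches)

min_sum = num_dice * 1      # every die shows 1
max_sum = num_dice * 6      # every die shows 6
mean_sum = num_dice * 3.5   # expected value of one d6 is 3.5

print(num_dice, min_sum, max_sum, mean_sum)
```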
No. Why do you think it is?
OP, I think the key phrase you’re looking for here is “Expected Value.” This is defined for both discrete probability distributions and continuous probability distributions. I’ll apply this to your two hypothetical cases.
In the dice-rolling case, the expected value of a single, fair die is 3.5. This is the sum, from i=1 to 6, of x_i*p(x_i). Thus, the expected value of two dice is 3.5 + 3.5 = 7, which is the value you intuitively arrived at.
In the coin flip scenario, the expected value is 0.5, calculated in a similar fashion. This is also the value you arrived at by intuition.
See this Wikipedia link for a much more thorough explanation. In particular, look at the “Examples” section. Your two-dice example is worked out as I described above.
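A minimal Python sketch of the definition (the sum of x·p(x) over all outcomes), reproducing both values:

```python
from fractions import Fraction

def expected_value(outcomes):
    """E[X] = sum of x * p(x) over all (outcome, probability) pairs."""
    return sum(x * p for x, p in outcomes)

# Fair six-sided die: faces 1..6, each with probability 1/6.
die = [(i, Fraction(1, 6)) for i in range(1, 7)]
# Coin flip with heads = 1, tails = 0, each with probability 1/2.
coin = [(1, Fraction(1, 2)), (0, Fraction(1, 2))]

print(expected_value(die))                        # 7/2, i.e. 3.5
print(expected_value(coin))                       # 1/2
print(expected_value(die) + expected_value(die))  # 7, the two-dice mean
```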
Actually, for coins (2-sided dice), it is a binomial distribution, as that’s exactly the definition of a binomial distribution.
I don’t know off-hand whether many-sided dice end up being a binomial distribution precisely, or just something very similar, but it will be the same basic idea.
But the take-away message is that as the number of dice added together gets larger, it becomes more and more like a ‘normal’/Gaussian distribution (except of course a sum-of-dice distribution can only have certain, whole-number values, while the classic Gaussian distribution can be any real number)
According to Wolfram MathWorld (from the people who make Mathematica) it’s just called the uniform sum distribution. The link gives the equations that describe the probability distribution. As others have said, this approximates a Gaussian distribution as N (the number of die throws that are added together) increases.
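To see the convergence numerically, here’s a Python sketch that builds the exact sum-of-N-dice distribution by repeated convolution and compares its peak to a Gaussian with the same mean and standard deviation (N = 10 is an arbitrary example size):

```python
import math

def dice_sum_distribution(n_dice, sides=6):
    """Exact distribution of the sum of n_dice fair dice,
    built by repeated convolution of the single-die distribution."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        new = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        dist = new
    return dist

n = 10                          # arbitrary example size
dist = dice_sum_distribution(n)
mean = 3.5 * n                  # mean of one d6 is 3.5
sd = math.sqrt(n * 35 / 12)     # variance of one d6 is (6^2 - 1)/12 = 35/12

# Compare the exact probability of the most likely sum with the
# Gaussian density at the mean; for 10 dice they already agree to
# about two decimal places.
peak = dist[round(mean)]
gauss = 1 / (sd * math.sqrt(2 * math.pi))
print(peak, gauss)
```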
In fact, this is why the Gaussian distribution gets so much attention: Many things in real life can be approximated as a whole bunch of independent random things of about the same size added together, and whenever you have that, the Gaussian distribution is a good approximation.
If you want to find precisely what Gaussian distribution you have, you need to know the mean and the standard deviation (those two numbers completely specify a Gaussian). The mean is easy: The mean of a single roll of an n-sided die is (n+1)/2, and when you add multiple dice together, you just add the means. To get the standard deviation, we first get something called the variance, which is the square of the standard deviation. A single n-sided die has a variance of (n[sup]2[/sup] - 1)/12. When you add or subtract multiple dice together, you add the variances (not the standard deviations) together to find the total variance. Then, you take the square root of the total variance to find the total standard deviation.
As an example, suppose you’re rolling 12d12 (that is to say, roll a 12-sided die 12 times, and add them together) – You might have to do this in some games to calculate the damage done by the breath of a particularly powerful dragon, for instance. An individual d12 has a mean of 6.5, so 12d12 has a mean of 12 × 6.5 = 78. And an individual d12 has a variance of (12[sup]2[/sup] - 1)/12, or 11.916, so 12d12 has a variance of 12 × 11.916 = 143. Taking the square root, we find that the standard deviation is 11.958… So our dragon’s breath does 78 ± 11.958 damage.
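The 12d12 arithmetic can be checked with a few lines of Python, using the mean and variance formulas from the previous post:

```python
import math

def ndm_stats(n, m):
    """Mean and standard deviation of NdM: a single M-sided die has
    mean (M + 1)/2 and variance (M^2 - 1)/12; means and variances add."""
    mean = n * (m + 1) / 2
    variance = n * (m**2 - 1) / 12
    return mean, math.sqrt(variance)

mean, sd = ndm_stats(12, 12)
print(mean, sd)   # 78.0 and roughly 11.958
```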