How do they calculate Pi?

As far as the boundary goes, a circle is a closed set, meaning that it contains its boundary.

What you’re describing here sounds a lot like Cesàro summability, which is an approach I’ve never seen used before. There’s no explanation on MathWorld, so I’ll post one when I get home and have references, if no one else does.

Here’s a quick Perl script to do this method with ten-deep averaging. It converges to 14 digits with just 50 terms of the series:


use strict;
my (@sum, @lastsum, $i, $j, $sign);
my $pi = 4 * atan2(1, 1);

$sign = 1;
for ($i = 0; $i < 50; $i++) {
  for ($j = 0; $j < 10; $j++) {   # how many averages deep
    $lastsum[$j] = $sum[$j];
    if ($j == 0) { $sum[0] += $sign * 4 / (2 * $i + 1) }
    else { $sum[$j] = ($sum[$j-1] + $lastsum[$j-1]) / 2 }
  }
  print "$sum[9]\n";
  $sign = -$sign;
}
print $pi - $sum[9], "\n";

Here’s a quick JavaScript document that does something similar. But I see that I have a different idea of what “ten deep averaging” means than you do.

I didn’t validate the code, so be warned it may crash your browser. But I tested it in Netscape, Opera, and IE, and it worked.

Achernar, doing it that way might converge a little faster. It took 10 seconds on my PC for 10,000 terms, and gave 9 significant digits. And for each extra digit, it takes 10 times as many terms, so to get 14 digits, it would take about 12 days.

The kind of ten-deep averaging that I’m talking about gives 14 digits in a very small fraction of a second (I don’t know how to time more accurately than one second in Perl, so I ran it a bunch of times while timing it with one-second resolution, and never saw it indicate more than zero).

And your JavaScript runs fine in Mozilla Firebird, BTW.

CurtC, I must admit that your averaging technique is opaque to me, but it turns out to be equivalent to an averaging weighted by the binomial coefficients. I have no idea why this is more efficient, but I’ve modified my code so that it shows both averagings, and it clearly is.

I’ll see if I can explain the averaging technique a little more simply. If you just do the straight series, then the 96th, 97th, 98th, 99th, and 100th numbers are:
3.13117626945498
3.15190165805602
3.13138883754320
3.15169340607112
3.13159290355855

Notice how this goes below, then above, then below the actual value of pi. So if you average two adjacent values, the result is much closer. These four numbers are the averages of the (96th and 97th), (97th and 98th), (98th and 99th), and (99th and 100th):
3.14153896375550
3.14164524779961
3.14154112180716
3.14164315481483

Notice that these numbers are much closer to pi, but they still bounce above and below. Now, if we average each pair of adjacent terms of those four numbers, we get these three:
3.14159210577755
3.14159318480338
3.14159213831100

These are closer still, but still alternate high and low. If you continue averaging the last two averages, you get closer and closer to the actual value. And all we’ve done is take the first 100 terms of the series, then compute a handful of averages, and we’re within 0.000001 of pi. Just a few more averages, and we’re 0.000000000000000001 away. Sorry, I don’t know what you mean by weighted by the binomial coefficients.
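The repeated-averaging procedure described above is straightforward to write out. Here's a minimal Python sketch of it (mine, not CurtC's Perl): build the first 100 partial sums of the 4 − 4/3 + 4/5 − … series, then average adjacent pairs ten times over the last eleven of them.

```python
# Iterated pairwise averaging of Leibniz partial sums for pi.
# A sketch of the technique described above, not the original Perl code.
import math

def leibniz_partials(n):
    """First n partial sums of 4 - 4/3 + 4/5 - 4/7 + ..."""
    s, out, sign = 0.0, [], 1
    for k in range(n):
        s += sign * 4.0 / (2 * k + 1)
        out.append(s)
        sign = -sign
    return out

def average_down(values, depth):
    """Repeatedly replace the list with averages of adjacent pairs."""
    for _ in range(depth):
        values = [(a + b) / 2 for a, b in zip(values, values[1:])]
    return values

partials = leibniz_partials(100)
best = average_down(partials[-11:], 10)[0]   # ten passes over the last 11 sums
print(best, abs(best - math.pi))
```

The raw 100th partial sum is only good to about two digits; the averaged value is good to many more, which is the whole point of the trick.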

Binomial coefficients work like this. For a given N, there are N+1 numbers, and they always add up to 2[sup]N[/sup]. These are called the binomial coefficients. The kth binomial coefficient is denoted C(N, k), and is given by N! / (k! (N - k)!). For N = 4, the coefficients are:

1 4 6 4 1

In your above example, you’ll find you get the same answer if you take:

1 × 3.13117626945498
4 × 3.15190165805602
6 × 3.13138883754320
4 × 3.15169340607112
1 × 3.13159290355855

and divide the whole thing by 2[sup]4[/sup]. The numbers listed above are “weighted” differently because some are counted more than others. The binomial coefficients for N = 10 are:

1 10 45 120 210 252 210 120 45 10 1

If you’ve ever built a Pascal’s Triangle for fun you may recognize them. I only bring this up because the method of building the Triangle “feels” a lot like your iterative averaging.
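That equivalence is easy to check numerically. Here's a small Python sketch (mine, not from the thread) that runs the five quoted partial sums through four passes of adjacent-pair averaging and compares the result with the binomial-weighted average:

```python
# Check: N passes of adjacent-pair averaging over N+1 values equals
# the weighted average with binomial weights C(N, k) / 2**N.
from math import comb

values = [3.13117626945498, 3.15190165805602, 3.13138883754320,
          3.15169340607112, 3.13159290355855]   # the five sums quoted above
N = len(values) - 1

# Pairwise averaging, N times, collapses the list to a single value.
avg = values[:]
for _ in range(N):
    avg = [(a + b) / 2 for a, b in zip(avg, avg[1:])]

# Binomial-weighted average: (1*v0 + 4*v1 + 6*v2 + 4*v3 + 1*v4) / 16.
weighted = sum(comb(N, k) * v for k, v in enumerate(values)) / 2 ** N

print(avg[0], weighted)
```

The two agree to rounding error, because each averaging pass is exactly one row of Pascal's Triangle applied to the data.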

These binomial coefficients pop up a lot, so it wouldn’t surprise me if they’re also part of the Cesàro summability that ultrafilter mentioned. But I still don’t see why it gives such a good fit. It seems to me like the best bet would be to weight the last term the most, because it’s closest to the right answer. Your method weights the sixth-from-last term the most. It’s very interesting to me, because it’s not intuitive.

Try these methods on e, and see which one converges faster.

Well, summing inverse factorials, they’re both much slower than simply taking the last term. I think it’s easy to see why; I wouldn’t expect it to work at all for non-alternating series:

e.html
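For what it's worth, here's a rough Python sketch of that comparison (my own guess at what the linked page does): compute e = 1/0! + 1/1! + 1/2! + … and compare the error of the plain last partial sum against the fully averaged one.

```python
# For the non-alternating series e = 1/0! + 1/1! + 1/2! + ...,
# pairwise averaging only drags in older, less accurate partial sums.
# A sketch of the comparison, not the code behind the thread's e.html link.
import math

partials, s, fact = [], 0.0, 1.0
for n in range(12):
    s += 1.0 / fact        # add 1/n!
    partials.append(s)
    fact *= n + 1

plain_err = abs(partials[-1] - math.e)

# Average adjacent pairs all the way down to one value.
avg = partials[:]
while len(avg) > 1:
    avg = [(a + b) / 2 for a, b in zip(avg, avg[1:])]
averaged_err = abs(avg[0] - math.e)

print(plain_err, averaged_err)
```

Since every partial sum here approaches e from below, averaging can't cancel anything; it just mixes in the worse early terms, so the averaged error comes out orders of magnitude larger.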

Sure, they’re both slower. But which one is faster than the other?

Of course, we should all keep in mind that maybe the apparently faster method is only faster for the first 10[sup]googolplex[/sup] terms, after which the apparently slower method overtakes it.

I remember calculating PI on an old Commodore 64 using BASIC. I decided to do the same thing and average the “high” value with the next “low” value. Yes it takes a LONG time to converge.
Okay enough of that story.

So when people are calculating PI how do they know how many digits are correct? I mean sure they can compare it to a known value of PI but what if you are exploring new PI territory? How do you know it is accurate to so many decimal places?

I’d like to know that too. Does anyone know the formula for it that’s infinite? There’s one that I thought I knew, but when I use it, it always ends in 3.1416. When I was in school, the arithmetic teacher was teaching a way to calculate it where the answer was 3.1415927… ad infinitum (the ratio of a circle’s diameter to its circumference, I think, or vice versa). Thanks!

It depends somewhat on the method used to compute pi. But as an example, consider the formula 4-4/3+4/5-4/7+… = pi. Suppose I’ve added up the terms all the way up to -4/399 (this gives 3.13659 or so). The remaining terms are 4/401-4/403+4/405-…, and it’s easy to check (since it’s an alternating series) that the sum of these remaining terms must be bigger than zero and less than 4/401, which in turn is less than 1/100. Since I’ve got 3.13659… so far, and since the stuff left adds up to somewhere between 0 and 0.01, I know that pi must begin with either 3.13 or 3.14; in other words at this point I know for certain that I have two, possibly three, digits correct.
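That bound is easy to verify numerically; here's a quick Python sketch (mine, not Orbifold's) that sums the series through the −4/399 term and checks that the leftover tail really is between 0 and 4/401:

```python
# Alternating series bound: after stopping at the -4/399 term,
# the remaining tail lies strictly between 0 and the next term, 4/401.
import math

s, sign = 0.0, 1
for k in range(200):               # last term added is -4/399 (k = 199)
    s += sign * 4.0 / (2 * k + 1)
    sign = -sign

tail = math.pi - s
print(s, tail)                     # s is about 3.13659; 0 < tail < 4/401
```

So with 200 terms summed, the partial sum pins pi down to between 3.13659 and 3.14657, which is why only two or three digits are certain at that point.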

Thanks Orbifold.
Your answer shows two things: 1) how they know the number of decimals that are accurate, and 2) how incredibly inefficient that particular formula is. (Although it certainly is easy to remember, and it's probably the most famous. I think Leibniz discovered that one.)

Quoth ultrafilter:

Completely true and completely irrelevant. A circle not only contains its boundary, it is its boundary. But what we’re interested in here is not the circle (the set of points (x,y) such that x[sup]2[/sup] + y[sup]2[/sup] = 1), but the disk. And a disk can be defined to be open or closed.

But it doesn’t matter, because the area of an open disk and a closed disk of the same radius is the same. Which means that, if we were using infinite-precision random numbers, we wouldn’t need to ever consider the case where x[sup]2[/sup] + y[sup]2[/sup] = 1. Of course, we can’t calculate with infinite-precision random numbers, but we can just keep adding more digits until we’re sure we’re either inside or outside the circle.

Something’s been bugging me about this procedure.

The points where x[sup]2[/sup]+y[sup]2[/sup] < 1 will constitute the interior of a perfect circle.

But the points where x[sup]2[/sup]+y[sup]2[/sup] <= 1 will also constitute the interior of a perfect circle, one which is infinitesimally larger than the first.

In theory, these two circles are the same size (since .9999… does equal 1) but the program is dealing with specific integers. Which formula should be used?

Right.

Technically, no. They will constitute the union of a circle with its interior, which is not the same as the interior of any larger circle.

However, the two sets will have the same area, as everyone’s noted.

Either formula should be just as accurate as the other, for the purpose of implementation.

I agree that in any realistic program, the difference made will be much, much less than the accuracy of the results (Monte Carlo is extremely inefficient), so it doesn’t matter.

But if you’re really worried about it, why not count the points on the circle as 1/2?
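That half-counting tweak is easy to fold into the basic Monte Carlo estimate. Here's a Python sketch of the idea (the thread's examples are Perl and JavaScript; this is mine, not anyone's posted code):

```python
# Monte Carlo estimate of pi over the unit square, counting points
# that land exactly on the circle x^2 + y^2 = 1 as one half.
# A sketch of the idea discussed above, not anyone's posted code.
import random

def estimate_pi(n, seed=0):
    rng = random.Random(seed)      # seeded so the run is repeatable
    inside = 0.0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        r2 = x * x + y * y
        if r2 < 1.0:
            inside += 1.0
        elif r2 == 1.0:            # vanishingly rare with floating point
            inside += 0.5
    return 4.0 * inside / n

print(estimate_pi(100_000))        # roughly 3.14, only a couple of digits
```

With floating-point coordinates the boundary case almost never fires, which is the earlier point about the open and closed disks having the same area; the half-count just makes the tie-break symmetric if it ever does.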