How Many Zeros in a Row are Possible in Pi?

How many digits of pi do we need? NASA typically uses 16, presumably because that’s the double precision representation.

For example, the Voyager 1 probe, which is now in interstellar space, is currently more than 15 billion miles (24 billion km) from Earth. If you wanted to calculate the circumference of a circle with this distance as the radius, the difference between using the first 16 digits of pi and using hundreds of digits would be less than the width of a little finger, according to NASA.
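Just to make that concrete, here’s a rough back-of-the-envelope check. It’s only a sketch: it assumes the third-party mpmath library, and uses the 24-billion-km radius from the quote.

```python
# Sketch: how far off is a 16-digit pi for a circle the size of Voyager 1's distance?
from mpmath import mp, mpf, pi

mp.dps = 50                        # 50-digit pi as the "exact" reference
radius_km = mpf(24_000_000_000)    # ~24 billion km, the figure quoted above
pi_16 = mpf("3.141592653589793")   # pi to 16 significant digits

error_cm = 2 * radius_km * abs(pi - pi_16) * 1e5   # km -> cm
print(error_cm)                    # on the order of a centimeter
```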

The maximum practical number of digits required is 38 according to the article, which I’ll round up to 40:

For example, if you wanted to calculate the circumference of a circle that encapsulated the known universe, the radius of that circle would be around 46 billion light-years — the distance light has traveled since the Big Bang when factoring in the expansion of the universe. In this case, you would need 38 decimals of pi to get a value with the same level of accuracy with which we can currently measure the width of an atom, according to NASA.

An atom? What’s that in Planck units? That would be 10^25 according to Joe Blow on the internet at Quora. So I say we should keep 100 digits of pi, just to be safe. Though 70 would be fine.

Citation:
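To put rough numbers on the “70 would be fine” part, here’s a sketch with rounded physical constants (nothing authoritative, just the order-of-magnitude arithmetic):

```python
# Sketch: how many digits of pi make the circumference of a circle the size of the
# observable universe accurate to about one Planck length?  Constants are rounded.
import math

LIGHT_YEAR_M  = 9.461e15     # meters per light-year
PLANCK_LENGTH = 1.616e-35    # meters

radius_m = 46e9 * LIGHT_YEAR_M                           # ~46 billion light-years
circumference_in_planck = 2 * math.pi * radius_m / PLANCK_LENGTH
print(math.ceil(math.log10(circumference_in_planck)))    # ~63 significant digits
```

That lines up with 38 digits for atom-width accuracy plus roughly 25 more to go from an atom down to a Planck length.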

So… what the heck are we doing past, say, 200? :confused:

ETA: The gang in charge of calculating the fundamental constants of the universe uses the quadruple precision representation of pi, or 32 digits. Cite. Also:

Pi computation can be used to test computer precision, but I think this is a symptom of pi-mania rather than a legitimate need for pi. Other numbers could be used just as meaningfully, but we choose to use pi.

Not sure I buy the author’s POV: I mean, using something well understood, widely recognized, and epic seems sensible.

Quadruple precision = 34 digits
Octuple precision = 71 digits (though one might ask what is wrong with your algorithm that you really need that many digits…)
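Those digit counts fall straight out of the significand widths (53, 113, and 237 bits for IEEE binary64/128/256) multiplied by log10(2):

```python
import math

for name, significand_bits in [("double (binary64)", 53),
                               ("quadruple (binary128)", 113),
                               ("octuple (binary256)", 237)]:
    # decimal digits of precision ~ significand bits * log10(2)
    print(f"{name}: ~{significand_bits * math.log10(2):.1f} decimal digits")
```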

If it’s just a test, I don’t think the specific constant matters, but the algorithm does: it should be exercising vector instructions, multiple cores, NUMA, and so on (I don’t even know exactly what, but every CPU and I/O feature available) to get maximum performance. And, of course, you need to check your answer…

I remember back in ancient times (in computing terms) one system I programmed in could do 9 decimal digits of precision. If you’re cutting a cake down to the molecule level that’s good enough.

In Physics, the largest number of digits I ever saw anyone use was 14. They were calculating some value of a hydrogen atom. Maybe they are up to 15-16 digits now.

(One thing that I saw all too often was “advice” along the lines of “If your input data is single precision, use double precision operations to reduce arithmetical loss.” Nope, you’re just wasting time multiplying/adding junk extra bits.)

God, that was a tiresome novel.

Had to study it for GCSE ‘A’ Level in my final year of school. Never read Joyce since.

The problem isn’t the speed. There are digit-generators that grow the number of digits exponentially (e.g., every iteration has twice as many digits as the previous one), which can keep up with big numbers like 10 to the 1000, if run for a reasonable amount of time. The problem is storing all of those digits.

With many programs, the amount of extra time needed by the program is less than the amount of time that would be expended by the programmer determining whether double precision was needed or not. If performance actually matters, then make sure you’re working efficiently (which could mean lower precision, or it could mean restructuring your loops, or it could mean using a different language, or…). If it doesn’t, then just call everything double precision and call it a day.

See also the correspondence at the end of this column:

Very nice. Much better than my quote.

Do you need to store all those digits if they are accurate and the only thing of interest is repetition? I presume that you do need to if the next iteration depends on the last one, which is already mindbogglingly huge. But how many digits could you realistically store? How many digits would you need to?

The quadratically converging algorithms that Chronos mentions require storing all the digits. They depend on every step happening with full precision.
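For the curious, the Gauss–Legendre (Brent–Salamin) iteration is the standard example of such a scheme. A minimal sketch using Python’s decimal module; note that every intermediate value is carried at the full target precision, which is exactly the storage problem:

```python
from decimal import Decimal, getcontext

def pi_gauss_legendre(digits):
    """Gauss-Legendre (Brent-Salamin) iteration: correct digits roughly double
    each pass, but every intermediate value is held at full working precision."""
    getcontext().prec = digits + 10          # guard digits
    a = Decimal(1)
    b = Decimal(2).sqrt() / 2                # 1/sqrt(2)
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(digits.bit_length()):     # ~log2(digits) iterations suffice
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(str(pi_gauss_legendre(50))[:52])       # "3." plus 50 digits
```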

There are single-digit algorithms that you could use, keeping only a running count of the longest strings you’ve encountered, but these techniques are much slower than the quadratic algorithms. So there is a tradeoff.
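The run-counting part itself is trivial; here’s a sketch that scans a digit string and keeps only the running tally. The digits come from mpmath purely so there’s something to scan; a real streaming scan wouldn’t store them at all.

```python
from mpmath import mp, nstr

mp.dps = 100_000                       # working precision: 100,000 digits
digits = nstr(+mp.pi, 100_000)[2:]     # strip the leading "3."

longest = run = 0
for d in digits:                       # only a running count is kept, not positions
    run = run + 1 if d == "0" else 0
    longest = max(longest, run)
print("longest run of zeros in the first 100,000 digits:", longest)
```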

Some people did compute the 100 quadrillionth (hexadecimal) digit of pi. That’s about 1000x farther out than pi has been fully computed to.
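That trick relies on a digit-extraction formula: Bailey–Borwein–Plouffe lets you compute an isolated hexadecimal digit without generating the ones before it. A toy version follows; the double-precision arithmetic here limits how far out it stays reliable, and the record computations use far more careful code.

```python
def pi_hex_digit(position):
    """Hex digit of pi at 1-indexed `position` after the point, via the
    Bailey-Borwein-Plouffe formula.  Float rounding limits how far out
    this simple version can safely go."""
    def frac_series(j, d):
        # fractional part of sum_k 16**(d-k) / (8k + j)
        s = 0.0
        for k in range(d):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = d, 1.0
        while term > 1e-17:
            term = 16.0 ** (d - k) / (8 * k + j)
            s = (s + term) % 1.0
            k += 1
        return s

    d = position - 1
    x = (4 * frac_series(1, d) - 2 * frac_series(4, d)
         - frac_series(5, d) - frac_series(6, d)) % 1.0
    return "0123456789abcdef"[int(x * 16)]

print("".join(pi_hex_digit(i) for i in range(1, 9)))   # pi = 3.243f6a88... in hex
```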

Fairly sure it was Archimedes who popularized the huge-piles-of-sand meme…

Calculating (and storing the value of) pi to trillions of digits allows researchers to test various hypotheses in number theory related to pi. For example, the digits computed so far look approximately normal.
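“Approximately normal” here means checks like the following come out flat: count the digit frequencies and see that they hover near 1/10 (mpmath again, just to have digits to count; the real studies look at blocks of digits too, and at far more of them).

```python
from collections import Counter
from mpmath import mp, nstr

mp.dps = 100_000
digits = nstr(+mp.pi, 100_000)[2:]     # decimal digits after the "3."

counts = Counter(digits)
for d in "0123456789":
    # base-10 normality requires every digit to approach frequency 1/10 in the limit
    print(d, counts[d] / len(digits))
```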

But that doesn’t actually get them any closer to knowing if pi is truly normal. Heck, if they calculated it to a trillion digits and found that every digit from the billionth to the trillionth was zero, they still wouldn’t know whether pi is normal.

Oh, and

If I could roll a die an infinite number of times, then I could generate an irrational number by rolling a d6 for every binary digit, and writing down a 1 every time I got a 6 on the die, and a 0 for every other die result. That number would be truly random, with an infinite information content, but it would not be normal.
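A quick simulation of that construction, just to illustrate the point: each bit is genuinely random, but ones turn up only about 1/6 of the time, so the expansion can’t be normal in base 2.

```python
import random

random.seed(1)
# 1 whenever the die shows a 6, else 0 -- the construction described above
bits = [1 if random.randint(1, 6) == 6 else 0 for _ in range(1_000_000)]
print(sum(bits) / len(bits))   # ~0.167, not the 0.5 a normal binary expansion requires
```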

Sure, though I thought it was clear from the context that I was talking about even distributions.

The sequence from your proposal is “random” but is highly compressible. I can divide it up into blocks and assign fewer bits to those strings that contain about 1/6 ones and 5/6 zeroes. Blocks with differing proportions would require longer bit strings, but they’re much less probable and so I get a savings overall. The first trillion bits would require less than a trillion bits to store.

But if the bits are generated by a fair coin flip, you can’t do anything like that.

A number may be normal while failing other statistical tests of randomness, not that anyone has yet found any such bias in the digits of pi, or e, etc., as far as I know.

Just to correct myself a bit: it’s actually the all-zeroes string that’s most probable. Strings with about 1/6 ones are more probable in aggregate, but individually, the fewer ones the more likely. So the all-zeroes string would get the fewest bits, and strings with more ones would get progressively more bits. If I chunk into 256-bit blocks, then the all-zeroes string would get 68 bits, while the all-ones string would get 662 bits. As the block size increases (or I use some alternate scheme like arithmetic coding), I only need about 0.65 bits per input bit to store the sequence.
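Those figures are just ideal code lengths, i.e. -log2 of each block’s probability (rounded up to whole bits), plus the entropy of a 1/6-biased bit:

```python
import math

p = 1 / 6        # probability of a 1 (i.e., rolling a 6)
block = 256      # block size in bits

# ideal code length of a block = -log2(probability); code words need whole bits
all_zeros = math.ceil(-block * math.log2(1 - p))   # 68 bits
all_ones  = math.ceil(-block * math.log2(p))       # 662 bits
per_bit   = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(all_zeros, all_ones, round(per_bit, 2))      # 68 662 0.65
```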

An aside on precision.
Asking how many digits of precision is needed to usefully represent the result is the wrong question.

Numerical instability is a very real issue and still underappreciated in far too many areas. It is entirely possible to get an answer that is just plain wrong if care isn’t taken. Sometimes more precision is the only way to tame calculations. The hard yards have been done in the well-known numerical libraries in terms of getting the algorithms right. But even then you only get so far. Performing sensitivity analysis and tests should be part of any numerical code writing, but it is often ignored. I am quite sure there are research results out there that are basically nonsense because of this.
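A tiny example of the kind of thing meant here: two algebraically identical formulas for the small root of a quadratic, one of which throws away nearly all its digits to cancellation. More precision only postpones this sort of problem; reformulating removes it.

```python
import math

# Roots of x^2 - 1e8*x + 1 = 0.  The small root is ~1e-8.
a, b, c = 1.0, -1e8, 1.0
disc = math.sqrt(b * b - 4 * a * c)

naive  = (-b - disc) / (2 * a)    # subtracts two nearly equal numbers
stable = (2 * c) / (-b + disc)    # algebraically identical, no cancellation

print(naive)    # ~7.45e-09 -- barely any correct digits
print(stable)   # ~1.0000000000000001e-08 -- essentially full precision
```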

Sadly this stuff doesn’t get taught in many disciplines. I still remember a chemist colleague telling me how he caught a grad student blithely accepting a computational result, and having to explain how negative eigenvalues were indicative of non-physical results.

Even in computer science little is taught. All the kids want to get rich using LLM AI. This isn’t good.

I took the phrase “algorithmic randomness” to mean in a Kolmogorov sense. The lopsided sequence you mention is quite definitely not random in a Kolmogorov way. (It will compress fairly well, for example.) YMMV

You have to look a lot further than that. From the same cite I posted up in Post #11, there is a sequence of 12 zeroes starting at position 1,755,524,129,973.

Also, “you can’t possibly imagine”? Argument from incredulity is a fallacy in informal logic.

14 zeroes in a row already show up among the first 10 trillion digits. As far as anyone can tell, the sequence of digits appears not merely normal but random-looking, so one should expect to see any given pattern at its natural frequency.
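Which is easy to quantify: if the digits behave like independent uniform random decimal digits, a specific run of k zeros starts at any given position with probability 10^-k, so the first such run is expected roughly 10^k digits in.

```python
# Assuming pi's digits behave like independent uniform random decimal digits,
# a run of k zeros is expected roughly once per 10**k digits.
for k in (9, 12, 14):
    print(f"run of {k} zeros: expect roughly one per {10**k:,} digits")

# The 12-zero run reported above, near position ~1.76e12, sits right about
# where the ~1e12 expectation says it should.
```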