# What does anyone do with huge prime numbers?

Some researchers have discovered a prime number that is now the largest known, at 13 million digits in length. Cite. What does one do with a prime number of this size? Does it have real, practical uses in any field? Is it just for bragging rights?

Cryptography. For reasons explained in this post: “Why do cryptography experts get excited about prime numbers?” on MadSci.

I may be remembering this wrong, but aren’t they also used for random number generators?

As I understand, a huge prime number is only good for cryptography when you discover it and DON’T announce it - as in, ‘we know that this number is prime and nobody else does.’ That’s when it can be used as the basis of a crypto code.

This one apparently is just prime number bragging rights, from the article I saw. Of course, if they could discover one that big and announce it, both for the bragging rights and to win the prize for being the first to cross the ‘million digit line’, then they could certainly discover others for crypto.

And I suspect that there’s a happy medium zone of good crypto primes… too low and anybody could figure them out, too high and the keys are inconveniently long and encrypting with them takes too much time.

Aren’t the algorithms also used to test computer speeds?

That’s not true. You’re probably thinking of the RSA algorithm, in which your public key is the product of two large primes. There’s no issue with anyone knowing that those numbers are in fact primes, but you definitely don’t want anybody you don’t trust to know which primes you’ve chosen. However, primes this large are way too big to use in cryptography. The standard is to use 128-bit primes (roughly 39 decimal digits).
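To make the RSA idea concrete, here’s a toy sketch in Python. The primes 61 and 53 are deliberately tiny illustrative values, nothing like real key sizes; the point is only to show why the primes themselves must stay secret:

```python
# Toy RSA sketch with tiny primes. Real keys use primes hundreds of
# digits long; this only illustrates why p and q must stay secret.
p, q = 61, 53            # the two secret primes
n = p * q                # public modulus (3233) -- safe to publish
phi = (p - 1) * (q - 1)  # easy to compute only if you know p and q
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: e*d == 1 (mod phi)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
```

Everything here is cheap if you know p and q; the attacker’s job of recovering them from the public n is what’s meant to be hard.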

The primary use of very large primes is to offer a benchmark for supercomputers. If verifying that p is prime takes 72 hours on the current supercomputer, and your prototype can do it in 60, then that gives you something to crow about.

ultrafilter, that was my gut feeling during the discussion I was having that prompted me to ask the question here. It was my suspicion that primes with so many digits weren’t really all that useful for everyday cryptology, but I had no evidence of that fact. Thanks! Any suggestions as to where I can learn more?

How do they verify something is a prime number? I’m picturing 72 hours of “Is X divisible by 2? Nope. Is X divisible by 3? Nope. Is X divisible by 5? Nope. Is X divisible by 7? Nope. Is X divisible by 11? Nope. Is X divisible by 13? Nope. Is X divisible by 17? Nope. Is X divisible by 19? Nope. Is X divisible by 23? Nope. Is X divisible by 29? Nope. Is X divisible by 31? Nope. Is X divisible by 37? Nope. Is X divisible by 41? Nope. Is X divisible by 43? Nope . . .”

The issue with cryptography is that both you and the guy trying to break it are doing a lot of math. But since you know your original numbers and he is guessing randomly, your work is much less. So there is a constantly advancing sweet spot where you can solve the problem and get the answer in milliseconds while he is looking at thousands of years (number pulled out of my ass; I haven’t calculated it in years). On the other hand, he could randomly guess it on the first try and get your secret info, but the odds are like winning the lottery every day for a year (again, out of my ass). Still, the possibility is always there.

You could use these ridiculously large primes to push the time it takes him into arbitrary magnitudes of unlikelihood within any usable time frame, but then you yourself have to operate on very large numbers, which might take seconds to calculate even knowing the key. And seconds to finish is unacceptable in computer network communication (unless it is some really ultra-secret piece of info).

But back to the constantly advancing bit. As computers get faster, more or less by Moore’s law, the sweet spot has to advance as well, or else the chances of a lucky guess by the bad guy get better.

No, that’s pretty much the worst algorithm possible. Modern primality tests are considerably more sophisticated.
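As one example of a more sophisticated modern approach, here is a minimal Python sketch of the Miller–Rabin probabilistic test — a standard technique, though not necessarily the one used for this particular record:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    A composite n passes a single round with probability at most 1/4,
    so 20 rounds give overwhelming (but not absolute) confidence."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness that n is composite
    return True
```

Note the contrast with trial division: each round is a single modular exponentiation, so the cost grows with the number of digits rather than with the size of the number itself.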

That’s pretty much it. But part of the catch is that you aren’t going to hand-code that sequence of divisions. You’re going to want some kind of process that loops through all those smaller prime numbers. So you will either need to discover all the smaller prime numbers on your way up the new monster, or you will need to have previously computed all of them and stored the results in a handy file before you begin your assault on the new monster. So when you read that someone discovered a new, largest prime, you can pretty much bet that they (re)discovered a whole bunch of smaller primes along the way.

Now, there are some interesting tricks along the way. One of the most basic is to realize that you don’t need to check for divisibility by all the primes smaller than your monster number. You only need to check the primes that are no larger than the square root of your monster. (Because, if monster number M is divisible by any prime p larger than the square root of M, then M = p*c where c is smaller than the square root of M, and c is either prime or is divisible by a still-smaller prime. Either way, you will have already eliminated that possibility if you make it all the way to the square root.) That alone speeds up the calculation by a tremendous amount.
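The square-root cutoff described above fits in a few lines of Python. Note that this version divides by every odd number rather than only by primes — wasteful, but still correct, since any composite divisor would imply a smaller prime factor that was already tried, so no precomputed list of primes is needed:

```python
def is_prime_trial(n):
    """Trial division, checking divisors only up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:        # stop at the square root of n
        if n % d == 0:
            return False     # found a factor, so n is composite
        d += 2               # even divisors already ruled out
    return True
```

Even with this trick, trial division is hopeless for numbers with millions of digits; it’s shown here only to illustrate the cutoff.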

I don’t disagree with this assessment, but as a conceptual matter of explaining how tests are done, that’s a tad harsh. There are really two things at work here. One is finding a good candidate number and the second is testing that number to see if it is indeed prime. (And it’s often not clear in announcements like the one referred to in the OP whether both stages are being described.)

There are some very sophisticated approaches to finding good candidates, and many of the techniques described on the page ultrafilter cites are aimed at this problem. The final proof that a number is indeed prime can be sped up by a number of techniques, but, in the end, it still comes down to a bunch of division checks (or some mathematical equivalent). But the more sophisticated techniques can reduce the number of such checks dramatically.

Here is another way to speed up the search:

I don’t think 128-bit primes are much use. Modern algorithms can factor 100-decimal-digit numbers with ease (I think they mostly use a process based, in some way I don’t understand, on elliptic curves). They want more like 1000-bit primes or even larger. But a prime with 13,000,000 digits would be of no use for that purpose.

Aside from using very large primes to rate computer speeds, they are used to test the correctness of the arithmetic processors in new chips.

Not really; that’s just the result of doing the divisibility checks by 2 and 3.

You’re probably right. I know a bit about cryptography, but am not an expert.

Out of all the possible algorithms that’s a pretty reasonable one. Not the best, by far, but close to being the worst case of many probabilistic methods.

Here’s a much worse one, implementation outlined in Perl for the performance nightmare bonus:

``````

sub isprime($)
{
    my $candidate = int(shift);
    my $considered = 0;
    my %checked;

    # Try divisors in random order until every value from 2 up to
    # $candidate - 2 has been seen, remembering each remainder.
    while($considered < ($candidate - 3))
    {
        my $divisor = int(rand($candidate - 3)) + 2;
        next if(defined($checked{$divisor}));
        $checked{$divisor} = $candidate % $divisor;
        $considered++;
    }

    # Only after checking everything do we go back looking for a factor.
    for my $divisor (keys %checked)
    {
        return "not a prime (divisible by $divisor)" if($checked{$divisor} == 0);
    }

    return "prime";
}

``````

Don’t call anything “worst algorithm” – there’s always going to be worse.

If you modify that a little:

``````
sub isprime($)
{
    my $candidate = int(shift);
    my $considered = 0;
    my %checked;

    while($considered < ($candidate - 3))
    {
        my $divisor = int(rand($candidate - 3)) + 2;
        next if(defined($checked{$divisor}));
        $checked{$divisor} = $candidate % $divisor;
        $considered++;
    }

    return "prime";
}

``````

You have the insurance companies’ algorithm for certifying mortgage risks.

I came across some more info about this particular discovery. It was a Mersenne prime - a number of the form 2^N - 1. These numbers have been known for some time to have a much higher probability of being prime than do randomly selected integers.

And this discovery was not, as we had surmised, done on a supercomputer, but by exploiting unused CPU cycles on a large number of ordinary machines, much like the well-known SETI@home project. So rather than showing off the computational power of a single machine, it really speaks to the power of large-scale distributed computing.
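For Mersenne candidates specifically, the GIMPS project relies on the Lucas–Lehmer test, which is enormously faster for numbers of the form 2^p − 1 than any general-purpose method. A minimal Python sketch (the real software uses heavily optimized FFT arithmetic for the squarings):

```python
def lucas_lehmer(p):
    """Return True iff 2**p - 1 is prime, for an odd prime exponent p.

    Lucas-Lehmer recurrence: s_0 = 4, s_k = s_{k-1}**2 - 2;
    2**p - 1 is prime exactly when s_{p-2} == 0 (mod 2**p - 1)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # one squaring per step, p - 2 steps total
    return s == 0
```

Each step is a single squaring modulo 2^p − 1, so even a 13-million-digit candidate needs only p − 2 iterations — which is why this test, and not trial division, is what record hunts actually run.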

Believe it or not, this is how mathematicians get laid.