I think this was one of the number theory proofs in Euclid, so it’s been known for millennia.
By way of illustration, say you have 2^n - 1 for some n that isn’t prime. In base 2, that number will be written as n 1s in a row. For instance, 2^15 - 1 = 111111111111111. But since 15 is composite, I can group those 1s into groups of equal size: say, 111 111 111 111 111. Which means that 2^15 - 1 = 1001001001001 × 111 (in binary).
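You can see that grouping concretely with a few lines of Python (a sketch; none of this code is from the post itself):

```python
n, a = 15, 3                # 15 = 3 * 5, so 2^3 - 1 should divide 2^15 - 1
m = (1 << n) - 1            # 2^15 - 1 = 32767: fifteen 1s in binary
d = (1 << a) - 1            # 2^3 - 1 = 7: "111" in binary
q, r = divmod(m, d)
print(bin(m))               # 0b111111111111111
print(r)                    # 0, so 7 really does divide 32767
print(bin(q))               # 0b1001001001001, the "grouping" cofactor
```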
Here is something to ponder: suppose I write down a number. You can check whether it is prime or composite in polynomial time (in terms of the number of digits). But if you want to factor the number [a composite one]… well, how are you going to do it fast? It does not seem so easy. That is the point.
Actually, it’s not at all strange to skip primes. Euclid’s method for proving that there is no largest prime implies a method of finding new primes that may very well skip primes.
Euclid’s method - take a list of primes (say 2, 3, 5), multiply them together (you get 30) and add 1. The result, 31, is guaranteed to be either a prime (it is!) or the product of primes not on your previous list, but it’s not guaranteed to be the next largest prime at all (in this case, you skipped 7, 11, 13, 17, 19, 23 and 29)
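Euclid’s construction is easy to check by machine (a sketch; the `smallest_factor` helper is mine, not from the post):

```python
def smallest_factor(n):
    """Return the smallest prime factor of n > 1 by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(smallest_factor(2 * 3 * 5 + 1))           # 31: the product + 1 is prime here
print(smallest_factor(2 * 3 * 5 * 7 * 11 * 13 + 1))
# 59: here the product + 1 (30031 = 59 * 509) is composite,
# but its prime factors are still not on the starting list
```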
The above discussions hint at, but do not say explicitly, this: Yes, if p is composite in 2^p-1, then 2^p-1 must be composite. BUT if p is prime, then 2^p-1 MIGHT or MIGHT NOT be prime. In that case, it remains a candidate but must be further tested.
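A concrete instance of the “might or might not”: p = 11 is prime, yet 2^11 - 1 = 2047 = 23 × 89 is composite, while p = 13 gives the Mersenne prime 8191. A quick sketch:

```python
for p in (11, 13):
    m = (1 << p) - 1
    # naive primality check by trial division (fine at this tiny size)
    is_prime = all(m % d for d in range(2, int(m ** 0.5) + 1))
    print(p, m, is_prime)
# 11 2047 False   (2047 = 23 * 89)
# 13 8191 True
```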
50-some years ago, a friend challenged me to prove this. It took me about 90 seconds to come up with exactly this proof.
I recently discovered the largest prime number (no, it’s not Mersenne) which, unfortunately, the margin of this post is too narrow to contain.
That’s not quite right; you need “only” one of the prime factors. The other would be found by dividing your number (which is the product of the two prime factors) by the first one you found. Fact remains that the problem is, as far as is known, way, way beyond the capacity of any known non-quantum computer. And you would need a quantum computer orders of magnitude larger than any known or contemplated one to do it.
Curiously, it is relatively easy to find large numbers, say of 512 bits, that are virtually certain to be prime using only moderately large computers. Actual certainty is much harder but for the encryption questions, pseudoprimes are likely just as good. While they would be easier to factor, no one would try.
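The usual tool for this is the Miller–Rabin test. Here is a minimal sketch (my code, not from the thread) that finds a 512-bit probable prime using only Python’s standard library; with 40 rounds, the chance of a composite slipping through is at most 4^-40.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin: False means definitely composite; True means
    prime with error probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def random_probable_prime(bits=512):
    while True:
        # force the top bit (so it really has `bits` bits) and oddness
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

p = random_probable_prime(512)
print(p.bit_length())  # 512
```

This runs in well under a second on ordinary hardware, which is the point being made above: finding big probable primes is cheap.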
There are very special methods for finding if 2^p-1 is prime. But they are no help in finding any smaller primes. So the largest known prime is always a Mersenne prime and probably always will be.
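The special method referred to is the Lucas–Lehmer test, which works only for numbers of the form 2^p - 1 with p an odd prime. A minimal sketch (my code):

```python
def lucas_lehmer(p):
    """True iff 2^p - 1 is prime, for odd prime p (Lucas-Lehmer test)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19] -> exactly the Mersenne prime exponents in that range
```

Each step is just a squaring mod 2^p - 1, which is why this test scales to exponents in the tens of millions while general-purpose tests cannot.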
Ah, of course.
The “virtually certain” part is a bit misleading, there. It’s not hard to use these tests to get such a high degree of certainty that it’s more likely that every mathematician in history who’s ever examined that theorem was hallucinating than it is that the theorem is right but that the “pseudoprime” isn’t actually prime. And once you get to the point where you’re considering the possibility that every mathematician in history was hallucinating, well, there’s no defense against that possibility at all.
Of course, there are also “absolute” primality tests (that is, absolute if the mathematicians weren’t all hallucinating) that can be done in an achievable amount of time. But even though the amount of time is achievable, the “almost certain” tests are so much more efficient that nobody ever bothers.
Agreed. Non-prime pseudoprimes are as rare as you say.
2^6 = 64, so (2^6) - 1 = 63 = 7 × 9, and 6 is composite.
2^7 = 128, and (2^7) - 1 = 127, which is prime, like 7.
To test for prime factors, you only need to check (prime) numbers up to the square root of the number being checked. Obviously, when you get into the realm of tens of billions of such numbers, this is less feasible.
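The square-root cutoff works because any factor above sqrt(n) pairs with a cofactor below it. As a sketch:

```python
def is_prime_trial(n):
    """Trial division: only divisors up to sqrt(n) need checking,
    since a factor above sqrt(n) pairs with one below it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime_trial(127), is_prime_trial(2047))  # True False
```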
I tried some random primes of 512, 1024, and 2048 bits on a computer, and the “absolute” test takes about 1000 times longer than the pseudo-primality test.
Out of curiosity, what were those times? Are we talking a millisecond for the quick tests and a second for the slow ones? A microsecond and a millisecond? A second and 12 minutes?
I am so sorry: I did not save the times, but it is trivial to re-run the code. For reference, this is on an Intel i7 and using the Sage computer algebra system (so not necessarily the fastest possible code known to man, but it will tell us the order of magnitude).
(I did some averaging here; not all the digits are significant, etc.)
| # of bits | quick test | slow test |
|---|---|---|
| 512 | 313 μs | 289 ms |
| 1024 | 1.43 ms | 1.7 s |
| 2048 | 8.16 ms | 11.5 s |
The “slow test” is a Pocklington test in this case, I think.
Such a list would be impossibly, absurdly large.
This old illustration (the wheat-and-chessboard problem) looks at only 64 bits, and it already reaches 1,600 times the global production of wheat.
@Chronos answered this in Post #4 (though there may have been a typo?) — there are about 1.9 × 10^151 512-bit primes, so it’s even worse than a mere chessboard.
I’m not too worried about the factor of 2 difference between our answers, but my figure was just \frac{2^{512}}{\ln(2^{512})}.
Ah, I see my mistake. I was calculating the number of primes with up to 512 digits, not the number with exactly 512 digits. My base-ten intuition led me astray: in base ten, those would be close to the same, but in base two, half of the numbers with “up to 512 digits” will have fewer digits than that.
@DPRK 's answer was right, not mine (though, as I said, a factor of 2 doesn’t matter much here).
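The arithmetic behind both figures follows from the prime number theorem approximation π(x) ≈ x / ln x (a sketch of the calculation, using that approximation):

```python
from math import log

def approx_prime_count(x):
    """Prime number theorem estimate: pi(x) ~ x / ln(x)."""
    return x / log(x)

up_to_512_bits = approx_prime_count(2 ** 512)                      # ~3.8e151
exactly_512_bits = up_to_512_bits - approx_prime_count(2 ** 511)   # ~1.9e151
print(f"{up_to_512_bits:.2e} {exactly_512_bits:.2e}")
```

The two estimates differ by almost exactly a factor of 2, matching the discussion above.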