No, Isabelle, I am not. Even if I had a list of all the prime numbers up to 1024 bits, it would still be computationally infeasible to try all those trillions (probably much, much more) of prime numbers. The security does not come from a lack of good algorithms for finding primes. It comes from the fact that there are so many possible keys.
If one were concerned solely with the “keys,” there would be no reason to restrict those keys to primes. There are far more non-prime numbers, ergo more possible “keys” among the non-prime population.
Security is not solely a function of the number of possible keys. It also depends, in part, on the predictability or deducibility of those keys. Hence, the ability to predict the keys (or not) is in direct correlation with the actual “security.”
No, there are actually as many primes as there are non-primes. Did you mean “below a certain integer”?
Furthermore, even if the keys are predictable, determining a private key from the public key may be very hard. And for some algorithms, primes have desirable properties that make them better keys than composites.
The number of primes below 2[sup]1024[/sup] is approximately 2[sup]1024[/sup]/6.9. This is a really huge number. Makes zillions look penny-ante. I consider 2[sup]80[/sup] to be infinity “for all practical purposes”. No way anyone ever will be able to generate that many primes.
One of the key ideas of RSA encryption (used in Pretty Good Privacy) is that multiplying two numbers together is easy, but factoring a number is somewhat hard. Using primes means they can exploit some properties of the Euler Totient Function. (I love being able to mention ETF.)
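The asymmetry and the totient property can be illustrated with a small sketch (my own toy code; the primes 61 and 53 are illustrative assumptions, nothing like real key sizes):

```python
from math import gcd

def phi(n):
    """Euler's totient: count of 1 <= k <= n coprime to n (brute force)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p, q = 61, 53          # two small primes, for illustration only
n = p * q              # multiplying is trivial: 3233

# For distinct primes p and q, phi(p*q) = (p-1)*(q-1) --
# the property of the Euler Totient Function that RSA exploits.
assert phi(n) == (p - 1) * (q - 1)   # 3120

def factor(n):
    """Recover the factors by trial division -- the hard direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

assert factor(n) == (53, 61)
```

At 1024-bit scale the multiplication is still instant, while trial division (and every known factoring algorithm) becomes infeasible.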
Other encryption schemes have been developed based upon the same asymmetry of calculations as RSA: quadratic residues (taking square roots in a modular field), discrete logarithms, etc. But factoring has a much longer history of resisting efforts to “crack” it, so more people trust it.
Note that RSA keys of modest length (e.g., the famous 129-digit RSA-129 challenge) have been broken using free cycle time on computers across the Internet, à la SETI. The technique used had nothing to do with brute-force tests of divisibility. It involved finding numbers with certain properties in much smaller fields.
Really? This strikes me as intuitive. Of course, some threads of late have disabused me of the usefulness of intuition in mathematics. Is any progress being made on this particular theorem? Does anyone have any ideas, even?
That has been an open problem for a long time, and it has proved extremely difficult. Call a number a P[sub]2[/sub] if it is either prime or the product of two (not necessarily distinct) primes. Chen has proved that there are infinitely many primes p such that p + 2 is a P[sub]2[/sub]. That’s the best result so far.
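Chen’s condition is easy to check for small numbers; here is a brute-force sketch (my own illustrative code) listing the primes p ≤ 100 for which p + 2 is a P[sub]2[/sub]:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_P2(n):
    """Prime, or a product of two (not necessarily distinct) primes."""
    if is_prime(n):
        return True
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            # d is the smallest factor (hence prime); n is a P2
            # exactly when the cofactor is also prime.
            return is_prime(n // d)
    return False

# Primes p <= 100 with p + 2 a P2 (the "Chen primes" in this range):
chen = [p for p in range(2, 101) if is_prime(p) and is_P2(p + 2)]
print(chen)
```

Note how few primes drop out of the list: below 100, only 43, 61, 73, 79, and 97 fail the condition.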
And it makes it tougher when we use English.
I think jabba is wrong; he has got it confused with a result related to the Goldbach conjecture (also due to Chen): every sufficiently large even number is the sum of a prime and a number that is either prime or a product of two primes.
The number of primes below n = 2^1024 is roughly n/710, which is something like 2^1014.
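That estimate comes straight from the prime number theorem, π(n) ≈ n/ln n. A small sketch (illustrative code) checks the arithmetic for n = 2^1024 and compares the estimate to an exact count at a size we can actually sieve:

```python
import math

# For n = 2**1024, ln(n) = 1024 * ln(2) ~ 709.78,
# so pi(n) is roughly n / 710 -- about 2**1014.
log_n = 1024 * math.log(2)
assert 709 < log_n < 711

def count_primes(limit):
    """Exact prime count via the sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

exact = count_primes(10**6)            # 78498 primes below a million
estimate = 10**6 / math.log(10**6)     # ~72382 -- within about 8%
```

The n/ln n estimate always undercounts a little, but the relative error shrinks as n grows.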
The primes are not the keys. A key is built from two (large) primes, and their product is made public. Decoding can be done only if you know the individual factors (at least, you hope that is the case), and there is no known feasible way of finding the two factors knowing only the product. The entire theory can be taught in about five minutes to anyone who knows the basic facts about modular arithmetic, which is why it was so silly to attempt to restrict export of the method. The essential facts were known to Euler 300 years ago.
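Here is that five-minute theory as a toy round trip (tiny illustrative primes, hopelessly insecure; real moduli run to hundreds of digits):

```python
# A toy RSA round trip. The numbers are the classic textbook example,
# chosen only for illustration.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120: computable only if you know p and q
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi); 2753

message = 65
cipher = pow(message, e, n)   # encrypt with the public key (n, e)
plain = pow(cipher, d, n)     # decrypt with the private key d
assert plain == message
```

Euler’s theorem guarantees the round trip: m^(e·d) ≡ m (mod n) whenever e·d ≡ 1 (mod φ(n)). An attacker who could factor n would recover φ(n) and hence d. (The three-argument `pow` with exponent −1, used for the modular inverse, needs Python 3.8+.)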
Of course, someone could discover tomorrow (or could have discovered 20 years ago and kept secret) either a fast factorization algorithm or a way of decoding without knowing the factors. If I made such a discovery, I might try selling it to the highest bidder.
OT, but why is generating large primes so hard? Take the sequence of every known prime starting from 2, with no primes missing. Multiply 2, 3, 5, … P together and add one… voilà, you’ve got another prime.
I’m aware that not all primes “in the gaps” between large primes have been found, but this kind of sequential multiplication gets large very quickly… surely multiplying the entire known unbroken sequence will produce a result larger than any prime we’ve found so far? If not, then it should only be a matter of expanding this sequence enough (which can’t be that hard…) to find the next largest prime.
That number could be nonprime itself though. Then you’re pretty much back to square one.
dre2xl, that method doesn’t work, unfortunately:
2 · 3 · 5 · 7 · 11 · 13 + 1 = 30031 = 59 · 509, not prime.
That number could be nonprime itself though. Then you’re pretty much back to square one.
dre2xl, that method doesn’t work, unfortunately:
2 · 3 · 5 · 7 · 11 · 13 + 1 = 30031 = 59 · 509, not prime.
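The counterexample above is easy to verify with a few lines (illustrative sketch): the primorial-plus-one number is composite, and its prime factors simply aren’t on the original list:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 2 * 3 * 5 * 7 * 11 * 13 + 1   # primorial of 13, plus one
assert n == 30031
assert not is_prime(n)            # composite, despite the construction
assert n == 59 * 509              # both factors prime, neither on the list
assert is_prime(59) and is_prime(509)
```

This is exactly the subtlety in Euclid’s argument: the product-plus-one is guaranteed to have a prime factor not on the list, but it need not be prime itself.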
Beauty, Cabbage simulposted simulpostings. I don’t think I’ve ever seen that before.
Yeah, that’s a first for me, too.
Yes, there is a pattern. Ask Mangetout for details
Hari Seldon:
That’s fighting talk
The same sieve methods were used for both results, and one other. It is conjectured that there are infinitely many primes of the form n[sup]2[/sup] + 1. Using sieve methods, Chen has proved that there are infinitely many P[sub]2[/sub] of this form.
Ok, I stand corrected. I was unaware of this result. Sorry.
I’ve seen this topic somewhere before.
Never knew there was so much interest in primes.
Ah, thanks… I was thinking of Euclid’s proof that there are infinitely many primes. Thanks.