I’ve had this question for a while, and I signed up on here just so I could ask it:
In PGP, why can’t somebody figure out the private key from the public key? Since the two have to be linked so that the private key can decrypt what the public key encrypts, wouldn’t there be a way to derive the private key from the public one?
Sorry if there’s a really obvious answer to this one, but it’s been bugging me for a while, and rudimentary Google searches turn up nothing that answers it beyond flat statements without the ‘why’.
There is a way, theoretically, to determine the private key from the public key; unfortunately for would-be crackers, it usually involves factoring a 1000-digit integer into primes, or some other computationally infeasible task. There are other ways to attack any system of encrypted messaging, but a straight brute-force attack on the keys would probably take any current computer millions of years. I’m sure someone 'round these parts who knows more about cryptography can go into greater detail than I can.
The public key that anyone can use to encode is the very long number. The private key that only those in the know can use to decode is (essentially) the two prime factors of that large number.
IIRC, the time span would only be in the tens of thousands of years. The Feds wanted everyone to use some kind of deliberately weakened variation of the PGP encryption method so that the Feds could read the stuff, but nobody liked that idea, and then someone figured out that the weakened system made the encryption very easy for anyone to crack, so the Feds pretty much shut up about the whole thing. At least for a while.
PGP is an implementation of what’s called asymmetric key encryption. A well-known example (at least one that I could pull up at this late hour) is the RSA algorithm.
The algorithm is dead simple. Simple enough to put on a T-shirt, even. What’s not so simple is that the primes p and q are really large - numbers hundreds if not thousands of digits long. It should take so long for someone to brute-force their way through all the possible combinations of huge primes that the protected information is no longer valuable by the time they succeed.
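To make that concrete, here's a minimal sketch of the RSA steps in Python. The primes here are toy-sized and chosen purely for illustration (real keys use primes hundreds of digits long), and the variable names are just my own labels:

```python
# Toy RSA sketch -- numbers far too small for real security, chosen only
# to show the structure of the algorithm.
from math import gcd

p, q = 61, 53              # the two secret primes (hundreds of digits in practice)
n = p * q                  # public modulus: hard to factor when p and q are huge
phi = (p - 1) * (q - 1)    # Euler's totient of n; computable only if you know p and q

e = 17                     # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public pair (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt

print(n, e, d, ciphertext, recovered)  # recovered comes back as 42
```

The whole secret is that computing d requires phi, and phi requires the factorization of n.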
FWIW, part of what makes this particular algorithm so good is that it’s so simple. There aren’t too many places for it to break, and there are fewer chances of generating bad (trivially easy) keys. It’s also publicly disclosed, so the cryptographic and mathematical communities have been able to bash away at it looking for weak spots.
I remember I asked a similar question about encryption about 2 years ago.
All I wanted to see was the method for encoding and decoding using really small numbers. The public key would be 15 and the private keys would be 3 and 5. It seems that even with these relatively “easy” numbers, the method is still very complex.
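Just as an illustration, the arithmetic with those tiny numbers works out like this (the one wrinkle is that the private key is strictly a number computed from 3 and 5, not the pair itself):

```python
# The tiny example worked through: p = 3, q = 5.
p, q = 3, 5
n = p * q                # 15 -- the public modulus
phi = (p - 1) * (q - 1)  # 8  -- only computable if you know p and q

e = 3                    # public exponent (coprime to 8)
d = 3                    # private exponent, since e*d = 9 = 1 (mod 8)

m = 2                    # the message, which must be smaller than n
c = pow(m, e, n)         # 2^3 mod 15 = 8   (encrypt with the public key)
assert pow(c, d, n) == m # 8^3 mod 15 = 2   (decrypt with the private key)
```

With numbers this small the scheme is trivially breakable, of course, since anyone can factor 15 on sight.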
If p and q are 100 digit numbers then pq is going to be something like a 10,000 digit number. As I recall the difficulty (or time) of finding the prime factorization of a number is proportional to the square root of the number, so it is 10 times as difficult to find the factors of pq as it is to find them of p or q.
But if it takes 1000 years to find the factors p and q of p*q, and that’s 10 times as difficult as finding p or q alone, wouldn’t it take 200 years to make sure that p and q are both prime? Since the numbers must be prime and it takes less than 200 years to make a private key, how does this work?
There’s some funny math going on here. Suppose that p and q have 100 digits; then pq has about 200 digits, not 10000. Factoring p by trial division would require trying all numbers with up to the square root of p: i.e., up to ~50 digits, while factoring pq by trial division would require trying all numbers with up to ~100 digits: i.e., about 10[sup]50[/sup] times as many numbers. (There are faster factoring methods, but the results are qualitatively similar.)
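Just to make the digit-counting concrete, here's a quick check in Python. The particular 100-digit numbers are arbitrary stand-ins; whether they happen to be prime doesn't matter for counting digits:

```python
from math import isqrt

# Arbitrary 100-digit stand-ins for p and q.
p = 10**99 + 289
q = 10**99 + 787

def digits(x):
    return len(str(x))

print(digits(p), digits(isqrt(p)))          # 100 and 50: trial-dividing p means ~10^50 candidates
print(digits(p * q), digits(isqrt(p * q)))  # ~200 and ~100: trial-dividing pq means ~10^100 candidates
```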
Of course, even trying numbers up to 10[sup]50[/sup] (to factor p) is prohibitive. But there are primality-testing algorithms which can decide whether a given number is prime, without finding the factors, in a much shorter time. The most famous simple primality test is the Fermat test: if p is prime and k is not a multiple of p, then k[sup]p-1[/sup] mod p = 1. This modular exponentiation can be computed efficiently, so if the result ever differs from 1 (for any k you choose) then p is not prime. If the result is always 1, p is likely to be prime; however, some composite numbers (“pseudoprimes”) pass the test for some bases, and a few, the Carmichael numbers, pass it for every base coprime to them.
As far as I know, this sort of test is what’s usually used in practice, with the possibility of getting a pseudoprime just ignored.
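For anyone who wants to play with it, here's a bare-bones version of the Fermat test described above. The trial count is arbitrary, and the last line shows the smallest Carmichael number, 561, fooling a coprime base:

```python
import random

def fermat_test(n, trials=20):
    """Return False if n is definitely composite, True if it is probably prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        k = random.randrange(2, n - 1)   # pick a random base k
        if pow(k, n - 1, n) != 1:        # Fermat: k^(n-1) mod n must be 1 if n is prime
            return False                 # k is a witness that n is composite
    return True                          # probably prime (or a pseudoprime)

print(fermat_test(97))   # True  -- 97 is prime
print(fermat_test(91))   # False (almost surely) -- 91 = 7 * 13 gets caught quickly
print(pow(2, 560, 561))  # 1, yet 561 = 3 * 11 * 17: a Carmichael number passes
                         # the test for every base coprime to it
```

In practice a stronger variant (Miller-Rabin) is used, which Carmichael numbers do not fool, but the basic idea is the same.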
Am I the only one who always thought these numbers are meaningless? From time to time you’ll hear/read something that says that such-and-such a password is so strong that it would take the world’s fastest computer 10,000 years to figure it out by brute force. I always want to stand up and reply that that would only be the case if the password was the LAST one tried.
Fine. On average, the brute-search will succeed in half the maximum time, so all you have to do is book the world’s fastest computer for a mere 5,000 years.
I’m not sure about encryption, but I know that when discussing hashing algorithms (which are more commonly used for password authentication), they are referring to the number of attempts it would take before you had a roughly 50% chance overall of getting the correct result. For example, for the SHA-1 algorithm you could determine a collision definitively somewhere on the way to 2^160 attempts, but because of the birthday paradox you would have a better than half chance of finding one after “only” 2^80 attempts.
Of course, you could hit a collision before that mark, but the probability falls off very sharply the earlier you go, so half is a pretty good benchmark.
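Here's a back-of-the-envelope sketch of where that “better than half” figure comes from, using the standard birthday-paradox approximation (the 2^160 and 2^80 figures are just the SHA-1 numbers from the post above):

```python
from math import exp

def collision_probability(attempts, output_space):
    """Birthday-paradox approximation: P is roughly 1 - exp(-k^2 / (2N))."""
    return 1 - exp(-attempts**2 / (2 * output_space))

N = 2**160                                # size of the SHA-1 output space
print(collision_probability(2**80, N))    # ~0.39 -- already a decent chance
print(collision_probability(2**81, N))    # ~0.86 -- well past even odds
```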
I’ve explained it before (with some success) like this: if someone gives you the number 35, and tells you that it’s the product of two primes, it’s pretty easy to find them, right? How about 91? It’s 7x13, so it’s not too hard, but you don’t know it immediately. How about 391? As the numbers get bigger, it gets harder and harder, because essentially the only way to factor it is to test all the possibilities. With a number hundreds of digits long, it’s so tedious that even a powerful computer can’t do it within its MTBF.
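A literal rendition of that “test all the possibilities” approach, just to show how the work grows with the size of the number (391 falls instantly; a 200-digit number would not):

```python
from math import isqrt

def smallest_prime_factor(n):
    """Trial division: try every candidate up to sqrt(n)."""
    for candidate in range(2, isqrt(n) + 1):
        if n % candidate == 0:
            return candidate
    return n  # no divisor found, so n itself is prime

for n in (35, 91, 391):
    p = smallest_prime_factor(n)
    print(n, "=", p, "x", n // p)   # 35 = 5 x 7, 91 = 7 x 13, 391 = 17 x 23
```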
And another point - if a mathematician figured out a shortcut way of factoring large numbers, all our modern public key cryptography schemes would be useless.
As given in the link I supplied above, the Fermat test for primality runs in time roughly proportional to log n per base, thanks to fast modular exponentiation. It should be noted that the test is probabilistic; naive deterministic trial division, by contrast, takes on the order of sqrt(n) steps.
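The reason the per-base cost is only about log n multiplications is the square-and-multiply trick behind modular exponentiation. A bare-bones version (Python's built-in pow(k, e, n) does the same thing, only faster) looks roughly like this:

```python
def mod_pow(base, exponent, modulus):
    """Square-and-multiply: ~log2(exponent) squarings instead of 'exponent' multiplications."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                 # low bit set: fold the current power into the result
            result = (result * base) % modulus
        base = (base * base) % modulus   # square for the next bit
        exponent >>= 1
    return result

print(mod_pow(2, 560, 561))          # 1, matching pow(2, 560, 561)
print(mod_pow(7, 10**6, 10**9 + 7))  # done in ~20 squarings, not a million multiplications
```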