I’m not under any illusions that anything except a quantum event can create a truly random key, no matter how many pseudo-random number generators are nested one inside another. But if you use a seed with one algorithm to create a seed for a different algorithm, and do this 100 times, each time with a different formula for producing the seed number, doesn’t this make for a sort of “100-bit encryption” of the seed number itself?
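To make the setup concrete, here’s a toy sketch in Python (the LCG constants are just well-known textbook values, and the whole thing is purely illustrative, not a real design). The catch it shows: no matter how many stages you chain, the final result is a deterministic function of the very first seed, so an attacker only ever has to guess that one number.

def lcg(seed, a, c, m=2**32):
    # One step of a linear congruential generator.
    return (a * seed + c) % m

def chained_key(initial_seed):
    # Each stage uses a "different formula" (different constants), but
    # the output is still fully determined by initial_seed alone.
    seed = initial_seed
    for a, c in [(1664525, 1013904223), (22695477, 1), (1103515245, 12345)]:
        seed = lcg(seed, a, c)
    return seed

# The attacker ignores the layers and brute-forces the one input that
# drives them all (small seed here so the demo finishes quickly):
target = chained_key(123456)
recovered = next(s for s in range(2**32) if chained_key(s) == target)
print(recovered)  # recovers 123456 (barring a lucky collision):
                  # the chaining added complexity, not entropy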
On another note: given your best guess at the processing power of the NSA’s fastest computers, how long would it take to try all combinations of a 128-bit key?
I seem to recall hearing of a university project where they got people from all over the world to each run part of this (or a similar) problem on their PCs, and they did in fact manage to do it in a not-unreasonable amount of time.
I have no idea what the NSA’s current computing capabilities are. However, the NSA people, while very, very good, are human, and their computer technology, while doubtless cutting-edge, must be broadly similar to the very latest commercial gear.
In most cases, the analysts try to find weaknesses in the cipher algorithm that eliminate significant groups of possible keys from the search, thus reducing the search to manageable dimensions.
I seem to recall that some researchers have claimed to do this for the DES standard recently. Note that this does not mean that DES is no longer adequately secure for most purposes – it just isn’t quite as secure as previously asserted.
Superencipherment (encrypting already-encrypted text) can be shown to be mathematically equivalent to enciphering the original text with a single (but different) cipher (this is sort of like adding vectors – the sum of two vectors can always be represented by a single vector).
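A toy illustration of that composition, using Caesar shifts as the simplest possible cipher:

def shift(text, k):
    # Caesar-shift uppercase letters by k positions.
    return "".join(chr((ord(ch) - ord("A") + k) % 26 + ord("A")) for ch in text)

msg = "ATTACKATDAWN"
# Enciphering twice is identical to enciphering once with a combined key:
assert shift(shift(msg, 3), 11) == shift(msg, (3 + 11) % 26)

Keys 3 and then 11 collapse to the single key 14, so the second pass bought nothing. Real ciphers are far less tidy than shifts, but whenever a cipher family composes like this, superenciphering within it can never add strength.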
Unless the second encipherment is chosen with extreme care, superencipherment can actually weaken the system. One of the keys that allowed the Allies to break the Enigma traffic (the source of the Ultra intelligence) was the fact that part of the signal was run through the same matrix a second time. This introduced a pattern that could be recognized.
Once again, any crypto experts are welcome to correct or disagree with me, but much as I enjoyed the book, I have to call BS on this. I can’t see how this slight deviation from perfect randomness provides enough information to crack a OTP. Remember, unlike many ciphers, knowing what one letter of the plaintext is gives absolutely zero help in cracking another letter of the ciphertext. If the OTP letters repeated, or otherwise formed a predictable pattern, then it’s toast, but I don’t see the described deviations from randomness as sufficient.
Of course, in all of his books, Stephenson takes an interesting idea out to its theoretical limit, no matter what real-world impossibilities stand in its way – and he actually does so less in Cryptonomicon than in his other books.
And for generating (almost) random numbers from electronics, don’t they just amplify an intentionally noisy circuit? Of course, the noise is affected by the EM environment (e.g. nearby powerful radio stations, the hair dryer in the next room, etc.), but as long as you take minimal steps in your algorithm to cancel the predictable 60 Hz power hum, shouldn’t you get something at a useful level of randomness?
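For what it’s worth, one textbook whitening step (assuming the raw bits are independent but biased – it does nothing for correlation like the 60 Hz hum, which still needs filtering upstream) is von Neumann debiasing, which throws away roughly three quarters of the stream to remove the bias:

import random

def von_neumann(raw_bits):
    # Emit unbiased bits from a biased (but independent) bit stream:
    # read pairs, output 01 -> 0 and 10 -> 1, discard 00 and 11.
    it = iter(raw_bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

# Simulate a badly biased source (70% ones) standing in for the noisy circuit:
biased = (1 if random.random() < 0.7 else 0 for _ in range(100_000))
clean = list(von_neumann(biased))
print(sum(clean) / len(clean))  # close to 0.5 despite the skewed input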
distributed.net has a system that uses a client running on thousands of computers to work on the same problem. They teamed up with a specially designed DES-cracking machine (the EFF’s Deep Crack) to brute-force a 56-bit DES key in about 22 hours. I’ll leave the math as an exercise for the reader (because I’m rushed, not pedantic – though a rough sketch follows below), but this is still a long way from being able to brute-force a 128-bit key in reasonable time. It was mostly a demonstration that DES was inadequate, arguing for the AES replacement.
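For the rushed-but-curious, the sketch of that math, naively assuming the same 56-bit rate could be sustained against a 128-bit keyspace:

keys_per_hour = 2**56 / 22            # the distributed.net + Deep Crack rate
hours = 2**128 / keys_per_hour        # 2^72 times as many keys to try
years = hours / (24 * 365.25)
print(f"{years:.1e} years")           # ~1.2e19 years, about a billion
                                      # times the age of the universe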
It’s been a long time since I read anything on cryptography, so I need a bit of a refresher. Since no one is actually typing in a 128-letter/number key somewhere, why not just make a 1280-bit key?
Key length is generally chosen to balance usability with security. For symmetric algorithms like DES and AES, 128-bit keys are very secure and, in fact, people are typing them in. Of course, if you’re talking about something like an SSL transaction, then the symmetric key is being generated automatically for a session, but for things like encrypted file systems, data vaults, etc. a person will type in their password, and this is used to create the symmetric key (sometimes directly, sometimes by hashing, etc.). Note that you only need ~20 typed characters to get 128 bits, though more is better if you’re using a hash.
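As a sketch of the “sometimes by hashing” route, here’s a password-to-key derivation using PBKDF2 from Python’s standard library (the salt handling and iteration count here are illustrative, not a recommendation):

import hashlib, os

passphrase = b"twenty-plus characters typed by the user"
salt = os.urandom(16)  # stored alongside the ciphertext, not secret
# Derive 16 bytes = a 128-bit symmetric key from the typed passphrase:
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000, dklen=16)
print(key.hex())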
Public-key systems use much longer keys in order to be secure. This is because they’re based on a different kind of math, and the key length to provide adequate security against brute-force attacks is different than it is with symmetric algorithms. A 1280-bit key length for most public-key systems would be relatively weak, and many apps like PGP provide options to use 2048, 4096 or even longer keys. In these cases, the keys are not typed in by the user but stored in files, with the secret key encrypted with a symmetric algorithm using an appropriate length passphrase which the user types in to gain access to the key.
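That last arrangement looks something like this with the third-party Python cryptography package (names and sizes are just for illustration): the long private key lives in a file, itself encrypted under the passphrase the user actually types.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a long public-key pair; nobody could type this in.
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

# Store the secret half encrypted under a typed passphrase.
pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"a long typed passphrase"),
)
with open("secret_key.pem", "wb") as f:
    f.write(pem)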
I couldn’t find an Intel page with specs about their RNG, but if you do a Google search for Random Number Generator Intel Motherboard, you will find plenty of reviews mentioning it.
Of course, if your 128-bit key is made up of typed characters, you have just excluded the vast majority of the possible keys, as few people will make use of the entire set of ASCII characters, including escape sequences, foreign alphabetic characters, and symbols.
This is just the kind of thing that codebreakers live to find, as it makes even a well-crafted cipher breakable.
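Putting rough numbers on “the vast majority”:

import math

# Bits of entropy per typed character for a few realistic character
# sets, versus the 8 bits per byte a raw binary key gets.
for label, size in [("lowercase letters", 26),
                    ("mixed-case alphanumeric", 62),
                    ("all printable ASCII", 95),
                    ("raw bytes", 256)]:
    bits = math.log2(size)
    print(f"{label:24s} {bits:4.1f} bits/char, "
          f"{math.ceil(128 / bits):2d} chars for 128 bits")

Sixteen raw bytes cover the full 2^128 keyspace; sixteen printable-ASCII characters cover only about 2^105 of it.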
Absolutely, but in the real world you have to balance security with usability. If you can get everyone to carry a hardware key or smartcard with a strong key, that’s great, but would you use this kind of storage for all your security purposes? A lot of systems have to rely on keyboard input or they won’t be used, and it’s better to have a less-than-perfect system that gets used than a perfect system that gets circumvented by frustrated or lazy users. Using a key made up of 20-30 characters limited to mixed-case alphanumerics and punctuation still gives you a pretty strong key, even if it does allow an attacker to shortcut brute-force attacks somewhat. For instance, if you’re a PGP user, how do you handle your secret key passphrase?
Technically, this doesn’t break the cipher; it’s an exploit against a weak implementation. You can “break” any message with a brute-force attack, but that doesn’t mean the algorithm itself is weak. Breaking a cipher typically means finding a weakness in the design of the algorithm (not the implementation) that allows attacks easier than brute force. For instance, the distributed.net project to crack a DES-encrypted message doesn’t demonstrate a flaw in DES. DES still does exactly what it was advertised to do; it just doesn’t have keys long enough to stand up to current computing resources. In contrast, the AES selection process had several examples where candidate algorithms themselves were cracked, demonstrating a weakness in the basic design.