It also requires a really good source of random numbers, which are a lot harder to come by than one might think.
Really? I wasn’t joking. How can encrypting something 10 times over using different algorithms and passwords make it easier to crack? I don’t think encryption algorithms are supposed to show patterns in the output that make it obvious what you are encrypting. Cite, please.
See chapter 15 (in particular, section 15.7) of Bruce Schneier’s Applied Cryptography 2e for some discussion of combining encryption algorithms. In short, if you’re very careful about picking your keys, then multiple encryption is at least as strong as the first algorithm you use. If not, all bets are off.
I don’t have that book. But what is the basic mathematical process that makes multiple encryptions easier to crack than one? It seems kind of ironic that encrypting something again makes it less secure. So, do professional hackers actually take an encrypted file and re-encrypt it again countless times in the process of hacking it?
In many cases, E[sub]a[/sub](E[sub]b[/sub](P, K[sub]b[/sub]), K[sub]a[/sub]) is equivalent to E[sub]c[/sub](P, K[sub]c[/sub]), where E[sub]c[/sub] may be weaker than E[sub]a[/sub] or E[sub]b[/sub]. The cryptanalyst doesn’t have to attack E[sub]a[/sub] and E[sub]b[/sub], just E[sub]c[/sub].
If you’re using completely different passwords for each, then multiple layers of encryption won’t do any harm (beyond making it more inconvenient to legitimately access your data). But they usually won’t do any good, either. If you do use the same password or key, then you can inadvertently weaken your encryption. This is not a method used by attackers, since if they knew what key you used, they could just decrypt it directly.
For a concrete (but very simple) example, let me expand a bit on the joke in my previous post. One very weak form of “encryption” is an alphabet rotation. Rot-1 (rotation by one place), for instance, would map A to B, B to C, C to D, and so on, with Z wrapping around to A. Rot-2 maps A to C, B to D, C to E, and so on. “Rot” can be thought of as the method, with the number as the key. Now, what happens if I encode something twice with this method, using two different keys? I just get another rotation cipher, with a different key: Rot-n composed with Rot-m is just Rot-(n+m), working modulo 26. And if I’m really unlucky, or choose my keys badly, I might end up doing something like applying Rot-13 twice, which just brings me straight back to the plaintext. In this case, not only am I no more secure, I’m actually less secure by virtue of multiple encryptions.
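To make that concrete, here’s a minimal Python sketch (a toy, obviously, not real cryptography; the rot() helper is just something I made up for illustration):

[code]
# Composing two rotation ciphers just yields a third rotation cipher.

def rot(text, key):
    """Rotate each letter A-Z forward by `key` places, wrapping around."""
    return "".join(
        chr((ord(c) - ord("A") + key) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

plaintext = "ATTACK AT DAWN"

# Rot-13 applied twice undoes itself:
assert rot(rot(plaintext, 13), 13) == plaintext

# Rot-5 followed by Rot-8 is exactly the same as a single Rot-13:
assert rot(rot(plaintext, 5), 8) == rot(plaintext, 13)
[/code]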
Well, do keep in mind that ultrafilter’s quote says the total encryption strength (if you pick keys properly) deteriorates at worst to that of the first encryption algorithm in the chain, but doesn’t say it reduces any further. Indeed, as you’ve just pointed out, it couldn’t really; if re-encrypting the encrypted output of the first algorithm gives you something that’s easy to break (and the re-encryption is, itself, presumably easy to carry out), then the first algorithm is, by this method, easy to break.
That is to say, by definition, the security of the first algorithm is determined by how hard it is to break it; if your re-encryption process gives an easy way to do so, then the conclusion is that the first algorithm wasn’t all that secure in the first place, not that the re-encrypting (with new, independent keys) reduced its security.
If I may quibble just a tiny bit: assume the attacker knows your encryption method and key generation process, but just doesn’t know your actual keys. It seems to me that insecurity can arise if the process by which you choose the key for encryption method 1 is non-independent of the process for choosing the key for encryption method 2. But if you use independent processes (e.g., generating each separately from a source of true randomness) that just happen, on one occasion, by blind luck, to give the same key or some such, well, that’s no problem. An attacker can’t account for that; he still has to consider all the possibilities (since they would, in fact, be possible) in which such blind luck didn’t occur. That is, it would be insecure to generate a single random key K and perform Rot-K composed with Rot-K; in this case, an attacker knows the ciphertext is an even shift from the plaintext, which is 1 bit more information available to him than if you had just done a single Rot-K. But to generate two independent random keys K1 and K2 and perform Rot-K1 composed with Rot-K2 is not going to reduce security over the single Rot (it will, in fact, be precisely as secure as the single Rot), no matter what K1 and K2 happen, in the end, to turn out to be, even if they turn out to be the same.
Or, to put it another way: if your encryption process is to always apply Rot-13, or to always apply Rot-0, that’s very very insecure. But if your process is to generate a random K and apply Rot-K, then that will be equally secure, whether or not the K you actually generate happens to turn out to be 0, 13, 7, or anything else. The attacker doesn’t know how it turned out, and has to do the same amount of work to deduce which key it is, whether or not it turned out to be a “nice” one like 0.
Or, to put it another way, the one-time pad is equally unbreakable no matter which random pad actually gets generated, even if the pad that gets generated is all 0s.
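To put a number on that 1-bit leak from the double-Rot-K case, here’s a quick self-contained sketch (same made-up rotation scheme as before, with keys reduced mod 26):

[code]
# Reusing one key K for both rotations leaks the parity of the total
# shift; two independently chosen keys leak nothing extra.

# Same key twice: the total shift is 2K mod 26, which is always even.
same_key_shifts = {(2 * k) % 26 for k in range(26)}
print(sorted(same_key_shifts))   # 13 values, all even: attacker gains 1 bit

# Independent keys: K1 + K2 mod 26 ranges over every possible shift.
indep_shifts = {(k1 + k2) % 26 for k1 in range(26) for k2 in range(26)}
print(len(indep_shifts))         # 26: exactly as secure as a single Rot-K
[/code]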
Yes, a good encryption algorithm is not supposed to show any patterns in the output; it should be indistinguishable from random data. But what if the encryption algorithm you’re using is flawed and does end up producing somewhat predictable output? An attacker can then use this information to help recover the original document.
The classic example in software is an encryption program which adds a header or magic number to all encrypted files to indicate to the operating system that the file is associated with that particular encryption program.
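As a hypothetical illustration of that leak (the tool name and magic bytes below are invented, not any real program’s):

[code]
# A tool that prepends a fixed magic number to every encrypted file
# makes its output trivially identifiable, even if the cipher is sound.

MAGIC = b"CRYPTOTOOL\x01"   # invented identifier, for illustration only

def encrypted_by_cryptotool(path):
    """Return True if the file starts with the tool's magic bytes."""
    with open(path, "rb") as f:
        return f.read(len(MAGIC)) == MAGIC

# An attacker scanning a disk can now pick out exactly which files this
# tool produced: metadata leakage before any cryptanalysis begins.
[/code]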
For more information, see the Wikipedia article on superencryption. (For even more information, see Schneier’s book, as someone else recommended.)
It occurs to me that everything I said in that above post is horribly misleading, in that it holds only for the information-theoretic security of an encryption method (how much information/uncertainty the ciphertext gives about the plaintext), while real cryptography largely measures security in terms of computational complexity (how computationally difficult it is to recover the plaintext from the ciphertext). The two are only very loosely connected. For example, with RSA, the public key does in fact contain enough information to recover the private key, and is thus information-theoretically completely insecure; yet breaking RSA is generally believed to be infeasible in terms of computational complexity, and thus secure in that sense. And, conceivably, random choice of unfortunate keys could lead to a computationally-easy-to-break special case of an encryption method that is computationally difficult to break on average (thus allowing re-encryption to reduce security from a complexity standpoint, though not from an information-theoretic perspective). I think.
So, yeah. Sorry for injecting confusion. I’ll slink away now…
Professional cryptographers have yet to successfully make a one-time pad, so at this point it is fictional. I’m guessing you’re wondering which algorithm to set in one of your applications that gives an option. Generally, if AES256 is available, set that one. It is a highly reviewed algorithm with more than enough key length. If you find that it is too slow on your computer for your liking, then use Blowfish. The most important thing to remember, though, is to use a very strong password. If you want to know about passwords, then read Schneier’s Wired article about password security (in which he covers all you should need to know).
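If you’re doing this in code rather than through an application’s settings dialog, here’s a minimal sketch using Python’s third-party cryptography package (my choice of library, not something from this thread; pip install cryptography):

[code]
# AES-256 in GCM mode via the `cryptography` package; GCM also
# authenticates the ciphertext, so tampering is detected on decrypt.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the 256-bit key length above
aesgcm = AESGCM(key)

nonce = os.urandom(12)   # GCM nonce: must never repeat for the same key
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"
[/code]

(In practice you’d derive the key from that strong password with a key-derivation function rather than generating it directly, but that’s a separate topic.)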
One-time pads have been in use for more than half a century. What are you talking about?
This is nonsense. Anyone with a pencil, paper, and elementary knowledge of arithmetic can devise and use a one-time pad.
Wouldn’t hurt to have some dice…
You still need a good source of random numbers. Sure, you can just pick numbers at random, but are you really willing to bet that the NSA can’t find some subtle patterns to it if given enough to work with? And for computer communication involving large files, you’re going to need a lot of random data. Are you going to roll a d8 nearly three million times to encrypt a 1 MB file? (A d8 yields 3 bits per roll, and a megabyte is over eight million bits.)
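For scale, here’s what the mechanics look like in Python. Note the big caveat: os.urandom() is the operating system’s cryptographic PRNG, which I’m using as a stand-in; a proper one-time pad demands true randomness (the dice, or a Geiger counter):

[code]
# One-time-pad mechanics: XOR the message against an equally long pad.
# CAVEAT: os.urandom() is a CSPRNG stand-in, not true hardware entropy.

import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data against the pad; the same call encrypts and decrypts."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))   # one pad byte per message byte; never reuse

ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message   # XOR again to decrypt

# A 1 MB file needs a full megabyte of pad, which is exactly the
# volume problem described above.
[/code]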
I built my own true random number generator from a Geiger counter, a smoke detector, and an old computer. I was inspired by John Walker’s HotBits random number generator. It didn’t cost much to build and it produces random numbers at a fast enough speed for most uses.
Would it be possible to generate a one-time pad using Diffie-Hellman key exchange?
I’m partially aware of the expense involved in generating an adequate one-time pad and the expense in DH key exchange. But it seems to me that if you want the security of the one-time pad, the added expense of DH key exchange will be well worth it.
(Superbrief summary of DH key exchange, from a class over 5 years ago, so I may be a bit off: Somewhat like public-key encryption uses two related numbers to asymmetrically encrypt and decrypt a message, DH key exchange uses two related numbers to generate a symmetric encryption key – without transmitting the numbers or the key.)
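Here’s a toy sketch of that exchange in Python (the tiny p = 23 and g = 5 are the classic textbook values, far too small for real use, where p would be a standardized prime of 2048+ bits):

[code]
# Toy Diffie-Hellman exchange. The public parameters below are
# textbook-sized for illustration only; real DH uses 2048+ bit primes.

import secrets

p = 23   # public prime modulus
g = 5    # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice transmits A = g^a mod p
B = pow(g, b, p)   # Bob transmits B = g^b mod p

# Each side combines the other's public value with its own secret:
alice_key = pow(B, a, p)   # (g^b)^a = g^(ab) mod p
bob_key = pow(A, b, p)     # (g^a)^b = g^(ab) mod p
assert alice_key == bob_key   # shared symmetric key, never transmitted
[/code]

Note, though, that any pad generated this way inherits DH’s merely computational security, which is the catch the next reply gets at.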
Diffie-Hellman key exchange is secure in that it’s conjectured (though not proven) to be computationally difficult to break. But it still can be broken, just not necessarily in a computationally efficient way. The proper one-time pad, however, is the strongest possible encryption because it’s impossible to break: computationally feasibly, computationally infeasibly, hypercomputationally, whatever. With a truly securely exchanged one-time pad, one can simply never gain any information about the plaintext from the ciphertext, no matter what one tries.
So, necessarily, using DH key exchange to get a shared pad for encryption will be weaker than the proper one-time pad, in that it will be possible to break if you are willing to wait a long, long time. But the resulting system might still be very secure. It just wouldn’t be the holy grail of security.