Help fight my ignorance on encryption

The recent threads on encryption reminded me that I had some unanswered encryption questions.

I know just enough about encryption to show my ignorance… :slight_smile: and there are a couple of questions I’d like to ask.

I know that longer keys make the text that much harder to break: 128-bit encryption is much stronger than, say, 64-bit encryption. And I know that fast computers can “break” some encryption keys.

But here is where I get a little confused:

  1. How do they know that they have discovered the correct key? Is it just by decrypting the message with this key and seeing that it’s now understandable English (or another language)?

  2. Would a “2-stage” encryption prevent them from finding the correct key? For instance, use 1 stage to encode the message into something not identifiable as a language, then encrypt that result. Then they wouldn’t be able to identify the correct key, even if they found it.

2a. I would think 2 completely different encryption algorithms applied serially to the message would make the message almost impossible to break. But this is such a simple idea that I’m sure others have thought of it.

2b. Is there a reason why this 2 stage algorithm wouldn’t work or wouldn’t make the original message harder to decrypt?

2c. Are there any known encryption methods that use this 2-stage approach?

2d. Is there any reason that this method couldn’t be expanded to a 3-stage, 4-stage, etc. method giving that much more safety?

  3. If you serially encrypt a message twice, once each with 2 different 64-bit keys, does this give encryption equal to, less than, or greater than a 128-bit key?

I did some work on data compression and found that a 2-stage data compression algorithm, run-length encoding followed by LZW, gave remarkably good compression on our data sets. This is where my 2-stage encryption idea comes from.

Thanks for your help,
J.

In general longer keys are better, because if you are going to try a brute-force attack (trying every possible key until one works), every additional bit doubles the time it will take to break. But most cryptanalysis is about finding flaws in the encryption or key-generation algorithms themselves – if you find one, you may have a shortcut that lets you eliminate a large number of potential keys to try.
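
To put rough numbers on “every additional bit doubles the time,” here’s a quick back-of-the-envelope sketch in Python (the billion-guesses-per-second rate is just an assumption for illustration):

[code]
# Rough brute-force estimate: each extra key bit doubles the keyspace.
# The guess rate below is an arbitrary assumption, purely for illustration.
GUESSES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (56, 64, 128):
    keys = 2 ** bits
    years = keys / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: {keys:.2e} keys, roughly {years:.2e} years to try them all")
[/code]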

Basically, yeah. It’s pretty easy for a computer to look at a chunk of data and make a reasonable guess as to whether it’s English (or Russian, or whatever) as opposed to gibberish.
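
For what it’s worth, here’s a minimal sketch of the kind of check I mean (the letter set and threshold are just illustrative choices): in real English text, most of the letters come from a dozen common ones, and random-looking gibberish doesn’t.

[code]
# Crude "does this look like English?" test.  The common-letter set and the
# 0.6 threshold are rough, illustrative choices.
COMMON = set("etaoinshrdlu")

def looks_like_english(text: str, threshold: float = 0.6) -> bool:
    letters = [c.lower() for c in text if c.isalpha()]
    if not letters:
        return False
    common_fraction = sum(c in COMMON for c in letters) / len(letters)
    return common_fraction >= threshold

print(looks_like_english("attack at dawn"))   # True: 9 of 12 letters are common
print(looks_like_english("qzxjv kwq pzzrf"))  # False: 1 of 13 letters is common
[/code]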

It makes it harder. DES is an example of an encryption algorithm that is often layered this way. DES keys are only 56 bits. Computers got faster a lot quicker than most people expected, and breaking a DES key by brute force these days is trivial. So the 3DES standard takes three separate 56-bit keys and runs DES three times. This essentially triples the key length without having to program a new algorithm (though meet-in-the-middle attacks mean the effective strength is closer to double that of DES).
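
Just to show the shape of the layering (this is a toy byte-substitution stand-in for DES, not the real thing), the encrypt-decrypt-encrypt structure that 3DES uses looks something like this:

[code]
import random

def make_sbox(key: int) -> list:
    # Toy "block cipher": a key-derived permutation of byte values.
    # A stand-in for DES purely to show the layering, not a real cipher.
    sbox = list(range(256))
    random.Random(key).shuffle(sbox)
    return sbox

def enc(key: int, data: bytes) -> bytes:
    sbox = make_sbox(key)
    return bytes(sbox[b] for b in data)

def dec(key: int, data: bytes) -> bytes:
    sbox = make_sbox(key)
    inverse = [0] * 256
    for i, v in enumerate(sbox):
        inverse[v] = i
    return bytes(inverse[b] for b in data)

def triple_encrypt(k1, k2, k3, data):
    # Encrypt-decrypt-encrypt, the same layering 3DES uses.
    return enc(k3, dec(k2, enc(k1, data)))

def triple_decrypt(k1, k2, k3, data):
    return dec(k1, enc(k2, dec(k3, data)))

msg = b"attack at dawn"
ct = triple_encrypt(111, 222, 333, msg)
assert triple_decrypt(111, 222, 333, ct) == msg
[/code]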

I dunno the answers to the rest of your questions, but I’m sure there are a lot of people who have pondered them.

Double encryption doesn’t always strengthen a cypher - sometimes it actually weakens it.

Very little data requires impossible-to-break encryption. All that is required is to make the effort required to break the encryption worth more than the data itself. There are essentially unbreakable encryption schemes (one-time pads), but then the human element becomes the weak point. Witness all of the “social engineering” exploits going around.

From the cryptanalyst’s point-of-view, your 2-stage encryption may be mathematically equivalent to 1-stage encryption with another key and/or algorithm. It may not be any more secure than 1-stage encryption. To give a simple example, concatenating two monoalphabetic substitution ciphers just results in a cipher that is equivalent to one monoalphabetic substitution cipher with a different key. The cryptanalyst is not obligated to go backwards through each step that was used to encrypt the data.
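
Here’s that monoalphabetic example in Python: composing two random substitution alphabets just gives a third substitution alphabet, and the cryptanalyst can attack that one directly.

[code]
import random
import string

def random_subst_key(seed: int) -> dict:
    # A monoalphabetic substitution key: a shuffled alphabet.
    shuffled = list(string.ascii_lowercase)
    random.Random(seed).shuffle(shuffled)
    return dict(zip(string.ascii_lowercase, shuffled))

def encrypt(text: str, key: dict) -> str:
    return "".join(key.get(c, c) for c in text)

key1 = random_subst_key(1)
key2 = random_subst_key(2)

# Applying key1 and then key2 is identical to applying one combined key.
combined = {p: key2[key1[p]] for p in string.ascii_lowercase}

msg = "attack at dawn"
assert encrypt(encrypt(msg, key1), key2) == encrypt(msg, combined)
[/code]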

If the 2 stages are the same algorithm, yes, I can see that. But what if the algorithms are different? For example, using a monoalphabetic substitution cipher for stage 1 and, say, 128-bit encryption for stage 2. How would a decrypter know they had found the right 128 bit key?

J.

The 1-stage equivalent algorithm won’t necessarily be the same as either of the algorithms used. It doesn’t even have to be some combination of the two (though that is probably what you’d hope for, since it obviously can’t be more complex than that). Something one algorithm does may get simplified away when the other is applied.

In your example, the person decoding will never notice the substitution cipher - they’ll just be using a slightly different key, since in that case the pattern is identical.

But can you be sure that it’s the right cleartext message? For example, if applying guessed_key to an encrypted message yields the cleartext “attack at dawn”, how can you be sure that applying correct_key to the message wouldn’t return “attack at nine”?

IANACryptanalyst, but IIRC that’s only really a problem with “one-time pads”, where cracking by brute force means trying every possible key, which yields every possible cleartext message of that length.

Two examples to illustrate the point:

Encoding a message twice with Rot-3 (substitute A with D, B with E, etc.; the classic Caesar cypher) is simply Rot-6. There’s no change in encryption strength.

Encoding a message twice with Rot-13 is fatally weak, as the result is the original message.
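
In code (assuming uppercase letters only, to keep it short):

[code]
def rot(text: str, n: int) -> str:
    # Caesar shift by n positions, uppercase A-Z only for simplicity.
    return "".join(
        chr((ord(c) - ord("A") + n) % 26 + ord("A")) if c.isupper() else c
        for c in text
    )

msg = "ATTACK AT DAWN"
assert rot(rot(msg, 3), 3) == rot(msg, 6)   # Rot-3 twice is just Rot-6
assert rot(rot(msg, 13), 13) == msg         # Rot-13 twice undoes itself
[/code]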

See unicity distance. For a sufficiently long message, which usually isn’t very long, the solution is unique.
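
As a back-of-the-envelope illustration (using the common textbook figure of roughly 3.2 bits of redundancy per character of English), the unicity distance is about H(K)/D, the key entropy divided by the per-character redundancy:

[code]
import math

# Unicity distance U = H(K) / D:
# H(K) = key entropy in bits, D = per-character redundancy of the language.
# D of about 3.2 bits/char is a common textbook estimate for English.
D = 3.2

def unicity(key_entropy_bits: float) -> float:
    return key_entropy_bits / D

print(unicity(56))                              # 56-bit DES key: ~18 characters
print(unicity(128))                             # 128-bit key: ~40 characters
print(unicity(math.log2(math.factorial(26))))   # substitution cipher: ~28 characters
[/code]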

You’re right, a key can be found to turn any set of gibberish into any cleartext in any language you desire (Mind you, the key might wind up being as long as the message itself). The working assumption is that some part of the encrypted text will be known already. When the key being tried yields that known text, the rest of the solution can be trusted (more or less).

The known part need not be much. I vaguely recall that as little as eight bytes would suffice – part of the header of a Word document would be plenty. One of the reasons we could crack the IJN’s JN-25 code during World War II was that a report from a particular entity would always start with the same greetings before getting to the meat of the report.

You’re also assuming that our bad guy doesn’t have access to the decryption system (software or instructions or whatever).

However, a lot of the time, they do. I.e. they can get hold of the decrypting software, either because it’s publicly available or because they can find it.

Similarly, if they can get the encryption software (which, depending on what we’re using encryption for, can be a requirement of the system), it’s easy to work out what types of encryption are being used.

In short it’s normally quite easy to work out what’s going on. Particularly because people don’t typically use ‘new’ or ‘unknown’ systems as there’s no point in doing something that could introduce a potential weakness when there’s plenty of secure encryption already out there.

Yes, I understand that double encrypting with same or similar algorithm doesn’t strengthen the encoding, and could weaken it.

My main questions regard double encrypting with different algorithms. For example, let’s posit a stage 1 algorithm a little more complicated than a monoalphabetic cypher.

Let’s assume our code has 64 possible characters: 26 upper case letters, 26 lower case letters, digits 0 -9, space, and period. Arrange the characters in some order. For illustration, let’s put them in this order:

<lower case letters>, <0 - 9>, <space>, <period>, <upper case letters>

We encode a character by using its index number in the list: so “a” is 0, “b” is 1, … “Y” is 62 and “Z” is 63. Just to add a little spice, after each character is encoded, we rotate the letters to the left by 1 character: so after encoding 1 character, “b” is 0, “c” is 1, … “Z” is 62, and “a” is 63.

So we encode our message using this stage 1 algorithm into 6 bit numbers, concatenating the numbers into a stream of 0’s and 1’s.
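
In rough Python, the stage 1 encoder I have in mind would look something like this (just a sketch, with the 64-character ordering as above):

[code]
import string

# 64-symbol alphabet in the order described above:
# lower case, digits, space, period, upper case.
ALPHABET = string.ascii_lowercase + string.digits + " ." + string.ascii_uppercase
assert len(ALPHABET) == 64

def stage1_encode(message: str) -> str:
    # Encode each character as a 6-bit index, rotating the alphabet
    # left by one position after every character is encoded.
    bits = []
    for i, ch in enumerate(message):
        index = (ALPHABET.index(ch) - i) % 64   # i left-rotations so far
        bits.append(format(index, "06b"))
    return "".join(bits)

print(stage1_encode("Attack at dawn."))
[/code]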

For our stage 2 algorithm, let’s use 64-bit encryption.

Our stage 1 algorithm has the advantage of being very simple to implement in a computer program. It also avoids the trap of always encoding the same letter with the same number. So statistical searches for the most common letters will not work. It also has the benefit of not being recognizable as plain text.

So we take this encoded message, not recognizable as plain text, and encrypt it with a completely different algorithm.

Here is my main question: Since it isn’t possible for the decrypter to recognize the message as plain text even if they guess the correct key, wouldn’t this 2 stage encryption be MUCH stronger?

Thanks,
J.

That’s why you need to apply it three times.

Among the usual assumptions for testing the security of an encryption protocol are that the algorithm (not the keys, but all of the fixed parts of the protocol) is known to the codebreaker. Obviously this is not always the case, but it’s a reasonable starting assumption in many cases.

Under this assumption, a two-stage encryption process with keys of length k[sub]1[/sub] and k[sub]2[/sub] is (obviously) exactly equivalent to a single-stage encryption with key length k[sub]1[/sub]+k[sub]2[/sub] (where now the encryption algorithm is just the composition of your two stages). So your new algorithm is, at best, as secure as the sum of the key lengths of the stages. (Here I’m equating “key length” and “security,” which is not really true. Some algorithms have effective key lengths much smaller than the amount of information actually required to specify the key. Apply the appropriate scaling to all the keys to get a real estimate of “security.”)
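
Schematically, that bookkeeping looks like this (toy byte-wise stages, purely to show how the keys combine, with no claim about security):

[code]
def compose(stage1, stage2):
    # View two stages as one algorithm whose key is the pair (k1, k2);
    # the combined key length is just the sum of the stage key lengths.
    def combined(key_pair, data):
        k1, k2 = key_pair
        return stage2(k2, stage1(k1, data))
    return combined

# Toy single-byte-key stages, for illustration only.
xor_stage = lambda k, data: bytes(b ^ k for b in data)
add_stage = lambda k, data: bytes((b + k) % 256 for b in data)

two_stage = compose(xor_stage, add_stage)
print(two_stage((0x2A, 17), b"attack at dawn"))
[/code]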

If the first stage, at least, is a strong encryption algorithm, then the security will probably be at least that of its key length. (Otherwise, that algorithm could be broken, at least in part, by encrypting it with your second stage to weaken the overall encryption.) This is probably true for the second stage as well; it requires some further assumptions about the encryption, but these are fairly weak. So unless you choose pathological, or very weak, algorithms for your stages, you at least are unlikely to make things any worse than an individual stage.

Now it’s probably true that if your two stages are both strong algorithms then the composition of the stages is even stronger. However, this is generally difficult to prove. The example of DES (the 56-bit Data Encryption Standard) and 3DES (a composition of three rounds of DES, sometimes with only two distinct keys, in the encrypt-decrypt-encrypt sequence DES[sub]e[/sub](DES[sub]d[/sub][sup]-1[/sup](DES[sub]e[/sub](m)))) was mentioned earlier. 3DES was widely believed to be more secure than DES, but it took several years for this to be proved. The problem, basically, is that it is conceivable that two rounds of DES are equivalent to a single round with a different key; that is, DES[sub]d[/sub](DES[sub]e[/sub](m)) = DES[sub]f(d,e)[/sub](m) for some key-mapping function f. If this were the case, the keyspace of DES would form a group. This is what goes wrong with the composition of two Caesar ciphers, for example; the keyspace is just the group of integers modulo 26 under addition.

The particular example you give (post #14) is much weaker than the general case, however. Your first stage has no key; it’s just a fixed mapping of bits to bits. Its only goal is to frustrate a plaintext-recognition algorithm applied to a candidate decryption. And it will frustrate the simplest such algorithms. However, there are plenty of more general algorithms that will easily detect this sort of initial jumbling. A very simple measure is a block entropy estimate (the entropy of a histogram of block frequencies); for a correct decryption of your second stage, this will register entropies, on blocks of 6n bits, much less than the expected ~6n.
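
A rough version of that block-entropy measure might look like this (the block size and the amount of data are illustrative; you need enough blocks for the histogram to mean anything):

[code]
import math
import os
from collections import Counter

def block_entropy(bits: str, block_size: int = 12) -> float:
    # Shannon entropy (in bits) of the histogram of fixed-size bit blocks.
    blocks = [bits[i:i + block_size]
              for i in range(0, len(bits) - block_size + 1, block_size)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Random data comes out near the ceiling; a correct decryption of the second
# stage yields stage-1 output, whose 12-bit blocks (pairs of 6-bit characters)
# are far from uniform, so its entropy falls well short of that.
random_bits = "".join(format(b, "08b") for b in os.urandom(12000))
print(block_entropy(random_bits))
[/code]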

This doesn’t directly address the OP, but NSA and NIST have recently put out publications indicating a departure from AES, as it’s no longer (er… not for much longer, depending on who you listen to) computationally infeasible to break it (which is the goal of encryption - not “impossible,” just computationally infeasible) and key lengths are getting out of hand. NSA wants all of the government moved over to the new standard by 2010.
The vulnerability component is the product of ever-advancing technology and developments in the crypto world.
The key length issue is also tech-driven, in that encryption and decryption take time to process. Long (like 2048-bit) keys are bandwidth and hardware hogs.
The new standard will be based on elliptic-curve cryptography (ECC), bringing acceptable key lengths back to where they were thirty years ago, but with greater security. The aforementioned 2048-bit key (in an RSA/AES scheme) would be reduced to 224 bits with ECC, but would provide equivalent security. That saves a lot of back-end hassle. The link below is to an NSA article. It’s pretty interesting stuff.

Linky-link:
http://www.nsa.gov/ia/industry/crypto_elliptic_curve.cfm

To cite a classic example, the one used when decrypting messages from the WW2 Enigma encryption device:

The manual for the device suggested that to enhance security, messages should start with a nonsense word; “Sonnenschein” (sunshine) was given as an example. True to German Genauigkeit (exactingness), an overwhelming majority of the Enigma messages started with the word Sonnenschein.

In addition, most of the messages ended with the words “Heil Hitler”. Thus it was only required to test a key on the first few and last few letters of a message, and rather than analyzing the resulting plain text for “German-ness”, these specific words could be checked for.

The article from the NSA that you cited, talked about a move from RSA/DH to ECC for public key cryptography. It didn’t say anything about AES being obsolete or deprecated.

To follow up on my previous message, NSA policy for commercial cryptography is outlined in Fact Sheet NSA Suite B Cryptography. AES is the only symmetric key algorithm listed, and is considered adequate for the protection of classified material at the SECRET and TOP SECRET levels, providing that it’s properly implemented, tested, etc.