The recent threads on encryption reminded me that I had some unanswered encryption questions.
I know just enough about encryption to show my ignorance… and there are a couple of questions I’d like to ask.
I know that longer keys make the ciphertext that much harder to break: 128-bit encryption is much stronger than, say, 64-bit encryption. And I know that fast computers can “break” some encryption keys by brute force.
But here is where I get a little confused:

1. How do they know that they have discovered the correct key? Is it just by decrypting the message with a candidate key and seeing that the result is understandable English (or another language)?
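To make my question 1 concrete, here's a toy sketch of what I imagine the attacker doing. The cipher is a single-byte XOR (far weaker than anything real, chosen only so the whole key space is small enough to search), and the "is this English?" test is a crude letter-frequency check I made up for illustration:

```python
# Toy brute-force key search: try every key, keep the ones whose
# output "looks like English". Single-byte XOR is used only so the
# full key space (256 keys) is trivially searchable.

COMMON = set("etaoin shrdlu")  # most frequent English letters plus space

def xor_cipher(data: bytes, key: int) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts
    return bytes(b ^ key for b in data)

def looks_like_english(text: bytes) -> bool:
    try:
        s = text.decode("ascii")
    except UnicodeDecodeError:
        return False
    # crude test: printable, and mostly made of common English letters
    common = sum(1 for c in s.lower() if c in COMMON)
    return s.isprintable() and common / len(s) > 0.6

ciphertext = xor_cipher(b"attack at dawn", 0x5A)

candidates = [k for k in range(256)
              if looks_like_english(xor_cipher(ciphertext, k))]
# the true key 0x5A (90) will be among the surviving candidates
print(candidates)
```

The point being: the attacker doesn't need to *know* the key is right, only that one candidate decryption is far more plausible than the rest.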

2. Would a “2-stage” encryption scheme prevent them from finding the correct key? For instance, use one stage to encode the message into something not recognizable as a language, then encrypt that result. Then they wouldn’t be able to identify the correct key even if they stumbled on it.
2a. I would think two completely different encryption algorithms applied serially to the message would make it almost impossible to break. But this is such a simple idea that I’m sure others have thought of it.
2b. Is there a reason why this 2-stage approach wouldn’t work, or wouldn’t make the original message harder to decrypt?
2c. Are there any known encryption methods that use this 2-stage approach?
2d. Is there any reason this method couldn’t be expanded to a 3-stage, 4-stage, etc. method, giving that much more security?
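To show what I mean by "two different algorithms applied serially," here's a toy sketch. Both "ciphers" here are toys I invented for illustration (a byte shift and a repeating-key XOR), each with its own independent key; a real cascade would of course use real ciphers:

```python
# Toy 2-stage cascade: two structurally different algorithms,
# each with its own key, applied one after the other.

def stage1_encrypt(data: bytes, key: int) -> bytes:
    # byte-wise addition mod 256 (Caesar-style shift)
    return bytes((b + key) % 256 for b in data)

def stage1_decrypt(data: bytes, key: int) -> bytes:
    return bytes((b - key) % 256 for b in data)

def stage2_encrypt(data: bytes, key: bytes) -> bytes:
    # repeating-key XOR -- a different kind of operation entirely
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

stage2_decrypt = stage2_encrypt  # XOR is its own inverse

message = b"meet me at noon"
k1, k2 = 42, b"\x13\x37"

ciphertext = stage2_encrypt(stage1_encrypt(message, k1), k2)

# decryption must undo the stages in reverse order
recovered = stage1_decrypt(stage2_decrypt(ciphertext, k2), k1)
assert recovered == message
```

The output of stage 1 already looks like noise, so I'd expect an attacker brute-forcing the stage-2 key to have no way to recognize a hit.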
3. If you serially encrypt a message twice, once each with two different 64-bit keys, is the result equal to, weaker than, or stronger than encryption with a single 128-bit key?
I did some work on data compression and found that a 2-stage algorithm, run-length encoding followed by LZW, gave remarkably good compression on our data sets. This is where my 2-stage encryption idea comes from.
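For context, the run-length step I have in mind works roughly like this (a minimal sketch of the general technique, not the actual code from that project; the LZW stage that followed it is omitted):

```python
# Minimal run-length encoding sketch: each run of identical bytes
# becomes a (count, byte) pair. Effective when the input has long runs.

def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][1] == b:
            # extend the current run
            runs[-1] = (runs[-1][0] + 1, b)
        else:
            # start a new run
            runs.append((1, b))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([b]) * n for n, b in runs)

data = b"aaaabbbcccccd"
runs = rle_encode(data)
print(runs)  # [(4, 97), (3, 98), (5, 99), (1, 100)]
assert rle_decode(runs) == data
```

Feeding the RLE output into a dictionary coder like LZW then squeezes out the remaining repetition between runs.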
Thanks for your help,
J.