As I understand it - encrypted message systems generate a different key for each client. They also rely on asymmetric (public-private key) encryption, where the ability to encode does not give one the ability to decode. It’s as if each person using Enigma had a differently configured machine and a different pad.
(Also, as I recall, Enigma encoded in blocks of 5 characters?)
An 8-character password with upper/lower case, numbers, and punctuation draws from roughly 75 possible characters, so about 75 to the 8th power combinations. Make it 10 characters, and any brute force attack would take 75x75 = 5625 times longer. But - large prime encryption, with 256 digits or more, theoretically means brute force attacks would take longer than the age of the universe. Any crack would rely on flaws in the algorithm.
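A quick sketch of that arithmetic (the 75-character alphabet and the guesses-per-second rate are illustrative assumptions, not measured figures):

```python
# Back-of-envelope brute force math for the keyspace sizes above.
# Assumes a 75-character alphabet and 1 billion guesses/second (illustrative).
ALPHABET = 75
RATE = 1_000_000_000  # guesses per second (assumed)

keyspace_8 = ALPHABET ** 8
keyspace_10 = ALPHABET ** 10

print(f"8-char keyspace:  {keyspace_8:.3e}")
print(f"10-char keyspace: {keyspace_10:.3e}")
print(f"ratio: {keyspace_10 // keyspace_8}")  # 75 * 75 = 5625
print(f"8-char worst case: {keyspace_8 / RATE / 86400:.1f} days")
```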
For example - dictionary attacks, easy. I tried one in the Pentium days as an experiment, exploiting the Microsoft LAN Manager password flaw, and had about 1/3 of the passwords within 5 minutes; brute force then found most of the rest within 2 hours. LAN Manager also stored the password in upper case only, encrypted in 8-character segments - which was tolerable in the 386 days, but the feature has needed to be turned off ever since.
I have not heard of any significant attacks that actually decoded a data stream of properly encrypted internet data. Usually they hack in by breaking into the device at one end or the other.
So - getting into the phone or computer - somewhat easy, aided too by social engineering and people who choose poor passwords. Decoding data between two properly encrypted devices - virtually impossible.
What made me understand this is not calling it public key. I think of it as public lock - private key. I give you the ability to secure your message with a lock that my key will open. I can hand those locks around as much as I want, thousands, millions, because it’s very hard to reverse engineer the key. It’s easier to build a million random keys and try them than try to take the lock apart to design a key to fit.
I’m sure someone will come along to tell me my understanding is all wrong, and that’s ok.
Even worse, it was 7-character segments. 8 characters would have been pretty strong against brute force attacks, but 7 was very weak, especially if the password was less than 14 characters. In the case of an 8-character password, you were brute forcing a 7-character password and a 1-character password. I think on the hardware of the day it was something like 23 hours max, maybe two days, to brute force all 7-character passwords.
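To put rough numbers on why the 7+1 split was so damaging (the 69-character alphabet is an assumption - LM upper-cased letters, which is roughly what shrinks the set to that size):

```python
# Why splitting into 7-character halves gutted LM security.
# Assumes a 69-character upper-cased alphabet (illustrative).
N = 69
full_8 = N ** 8          # attacking one true 8-char password
split = N ** 7 + N ** 1  # attacking a 7-char half plus a 1-char half

print(f"one 8-char search: {full_8:.2e}")
print(f"7+1 split search:  {split:.2e}")
print(f"the split is ~{full_8 // split}x cheaper")
```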
It becomes even more difficult with perfect forward secrecy. In that case, decryption of old encrypted data cannot be performed, even if the secret keys are obtained. This, for example, means that if the government records https traffic, they can’t just go to the company hosting the https server and get the keys.
No, that’s pretty good. “Public lock, private key” is a good description of public key cryptography. The ambiguity is that cryptographers use the word “key” to mean “small piece of data involved in encryption,” which encompasses both the key used in encrypting (the public one, the “lock”) and the one used in decrypting (the private one, the “key”).
It’s also a bit misleading in that it can work either way: The owner of a private key can prove their identity (or at least, their knowledge of the private key) by encrypting a message, which can then be decrypted by use of the public key. In that case, you’re effectively securing something with a key, and then opening it with a lock. But that’s just a nitpick.
And it should be acknowledged that there is one sense in which you can decrypt a well-encrypted message: If you have reason to think that you know exactly what a message says, then you can test that exact message by encrypting it and seeing if it matches the encrypted text that you have (this assumes the encryption is deterministic - modern systems add random padding partly to defeat exactly this). It usually isn’t practical anyway (any change at all, even a single letter or punctuation mark, will result in a completely different encrypted message), but it could be used to verify a he-said-she-said situation.
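That guess-and-check idea can be sketched with toy, textbook RSA (the tiny primes 61 and 53 are purely illustrative; real keys are enormous, and real systems add random padding specifically to block this):

```python
# Toy textbook RSA (deterministic, no padding): anyone holding the
# public key (n, e) can test a guessed plaintext against a ciphertext.
p, q = 61, 53
n = p * q    # 3233, the public modulus
e = 17       # public exponent
d = 2753     # private exponent (kept secret)

intercepted = pow(65, e, n)  # a ciphertext someone captured

guess = 65
print(pow(guess, e, n) == intercepted)  # True: the guess is confirmed
print(pow(66, e, n) == intercepted)     # False: wrong guess
```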
As naita mentioned, the message-exchange key is even more specific than that. Usually, the public/private key pair are used only to generate and exchange a randomly generated session key that’s then used to perform more conventional symmetric encryption only for the lifetime of that “session.” The definition of “session” varies by application, but the point is meant to be that the key used to encrypt the actual plaintext has a limited lifetime. In your analogy, you get a differently configured enigma machine with a different pad each session of each person.
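That hybrid pattern can be sketched like this (toy RSA with tiny primes, and a SHA-256 keystream standing in for a real symmetric cipher like AES - every name and size here is illustrative):

```python
import hashlib
import secrets

# --- asymmetric part: toy RSA key pair (illustrative tiny primes) ---
n, e, d = 3233, 17, 2753

# --- symmetric part: XOR with a SHA-256-derived keystream (AES stand-in) ---
def keystream_xor(key: int, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(f"{key}:{counter}".encode()).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Sender: pick a random session key, wrap it with the recipient's public key
session_key = secrets.randbelow(n)
wrapped_key = pow(session_key, e, n)  # only the private key unwraps this
ciphertext = keystream_xor(session_key, b"meet at noon")

# Recipient: unwrap the session key, then decrypt symmetrically
recovered_key = pow(wrapped_key, d, n)
plaintext = keystream_xor(recovered_key, ciphertext)
print(plaintext)  # b'meet at noon'
```

The expensive asymmetric operation happens once, to move the session key; everything after that is fast symmetric work, and the session key is discarded when the session ends.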
Not only can governments not easily decrypt the contents of good commercially available encryption, but typically, for a well-designed system, neither can the company providing the encryption.
You can know exactly how it works, but unless you have the key, you won’t be able to crack it, absent a flaw in the system.
This may have been mentioned, but an important crypto engineering concept here is perfect forward secrecy. This means that, if the design of the commercial encryption system is not incompetent, cracking the server’s secret keys will not enable the attacker to read all the messages they have intercepted.
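The usual mechanism behind forward secrecy is an ephemeral key exchange such as Diffie-Hellman, sketched here with classroom-sized numbers (p = 23 and g = 5 are purely illustrative; real parameters are thousands of bits):

```python
import secrets

# Toy Diffie-Hellman: each session uses fresh, throwaway exponents,
# so capturing traffic today doesn't decrypt past or future sessions.
p, g = 23, 5  # public parameters (toy-sized; real ones are huge)

def new_session():
    a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
    b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)  # values exchanged in the clear
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob  # both derive the same session secret
    return shared_alice

# Each call discards its exponents afterwards - there is no long-term
# key whose later theft would reveal these session secrets.
print(new_session(), new_session(), new_session())
```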
The way I describe it is, the front door to your house is protected by a private key cryptosystem. There is (normally) only one key that will open the lock, and if you want to give someone access to your house, you hand them a copy of that key.
If the front door to your house were protected by a public key cryptosystem, every person would just always carry one key at all times, and the shape of that key would be publicly known. If you want to give someone access to your house, you reconfigure the lock on your front door to open when that person’s key is inserted (meaning that the lock has to accept many keys if more than one person has access).
(Apparently some hotel rooms work like this in some cases)
Yeah, I have a relative who was super-paranoid about how they had a firewall, and turned their PC off and unplugged their router every night, etc… because they were afraid of hackers getting their financial information.
They weren’t very happy when I pointed out that they didn’t have a password on their desktop, and that the easiest way for someone to steal that stuff would be to bust out their huge plate glass front window when they’re not home, go into the room with the computer, start it up, and copy it all onto a thumb drive. Or just walk off with the computer and scour it for juicy bits at their leisure.
I always thought of it conceptually as something akin to one of those big blue mailboxes. Basically anyone with the public key can open up the hopper and put something in, but once it’s in, only the person with the private key to the lower chamber can actually open it up and get it out.
So you can send out as many copies of the public key as you want- all they can do is open up the hopper and let you put something in. But to get it back out, you have to have the private key.
It’s not a perfect analogy, but it’s one that allows for two separate “keys” without having to conceive of it as a key opening a lock or anything like that.
Yeah, I like the ‘public lock/private key’ as a completely nontechnical analogy. It is rendered strange by the fact that each key is actually both a lock/key for the complementary key/lock, but I think all analogies are going to break down at some point - I can’t think of a real-world physical object that behaves in the way a key pair does
I have seen this on Usenet, but it is not encryption that is happening; rather, a poster creates a private-key signature code on the post content, which anyone with the public key can use to verify that the post was in fact made by the (or a) holder of the private key.
It is the mathematical concept of encryption, but the reason for it is not to create private text, but instead to create a verifiable signature of the author.
I haven’t usenetted in forever, but when people did that, they’d have the plaintext first, and then the encrypted text after.
So, not traditional encryption, from a definition of encryption being used to create secret text.
To nit upon this, what usually came after was an encrypted hashcode of the plaintext, not the encrypted plaintext itself. This was done for two reasons:
The hash code has a fixed length, so the signature would have a fixed length.
The hash code is effectively random characters, erasing the correlations in the plaintext upon which a lot of classical cryptographic attacks are based. I don’t think that there is any evidence that any of those attacks would be effective against RSA or the like, but it is relatively cheap extra protection. (I’ve seen the advice to compress any file you are going to encrypt first, for basically the same reason).
That’s called a cryptographic signature. It works like this:
The message is hashed, or turned into a fixed-size block of data using a complex algorithm such that it’s effectively impossible to craft an input which will come out to a known output. The best you can do is guess-and-check, and that kind of thing would take longer than the Sun will last.
The hash is encrypted using the sender’s private key. The encrypted version is turned into text using a known algorithm (Base64, for example) and appended to the message. This is called ASCII-armoring and is done to keep it from being mangled.
The recipient can receive the message, hash it, turn the ASCII-armored encrypted hash back into the same data that came out of the encryption, grab the sender’s public key, and use it to decrypt the encrypted hash. If the decrypted hash is the same as the hash the recipient generated, the recipient knows the message was sent by someone with access to the sender’s private key and that the message wasn’t changed in transit.
The point is to prevent forgery and modification, so you have a good reason to think you’re communicating with who you think you’re communicating with and that you’re getting the messages they intend to send you.
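The steps above can be sketched with SHA-256 and the same kind of toy RSA pair (tiny primes 61 and 53 for illustration; reducing the hash mod n is a toy shortcut, not how real schemes like PSS pad the hash):

```python
import base64
import hashlib

n, e, d = 3233, 17, 2753  # toy RSA pair (illustrative tiny primes)

def sign(message: bytes) -> str:
    h = int(hashlib.sha256(message).hexdigest(), 16) % n  # toy: real schemes pad
    sig = pow(h, d, n)  # "encrypt" the hash with the private key
    return base64.b64encode(str(sig).encode()).decode()   # ASCII-armor it

def verify(message: bytes, armored: str) -> bool:
    sig = int(base64.b64decode(armored))
    h = int(hashlib.sha256(message).hexdigest(), 16) % n
    return pow(sig, e, n) == h  # decrypt with the public key and compare

msg = b"I really did write this"
armor = sign(msg)
print(verify(msg, armor))                    # True
print(verify(b"I never wrote this", armor))  # almost certainly False
                                             # (the tiny toy modulus can collide)
```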
The “public key” is like “I hand you an open padlock”. You lock the box full of goodies you are sending me - ie. encode with the public key.
But just because you have the padlock does not mean you have the key to open it.
I have the padlock key (the “private key”). I can open the box by unlocking the padlock, and nobody else can.
(Ignoring the fuss about being able to analyze a padlock to find the shape of key it accepts… the digital equivalent of that could take the life of the universe.)
If I understand Perfect Forward Secrecy: you send me a box secured with a padlock, containing a new padlock and key (for me). I open it and send you another padlock to use for your next set of messages, and in that box I also include a key for the padlocks I’ll use to send to you. You open my sent items with the key I sent you, and use the padlock to send me things, which I open with the key you just sent. We throw those keys and padlocks away when the conversation exchange is over.
A short signature hash can’t be decrypted back to the original text - all you can do is verify that the full text does indeed match the signature - in practical terms, it’s more like the utility of a checksum than encryption of a message that someone can later decrypt.
This does make it possible (although difficult) to find hash collisions - that is, another piece of plain text that hashes to the same output as the first - but in addition to being difficult (basically bruteforce trial and error), it’s most likely that the plain text you would find that hashes to the same result would be complete gibberish.
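How hard that search is scales steeply with hash length. A birthday-style search on a deliberately truncated hash shows the idea (truncating SHA-256 to 16 bits is purely for demonstration; the full 256-bit output is what makes real collisions infeasible):

```python
import hashlib

# Birthday-style collision search on a 16-bit truncation of SHA-256.
# With only 65536 possible outputs, a collision appears after a few
# hundred tries on average; at the full 256 bits the same search is hopeless.
def truncated(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:4]  # first 16 bits

seen = {}
i = 0
while True:
    msg = f"message {i}".encode()
    t = truncated(msg)
    if t in seen:
        print(f"collision after {i + 1} tries: {seen[t]!r} and {msg!r}")
        break
    seen[t] = msg
    i += 1
```

Note the colliding inputs found this way are unrelated strings, which matches the point above: a random collision is almost never a meaningful forgery.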
Practical example of finding two non-gibberish docs with the same (SHA-1) hash: https://shattered.io/
This is an enhanced version of a birthday attack though, so it is not quite “find a document that has this hash”. And also there are much better hash algorithms than SHA-1 to use.