Can you prove a decryption without revealing the key?

Say Alice uses Bob’s public key to encrypt a message for Bob’s eyes only, and sends it off to Bob over an insecure network. Eve, and in fact a whole bunch of Eves are watching and each bags a copy of the encrypted message. They all know Alice sent it to Bob, but they don’t know the secret contents of the message.

One of the Eves, let’s call her Mallory, has, through other methods, obtained Bob’s secret key, and she successfully decrypts the message intended for him. For her own reasons, she decides to tell some or all of the other Eves what the message was, but she still wants to keep Bob’s key to herself.

How would the other Eves know whether or not Mallory was lying about the contents of the message? Is there a way for her to prove that a particular ciphertext decrypted to a particular plaintext? I know she could easily prove that she has Bob’s secret key, by using it to sign other plaintexts. But she needs to prove what was in this particular message that Alice sent to Bob.

Is this something that can (or can’t) be generally done with commonly available public key cryptosystems? Are there any cryptosystems out there at all that can do it? If nothing currently exists that can do it, could one be designed from the ground up to have this inherently baked in due to the math behind it?

Would the Eves be willing to allow Mallory to demonstrate that it works? That is, an Eve can grab a new copy of the encrypted message and have Mallory decrypt it in front of them.

I can possibly come up with a way for Mallory to prove she has the key without revealing it. But since the message in question is already in plain text, I can’t think of an obvious way to prove that this message came from that encrypted file. I mean, I suppose there’s probably some metadata or something in the file, but that’s all I can come up with.

Would the Eves accept a new copy from Bob of the plaintext file to show that it matches the one they’re asking about? (Does that make sense?)

The plaintext that Mallory claims is equal to the encrypted message can be re-encrypted with Bob’s public key (which is the publicly available process that produced the known ciphertext in the first place), which can be verified by anyone to be the same as the ciphertext.
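That re-encryption check can be sketched with textbook RSA, which is deterministic: the same plaintext and public key always produce the same ciphertext, so anyone can redo Alice’s step and compare. (Toy key sizes for illustration only; real deployments use 2048+ bit keys with randomized padding such as OAEP, which is exactly the wrinkle discussed below.)

```python
# Bob's toy RSA keypair: n = p*q, e public, d private.
p, q = 61, 53
n = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent (17 * 2753 ≡ 1 mod 3120)

def encrypt(m: int, e: int, n: int) -> int:
    return pow(m, e, n)   # c = m^e mod n

def decrypt(c: int, d: int, n: int) -> int:
    return pow(c, d, n)   # m = c^d mod n

# Alice encrypts; the Eves all intercept the ciphertext.
ciphertext = encrypt(65, e, n)

# Mallory declares the plaintext; any Eve can check it using
# only Bob's PUBLIC key -- no private key needed to verify.
claimed = 65
assert encrypt(claimed, e, n) == ciphertext
```

The verification step uses nothing secret, which is what makes the proof convincing to the other Eves.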

That’s more elegant than what I was thinking. I came up with one or two ways that Mallory can prove that they have the key, but nothing to prove that the already unlocked message was previously locked with Bob’s public key.

I can think of one potential issue in that your method relies on, at the very least, Mallory not knowing the original contents of the message.

Not if the encryption “salts” the message with some random additional data to obscure patterns (e.g. frequently-used headers).

Cite that that’s actually a thing? Salting is used for hashing passwords. I’ve not heard of it used for public key cryptography.

But even if it does: whatever salt there is comes out as part of the input text. Mallory can just include it in her declared plaintext. Either way it proves she has the private key (or has completely cracked the encryption algorithm and can produce collisions at will).

I’m not sure what you mean by this. Plaintext gets encrypted to ciphertext, which gets decrypted to the same input plaintext. The output that Mallory has is the same input that Alice used. Otherwise encryption would not be very useful.

But if your algorithm grabs 50-100 characters of random salt to obscure the fact that 75% of messages start with “Dear Admiral So-And-So, I hope this message finds you in good health.”, you won’t be able to prove you have the original plaintext by showing the hashes match.

Because when Alice encrypted the message, it stole the ingredients from a random recipe on the Food Network website (this one happened to be for deviled eggs) for the header, and when Bob decrypted it, he stripped them out.

But when Eve decrypts the message she has a bunch of ingredients, then a message to Admiral So-And-So. But she can’t feed ingredients(eggs)+message into the encrypter with Bob’s public key, because it will tack on ingredients for apple pie, then encrypt, so your hash will be for the message ingredients(pie)+ingredients(eggs)+message. And if Eve strips out the ingredients and runs just the plaintext, it will come out ingredients(pie)+message.
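The failure mode being described can be sketched like this, assuming hypothetical software that prepends fresh random “recipe” bytes before a deterministic encryption step (the XOR transform below is a stand-in, not real crypto):

```python
import os

def core_encrypt(data: bytes) -> bytes:
    # Stand-in for a deterministic encryption step (NOT real crypto):
    # a reversible byte transform, so the demo is self-contained.
    return bytes(b ^ 0x5A for b in data)

def encrypt_with_random_header(message: bytes) -> bytes:
    # The hypothetical software prepends fresh random bytes
    # before encrypting, to mask common headers.
    header = os.urandom(16)
    return core_encrypt(header + message)

msg = b"Dear Admiral So-And-So..."
c1 = encrypt_with_random_header(msg)
c2 = encrypt_with_random_header(msg)

# Fresh random padding means re-encrypting the same plaintext almost
# certainly yields a different ciphertext, so naive re-encryption fails:
assert c1 != c2

# But if Mallory declares header+message (padding included), and the
# verifier feeds that through the core step directly, the check works:
header_plus_msg = core_encrypt(c1)   # Mallory decrypts c1
assert core_encrypt(header_plus_msg) == c1
```

The last two lines are the resolution the thread arrives at: include the padding in the declared plaintext and bypass the fresh-padding step when verifying.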

(I think that explanation can be followed by someone who isn’t me?)

Thanks, this is the answer. It’s pretty obvious once you think about it, and yes I can see that this works no matter what salting was done by Alice’s software. The other Eves who need to verify would have to make sure their own software uses the provided salt and not fresh salt.

This is not how public key cryptography works.

“Grab the text of a random recipe and append it to plaintext before encrypting” is a thing you made up.

And even if it were a thing, it would not actually prevent this. When salts are used, the salt is known to all parties. The point of a salt is to make it harder to brute force short plaintexts (like passwords), not to make the plaintext uncertain.

Padding is a real, and important, thing, but if you can decrypt the message you can then reveal what the padding was.

My post was based on my assumption that Mallory needed to prove that the plaintext file they had came from that specific encrypted file.

If the source of the plaintext is irrelevant and the important part is just that it’s correct, then re-encrypting it to check would make sense (and what I said earlier would be moot).

Does that make sense? IOW, my point was that if Mallory had access to the plaintext, they could present it as the now decrypted file, even though they didn’t do anything.

I suspect (perhaps naively) that a system could be designed that way, specifically to defeat the kind of checking you described, if there were a good reason to.

I already mentioned this in my OP, but let me restate it: if Mallory needs to prove she was really able to decrypt using Bob’s secret key, all she has to do is use that key to digitally sign other plaintexts chosen by others who want her to prove this fact.

It is… maybe plausible, but I expect that we’re well into “it’s trivial to design a cryptography system that you personally are unable to break” territory.

The main issue I see with such a system is that it can’t really be random and generally usable. Software can’t do random, it can only do pseudorandom, which means that if you can control the seed, you control the random sequence. And anyone running encryption software on their own hardware can likely figure out a way to control the seed. Generally, software that uses pseudorandom numbers explicitly provides a way to control the seed, because software development is very difficult when you can’t force the software to behave predictably.

But let’s assume that the makers of this software didn’t do that. The seed has to come from somewhere external to the software itself. And all the somewheres are controllable by whoever is root on the computer the software is running on. Commonly used seeds are things like the time, but the time is something the OS supplies, and Mallory can make her OS say it’s whatever time she wants.
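The seed-control point is easy to demonstrate: any two pseudorandom generators started from the same seed emit identical “random” sequences.

```python
import random

# Two generators seeded identically produce identical sequences --
# whoever controls the seed controls every draw.
a = random.Random(1234)
b = random.Random(1234)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# A common naive seed is the current time. If Mallory can make her OS
# report any time she likes, she can replay any seed-derived sequence.
fake_time = 1_700_000_000
replayed = random.Random(fake_time)
again = random.Random(fake_time)
assert replayed.getrandbits(64) == again.getrandbits(64)
```

(This is also why cryptographic software is supposed to draw from an OS entropy source rather than a user-seedable generator; but as the post notes, root on the machine can interfere with those inputs too.)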

But in asymmetrical encryption, the secret key can’t be used to encrypt. Well, it can, but then there would need to be a secret-secret key, and I’m not familiar with any public/private/superprivate encryption algorithms.

Bob’s public key is what’s used for encryption, and everyone already knows that. The answer is what iamthewalrus_3 said in reply, but I’m not sure how feasible it would be to get the exact original text so that the two versions of the encrypted message are identical. E.g., if it’s an email message there may be headers that would need to be included in the plaintext, or it might need to be put into a specific format.

This also presumes that Mallory knows the encryption process - i.e. the algorithm that employs the keys.
And… that the public key is public enough that all the Eves know it.

However, generally, when someone relies on an encryption key they don’t monkey with it (customize the process). E.g., invert every odd byte before encrypting, or after encrypting; or XOR with “The quick brown fox…”; or insert a random byte every 7th byte. This is the sort of thing salting can do too. Or… encrypt twice…

None of these is a viable encoding in themselves, but combined with a full-on encryption process they would slow down anyone who was just being a script-kiddie using off-the-shelf algorithms. But the algorithms for public key encryption (usually) don’t include “add random data or scramble more” - that’s up to the person encrypting or their system.

So the person doing the decryption (Mallory) would have to know or work to decode the process. Just adding Julia Child stuff to the front or back is the easiest version of this sort of obfuscation.

And then, Mallory would have 2 texts:
“This is what the contents decrypt to.”
“Once cleaned up in the following manner, this is what the message says.”

The first should re-encrypt to the core message as intercepted.
The second should be an intelligible bit of intelligence.

I’m not sure whether you are making a point about encryption vs. digitally signing something here, or whether you’re missing the point of what Ponderoid writes, which uses a common feature of asymmetric encryption. To expand on what they wrote: if Mallory takes the Declaration of Independence and digitally signs it with Bob’s private key (which amounts to encrypting it, or a hash of it), then everyone can confirm with Bob’s public key that she has the secret key from the pair.

Or if you are paranoid about Bob having a whole library online of things he’s signed, you send Mallory a personal email that Bob can’t possibly have and she signs that.
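A toy sketch of that challenge-signing idea, using textbook RSA with the same tiny illustration-only numbers as earlier (real signatures use large keys and standardized padding):

```python
import hashlib

# Toy RSA keypair (illustration only).
n, e, d = 3233, 17, 2753

def sign(message: bytes, d: int, n: int) -> int:
    # Hash the message, reduce it to fit the modulus, and "encrypt"
    # the hash with the PRIVATE key -- only the key holder can do this.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int, e: int, n: int) -> bool:
    # Anyone with the PUBLIC key can undo the signature and
    # compare it against the hash they compute themselves.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

challenge = b"a fresh challenge text the Eves just made up"
sig = sign(challenge, d, n)
assert verify(challenge, sig, e, n)
```

Because the Eves choose the challenge themselves, Mallory can’t have precomputed the signature; she must actually hold the private key.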

Many messages starting with the same header was a vulnerability for Enigma, since the encryption of each letter depends only on the settings and on the letters that came before it, so two messages whose plaintext started the same way, with the same settings, would also have ciphertext that started the same way. Modern encryption, however, is much more sophisticated than Enigma, such that if you change even a single punctuation mark in the plaintext, you’ll get ciphertext that looks completely different.

That said, there are a variety of ways that an attacker might get a copy of the plaintext, or of something that they think might be the plaintext, and want to verify it. For instance, maybe Mallory was visiting Alice, and saw the composed-but-not-yet-sent email on her computer screen, and snuck a picture of it. Maybe she saw the email almost finished, with just a couple of words missing at the end, and there’s a small list (say, only a few billion) of plausible ways for the message to end. Maybe the message is expected to be something extremely short, like just “yes” or “no”. To deal with such cases, there might be value in an encrypted messaging program to append random padding to the end of a message before passing it through the encryption algorithm (padding which the receiving program would know to discard after decryption).
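The pad-then-encrypt idea could be framed like this (a sketch of the message layer only; the actual encryption step is out of scope, and the 2-byte length prefix and block size are assumptions for illustration):

```python
import os

BLOCK = 256  # every padded message is this long

def pad(message: bytes) -> bytes:
    # Length prefix, then the message, then random filler up to BLOCK.
    assert len(message) <= BLOCK - 2
    filler = os.urandom(BLOCK - 2 - len(message))
    return len(message).to_bytes(2, "big") + message + filler

def unpad(padded: bytes) -> bytes:
    # The receiving program reads the length prefix and discards the filler.
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

msg = b"yes"   # short answers no longer stand out by length or content
assert len(pad(msg)) == BLOCK
assert unpad(pad(msg)) == msg
```

With this in place, guessing “yes” vs. “no” and re-encrypting no longer works, since the attacker would also have to guess the random filler.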

This is not correct for, say, PGP. There are two keys, public and private, and both keys work both ways.

If Alice wants to send Bob a message only he can read she uses Bob’s public key to encrypt it and Bob uses his private key to decrypt it back to plaintext.

If Alice wants to send Bob a message only he can read and prove to Bob she was the one who sent it, she does the above steps and adds a note, “Hi, I’m really Alice,” and encrypts that using her own private key. Bob receives the message, uses Alice’s public key to decrypt the note, then his private key to decrypt the message. It’s an electronic signature.

In fact, anybody in the world can verify that the message was from Alice by using her public key to decrypt the note but without access to Bob’s private key will not be able to read the message itself.
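The described flow can be sketched with two toy textbook-RSA keypairs (tiny made-up numbers, no padding; real PGP signs a hash and uses hybrid encryption, but the key directions are as described):

```python
# Toy keypairs (illustration only).
bob_n, bob_e, bob_d = 3233, 17, 2753          # Bob's public/private pair
alice_n, alice_e, alice_d = 11413, 3, 7467    # Alice's public/private pair

message = 1234   # secret message, encoded as an integer < bob_n
note = 99        # "Hi, I'm really Alice", encoded as an integer < alice_n

# Alice encrypts the message with Bob's PUBLIC key,
# and "signs" the note with her own PRIVATE key.
c_msg = pow(message, bob_e, bob_n)
c_note = pow(note, alice_d, alice_n)

# Anyone in the world can check the note with Alice's public key...
assert pow(c_note, alice_e, alice_n) == note
# ...but only Bob's private key recovers the message itself.
assert pow(c_msg, bob_d, bob_n) == message
```

This matches the post: verifying the signature needs only public information, while reading the message needs Bob’s private key.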