Years ago, all Unix systems were delivered with a number of password levels for various functions, but the master password that could change all the other passwords was “********” (eight stars). This was handy for the techs sent out to the various sites to debug system software.
(Naturally it was mentioned in the documentation that the sites should change this first thing, but they never did.)
Personally, for sites like the board, I preferred a blank password, which was allowed. I still don’t see the need for a password here at all. To misquote Othello, “He who steals my good name steals trash.”
In a similar vein, all early Pitney Bowes stamp machines had an initial password of “6666”, with severe instructions to the customer to change it first thing.
Naturally, when we went out for servicing that was the first thing to try, as no customer ever changed it.
You see, the problem is you guys are making assumptions about what I’m encrypting and how many times and under what circumstances. Plus the fact that I’m purposefully not telling people exactly what I’m doing anyhow. That’s the second thing they teach you.
There’s all types of random. It’s not random like radio telescope noise, I’ll give you that. But that’s not quite the key point…
Can you provide a cite that cascading encryption decreases the security level, compared to encryption with just the first routine? Because I’ve looked and can’t find it. Plus the fact it doesn’t make much sense to me. Otherwise, by extension, the first thing someone should do in trying to crack a code would be to re-encrypt it. Does that make sense?
VB 2.x uses an MD5 hash. Are you actually saying by re-encrypting the MD5 hash I’ve made it less secure than the MD5 hash alone? At worst, it should be equal security. Can you give me cites as to how that works?
How is it less secure than just using the first one? Pretend that “interception” of the data from one routine to the other is not possible, barring Van Eck phreaking or something.
I would if it applied to my situation, which it does not. The first thing they teach you is “know the situation before developing the solution,” or words to that effect. With respect to you and him, and casting no flames or criticism your way as to your accomplishments and intelligence - you don’t know my application, you don’t know my situation, you don’t know my procedures or processes, and you haven’t shown me any evidence that re-encrypting a hashed or encrypted piece of data somehow makes it less secure in all situations. Thus, while your advice is sound from a general standpoint, it does not apply to my situation. If you knew my situation, I feel you would agree that I have in fact made things safer, not less safe.
Besides, this is off-topic from the point of my post, which was to explain the difference between VB3 and VB2 password hashing/encryption (the two words often used synonymously). But I still would be honestly curious to see cites as to just how re-encrypting an encrypted piece of data makes it less secure than the original encrypted data itself. Maybe it’s one of those things like relativity, where I can work all the equations but still don’t “instinctively” understand it?
When two random processes interact, they very frequently do so in such a way as to produce a distinctly nonuniform distribution of outcomes, in the shape of the familiar bell curve. Take dice, for example. Rolling a single die produces a nice uniform spread of values. However, if you roll two dice and add their values, you end up with a peaked distribution of totals, with 7 being far more likely than, say, 2. Cascading encryption algorithms can have a similarly disastrous result.
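If you want to see the numbers behind that dice point, here’s a quick toy simulation (Python; just an illustration, nothing to do with any actual crypto):

from collections import Counter
import random

random.seed(0)
# Roll two dice 100,000 times and tally the totals
totals = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000))
for total in sorted(totals):
    print(total, totals[total])
# 7 comes up roughly six times as often as 2 or 12 (6/36 of rolls vs. 1/36)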
QED, with respect, I understand and agree completely with what you mean about the dice, but how is that extensible to encryption?
Is someone going to be able to show me that if I PGP encrypt a file, and put it inside a PGP disk with another, different, passphrase, I am somehow now less secure? Or if I take a vBulletin password which is MD5 hashed, and PGP it, it is now easier to crack? Once again, with polite respect, that is not borne out by any of my encryption references, so I’m looking for someone to educate me and show proof that that is the case. I may not understand the math, not having gone much past PDEs and integral transforms, but at least I can get the gist of it I hope…
The math on these things is simple. I don’t know about the coding though.
From the math perspective, if your combined algorithm produces fewer possible encrypted results than just the first algorithm, then it is less secure.
For example, take the rule that your 3-digit PIN will be (A) squared and passed that way through your firewall to the next computer. It appears as a six-digit number, but there are really only 1,000 possible values, not 1,000,000.
Now say you take that (A) answer and (B) “fold” it by adding the two halves together.
(This sounds stupid, but since it is easy to do in code, it is popular.)
So now it looks like a 3-digit answer.
But in reality it is less unique than the original PIN. While there still appear to be 1,000 possible answers, there are actually fewer, since some duplicates arise out of the folding.
So the (A) squared and (B) folded answer is less unique (i.e., more easily guessed by trial-and-error) than the original PIN.
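You can check the duplication directly. Here’s a rough sketch of the squared-then-folded rule (I’m guessing the folding step as: zero-pad the square to six digits and add the two 3-digit halves):

pins = range(1000)                       # every possible 3-digit PIN, 000-999

def square_and_fold(pin):
    squared = f"{pin * pin:06d}"                  # (A) square, zero-padded to six digits
    return int(squared[:3]) + int(squared[3:])    # (B) fold: add the two halves

distinct = len({square_and_fold(p) for p in pins})
print(distinct)   # fewer than 1000 distinct results; e.g. PINs 5 and 32 both fold to 25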
It’s not cascading the algorithms that’s the problem, it’s cascading the code. The lengths a great crypto programmer goes to in order to ensure that no critical data is left lying around where it can be sniffed out are phenomenal. Someone changing that code (or not using it right) destroys all that, and the security of the original algorithm is now irrelevant.
The all-time classic example of a bad “encrypting twice” design is running something through DES twice with different keys in order to (supposedly) effectively double the key length. Crypto experts laugh themselves silly at anyone proposing this; the meet-in-the-middle attack means the second pass adds only about one bit of effective key strength. (Extended-key DES uses whole other tricks, but the effective increase in key security is still not proportional to key length.)
Plus, you have to admit that incorrectly using the term “one time pad” is just flat out going to send off major alarm bells in other people’s minds.
So I don’t know the details of how you are doing things. But I do know that you aren’t using the term “one time pad” right. From that point on, I pretty much don’t need to know details. Sure that’s blunt, but this is security we’re talking about.
If this were about energy content of coal, I’d trust you completely. (If I’m remembering the right name switch.)
(Now to just end on a lighter note, here’s a tale from the “old days” in computer crypto.)
Diffie and Hellman, the guys who invented the idea of public key crypto, had a really hard time coming up with an actual method. They kept trying various schemes based on knapsack or TSP problems. Eventually Adleman got really good at destroying their proposals. Sometimes he could break a scheme in just days. The problem? Diffie and Hellman weren’t really crypto guys. But then again, neither is Adleman. He’s really just a number theory algorithm guy. Just because some people are big names in the field doesn’t ensure that their work can be relied on. You have to be better than top notch.
I don’t know if that answers my question; please bear with me. Say, just for argument’s sake, I take a pre-existing MD5 hash delivered to me, and PGP encrypt it. I deliver both encoded data sets to a person who doesn’t know what they are. Is it now more, less, or the same level of security as the MD5 hash alone?
Say instead of PGP I just XOR it on top of its hash. And then I deliver both encoded data sets to the same person. Is it now more, less, or the same level of security?
This is what I admit I’m not understanding here. I can’t see how either case becomes less secure. The additional encryption is done on the hashes offline, on a PC not networked.
I appreciate the distinction, but my application and situation are not of the type where it should be a worry. My post was primarily about the difference between VB2 and VB3 encryption, information I confirmed directly on their developers’ forum. I threw out an aside about my other database project and wasn’t intending to go into the details of it.
The thing is, I’m not trying for military grade (or commercial grade) crypto. I’m trying to simply obfuscate an MD5 hash to make it much harder to do brute-force cracking. To require an extra step or steps. I’m not protecting State secrets, just accounts on a message board which are only MD5’d anyhow to begin with.
I hope that makes more sense to you, what I am doing. Anything I do which is serious is done using PGP disks on PCs on special networks. I don’t rely on anything I cobbled together for that stuff.
No, I suspect that the second thing they teach you is rather the opposite. Namely that security via obscurity doesn’t work. MD5 is a perfectly good and secure algorithm, it’s been looked over by a million eyeballs and all the bugs and kinks have been worked out. It’s doubtful that anything you could do to it could be any more secure.
Una, I think that the idea behind the critical comments of ftg and others is that by adding another level of cryptography on top of a good algorithm you may inadvertently limit the possible number of keys. The effect may be that someone can crack the code by brute force with a different key than the original one, which still has the same effect. He would not know the actual key you use, but he’d be able to decrypt the coded message nonetheless.
I tried to think of a good example, but couldn’t, because they seem to involve information loss. It would have to do with adding dependencies on elements of the key. A (too) simplistic analogy would be a kid who locks the door, then turns the key in the reverse direction because he thinks two turns lock it even better. Whether anything like this would happen in your case would depend on the algorithms used and the manner in which they were combined.
FTR from what you mentioned I do not immediately see anything like this happening in your case. So you would be fine. But then, it’s been a while since I studied cryptography.
So, in answer to my thinking points above, is your contention that the PGP’d MD5 hash is no more secure than the MD5 hash alone? Please don’t take offense; I have no pride of ownership in my methods, but I want to learn if you have something specific to suggest or demonstrate to me. With respect again, though, I feel you’re not addressing my question or providing any cite to support your contention. I am not saying with authority that I am better off doing it my way, as I have no authority to appeal to on this subject.
My problem is that I’m asking a specific question and I’m not hearing anything but generalities as answers. The concept that “security through obscurity” doesn’t work is a very valid one applied to open-source encryption algorithms versus closed-source algorithms, and you’ll hear no argument from me on that point. But I do not feel that it is generally extensible to all situations, especially the ones I outlined above, so I don’t agree with it in this context. In addition, I thought my crypto text said that the saying applies only to relying solely on security through obscurity. A hidden PGP disk file is no less secure because it is behind a firewall, for example. Or named “1996CorporateChallengeStandings.doc”.
Tusculan, your point on keys is interesting and I see what you are saying. But I confess I don’t understand how that can work operationally speaking. It seems to me that for that to be true, one of the first things a cryptanalyst would do when presented with a code would be to re-encrypt it. Maybe that’s what the experts do, but I’m unaware of that being the case from any of my crypto references.
I hate continuing this thread hijack here, but I only intended to comment on VB2 vs. VB3 passwords. I wish the posts could be split off into a GQ. I don’t think it’s a GD because this is a situation where there should be a factual answer. If I can get online tonight, how about I post a GQ on this subject and anyone who wants to take the time to educate me as to why my assumption is invalid can do so, and I can learn what to do to make things better. But honestly, I really just can’t find authoritative proof that my assumption is invalid.
A GQ thread seems like a good idea. FWIW, I googled a bit on cascading cryptographic algorithms and got this page, apparently from an academic book: Applied Cryptography, Second Edition: Protocols, Algorithms, and Source Code in C by Bruce Schneier.
This supports both sides of the argument. Generally speaking, there may be interactions that lessen security. In your specific case, if I understood correctly what you are doing, you are not compromising security.
Well, the reason we have been replying with nothing but generalities so far is that you haven’t provided any details about your scheme. From the bits that you have let through, I will admit that your scheme at worst makes it no less secure than plain MD5, but there’s no indication that it is making it more secure. IANA crypto expert, so I am not professionally capable of mounting any substantive criticism of your crypto scheme; however, it seems that the only way you can weaken crypto security is via a 2-stage encryption/1-stage decryption process, where you might inadvertently reduce the search space for the key. Under a 2-stage encryption/2-stage decryption process, I think there is no problem with making it less secure. But it seems to me that at best, you’ve gone to a whole lot of bother over nothing when there are commercial schemes which do a far better job than what you are doing.
I would also question exactly what data you happen to store in your recipe and wine DB that malicious hackers so want to get hold of. MD5 is secure enough for corporate DBs and mission-critical servers that hold many millions of dollars’ worth of sensitive data, yet somehow you imagine that malicious hackers are going to try and probe your recipe DB?
Tusculan, I can’t think of any examples that pertain to crypto specifically, but I remember from a number theory lecture how certain pseudo-random number algorithms can break down if you re-seed them with their own output. That is, seeding the RNG with a number generated by the same RNG, over and over, leads it to produce degenerate, highly predictable output.
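I don’t remember which generator that lecture used, but the classic middle-square method shows the same failure mode: each output becomes the next seed, and the sequence quickly locks into a tiny cycle. A toy sketch (the seed and digit count here are chosen just for illustration):

def middle_square(seed, digits=4):
    # Von Neumann's middle-square method: square the seed, keep the middle digits,
    # and use that as the next seed, i.e. the generator re-seeds itself from its own output.
    while True:
        seed = (seed * seed) // 10 ** (digits // 2) % 10 ** digits
        yield seed

gen = middle_square(2100)
print([next(gen) for _ in range(12)])
# -> [4100, 8100, 6100, 2100, 4100, 8100, 6100, 2100, ...]  stuck in a 4-value cycle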
But I did ask a specific question above regarding MD5 and PGP and XOR (assuming the data interceptor does not know the encryption method).
Simple question, rephrased: I deliver to you two disks. One has an MD5 hash on it. One has an MD5 hash that has also been PGP encrypted. Which disk can one say with the greater level of confidence is more secure? Which one is more likely to be broken with a brute-force cracker?
There’s no need for sarcasm here. Of course I’m not worried about that type of data. That type of data is plaintext. Consider for a minute what else might go into a database, if you were a programmer or DBA. There are other things that do need protecting, such as Member information and Admin and Moderator settings for the DB, passwords, etc. Especially since, as I’m sure you know, people have the habit of re-using passwords at different places. And I figured why not take the MD5 hashes, after they are generated, and do something to make them different so they resist brute-force MD5 cracking? Which isn’t all that hard to do, albeit time-consuming.
So which of the two disks is more secure? Is it even possible to say? As far as I can tell, Tusculan has the answer above, in that it’s “at least as hard to break as any of its component ciphers”.
Well, it depends on your verification algorithm, as I said previously. In case you’re not aware of exactly how MD5 verification works, it hashes the typed-in password using the same MD5 algorithm and then compares the result to the stored hash; this way, the unencrypted password is never stored anywhere on the server. If you tried verifying the 2-step encryption routine the same way, then you run the risk of reducing the search space due to unintended interactions. However, if you leave them as independent stages, then at worst you cannot make it less secure.
For example, if your password-checking algorithm was:
Get password
Encrypt using MD5
Encrypt using PGP
Check against stored password
IF they are equal, allow access
Then you run the chance of decreasing security, since not only do some plaintext passwords hash into the same MD5, some MD5 checksums may encrypt into the same PGP string. Thus, you multiply your chances of false positives.
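By contrast, here is a bare-bones sketch of the independent-stages approach, where the second layer is undone before comparison, so it cannot introduce any new collisions. XOR with a fixed pad stands in for the second layer (PGP or XOR in your scheme), and every name and key below is made up purely for illustration:

import hashlib
from itertools import cycle

SECOND_LAYER_KEY = b"hypothetical-pad"      # stand-in for the real second-layer key

def md5_hex(password):
    return hashlib.md5(password.encode()).hexdigest().encode()

def xor_layer(data, key=SECOND_LAYER_KEY):
    # XOR with a repeating key; applying it twice returns the original bytes
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Storage: hash first, then apply the second layer, and store only the result.
stored = xor_layer(md5_hex("hunter2"))

# Verification: undo the second layer, then compare MD5 against MD5.
def check(candidate):
    return md5_hex(candidate) == xor_layer(stored)

print(check("hunter2"), check("wrong password"))   # True False

Verifying this way, a brute-force guess still has to produce the right MD5; the extra layer only changes what is actually sitting in the database.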