Why is everything hackable?

I don’t know if I’ll get the details straight, but this is a story from “Surely You’re Joking, Mr. Feynman!” by Richard Feynman. He was at Los Alamos working on the Manhattan Project and, to amuse himself, he would break into safes. He found that the secretaries all had the combinations and they all wrote them down somewhere near their desks. He snooped, found the combinations, and left joke messages inside the safes for the generals to find.

The generals were outraged, and when it was finally revealed how he got in, he expected the brass to order that the combos be kept more closely guarded, forbid anyone from writing down the combinations, or take other common security measures. Nope, the secretaries were instructed (or on their own decided) to keep Feynman from coming into their offices.

This is why you can’t make things unhackable: human nature.

All of the above, but there is one other thing that was only hinted at. My (unfortunately late) brother was a systems analyst. He hated C for one reason: lack of bounds checking. That means you might reserve, say, 256 bytes to hold some user input. If the user enters 512 bytes and you didn’t bother to check that it was not over 256, the machine will just write the last 256 bytes over whatever follows the buffer, with unpredictable results. When my brother had to program in C he would write his own memory allocation code that included a bounds check.
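
A minimal sketch of the kind of thing he was guarding against (the function names are just illustrative, not his actual code): the first version trusts the caller, the second refuses to write past the end of the buffer.

#include <stddef.h>
#include <string.h>

/* Unsafe: strcpy writes as many bytes as the input contains, even if
   that overruns the destination buffer. */
void copy_unchecked(char *dest, const char *input) {
  strcpy(dest, input);
}

/* Bounds-checked: never write more than dest_size bytes, and always
   NUL-terminate, so overlong input is truncated instead of smashing
   whatever happens to sit after the buffer. */
void copy_checked(char *dest, size_t dest_size, const char *input) {
  if (dest_size == 0) {
    return;
  }
  strncpy(dest, input, dest_size - 1);
  dest[dest_size - 1] = '\0';
}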

Why didn’t the compiler build this in? My conjecture: when C was created, programmers had to save both time and space. That’s much less of a consideration today, and certainly not compared to security. They could, at least, have made it optional. C is used all over Microsoft. Technically C++ or some other variation, but bounds checking is still not built in.

It can be proved that no program can be proved absolutely secure. Unless you can’t use it at all.

Britain is trying to figure out how to get into encrypted messaging apps, and might go so far as to ban anything that won’t build them a back door. Cite.
Jeez. Seems like the UK never met a “Big Brother” concept that it didn’t just embrace, but rather began to French kiss passionately.

I suppose you’re referring to the Halting Problem (or Rice’s Theorem more generally, or some such thing), but I wouldn’t use this phrasing… Surely some programs can be proven absolutely secure in some reasonable proof systems. E.g., I can confidently state that a program whose code consists solely of “Output ‘Hello world!’; Exit;”, run in the standard context, ignores all input, outputs “Hello world!”, and then exits, without doing anything malicious.

The conclusion is correct, the details are wrong.
What Feynman found was that the combination locks were only accurate to within about five numbers either way (if the combination had the number 15, anything between 10 and 20 would work), so each wheel effectively had only about 10 distinct positions. The theoretical 100x100x100 = 1 million combinations were actually only about 10x10x10 = 1,000. Worse, people wouldn’t spin the lock when they closed the cabinet, which meant that the last number was “free,” leaving only about 100 combinations to try. And he would fiddle with a lock while it was open, get the first two numbers, and then he could open it whenever he liked. So, instead of sending out a memo instructing everyone to spin the lock (which would have made picking it 10x as hard), they just issued a directive to keep him away from the safes…

Agree. I didn’t mean to suggest that it was a bug in SQL itself or any implementation of SQL. It’s a bug introduced by user-interface programmers who fail to firewall the back-end logic from the unstructured input of human users.
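
To make that concrete, here’s a hypothetical sketch using the SQLite C API (the table and column names are invented for illustration). The vulnerable version pastes the user’s text straight into the SQL, so input like x' OR '1'='1 rewrites the query; the safer version keeps the query structure fixed and hands the input over as bound data.

#include <stdio.h>
#include <sqlite3.h>

/* Vulnerable: user input becomes part of the SQL text, so the input can
   change the meaning of the query itself. */
void lookup_unsafe(sqlite3 *db, const char *user_input) {
  char sql[512];
  snprintf(sql, sizeof sql,
           "SELECT balance FROM accounts WHERE owner = '%s';", user_input);
  sqlite3_exec(db, sql, NULL, NULL, NULL);
}

/* Safer: the query structure is fixed, and the input is bound as data only. */
void lookup_safe(sqlite3 *db, const char *user_input) {
  sqlite3_stmt *stmt;
  const char *sql = "SELECT balance FROM accounts WHERE owner = ?;";
  if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
    sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
      printf("balance: %d\n", sqlite3_column_int(stmt, 0));
    }
    sqlite3_finalize(stmt);
  }
}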

IIRC, Feynman also found that, when the safe was opened, you could look at, or even just feel, the inside portion of the lock, and determine the first two numbers of the combination from that. He also developed a technique for entering the combinations that allowed for rapid sequential testing: Instead of testing, say, “5-5-5 5-5-10 5-5-15”, and so on, you could test “5-5-(5,10,15,…)”. And of course he also was pretty good at guessing what sorts of numbers people would select as combinations: Significant dates, or physical or mathematical constants for the scientists, and so on.

He also told of how once, a professional locksmith was hired to get into the safe of one of the high brass. He asked how the professional was able to do it so quickly, without any of Feynman’s prep work. “Oh, that’s easy. The factory default for these safes is 0-50-0, and a lot of folks never change it, so I tried that first.”

I’d guess you’re vaguely remembering the export restrictions the US used to impose on encryption. I think at one point, anything over 40-bit encryption was illegal to export. (And there were similar restrictions on hardware components.) Companies could and did develop encryption software that was better, but it meant they could only sell within the US, reducing their potential market. (I was using 1024-bit PGP encryption in the early 90s.)

Among the reasons the US gave for restricting the technology was to keep things easy for our agents to spy on foreign powers. Of course, what actually happened was that foreign companies gained a market advantage over American ones. Eventually the US government gave up that battle and relaxed the restrictions.

The bit size of encryption doesn’t need to scale like that. Each additional bit of an encryption key doubles the length of time a naive brute-force attack needs to break it, so it doesn’t take that many bits to make brute-force attacks completely impractical.
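
A quick back-of-the-envelope illustration (assuming, hypothetically, an attacker who can test a billion keys per second):

#include <stdio.h>
#include <math.h>

int main(void) {
  /* Hypothetical attacker speed: one billion key guesses per second. */
  const double keys_per_second = 1e9;
  const double seconds_per_year = 3.15e7;
  const int key_bits[] = {40, 56, 64, 128, 256};

  for (size_t i = 0; i < sizeof key_bits / sizeof key_bits[0]; i++) {
    /* On average a brute-force search tries half the keyspace: 2^(bits-1). */
    double years = pow(2.0, key_bits[i] - 1) / keys_per_second / seconds_per_year;
    printf("%3d-bit key: about %.1e years on average\n", key_bits[i], years);
  }
  return 0;
}

At that rate a 40-bit key falls in minutes, while a 128-bit key already takes something like 10[sup]21[/sup] years, vastly longer than the age of the universe, and every extra bit doubles it again.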

The real problems that encryption algorithms face are potential inherent mathematical weaknesses that lets an attacker gain information on multiple key bits at a time. That’s why the NSA has many brilliant mathematicians in its employ. (Of course, less esoteric weaknesses like implementation flaws and social engineering are more immediate problems.)

A lot of security stuff is antithetical to performance and other concerns as well. Take a common programming idiom for searching for something:

(In C:)



#include <stdbool.h>

/* Return true as soon as num is found in the array. */
bool exists(int num, int a[], int a_len) {
  for (int i = 0; i < a_len; i++) {
    if (num == a[i]) {  /* compare against the element, not the loop index */
      return true;
    }
  }
  return false;
}


If we find the number, escape early and save some computation. Simple, right?

This is vulnerable to a timing attack. Basically, if you return early, the function took less time to execute. A smart person can exploit that and test whether secret things exist by measuring how long, on average, the program takes to return for certain queries.
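
For contrast, here’s a minimal sketch of the constant-time alternative (my own illustration, not from any particular library): it always scans the whole array, so the running time no longer leaks where, or whether, the value was found.

#include <stdbool.h>

bool exists_constant_time(int num, int a[], int a_len) {
  int found = 0;
  for (int i = 0; i < a_len; i++) {
    /* Accumulate the result instead of returning early. */
    found |= (a[i] == num);
  }
  return found != 0;
}

Even this isn’t bulletproof: a clever compiler can, in principle, transform the loop in ways that reintroduce data-dependent timing, which is exactly the optimization problem mentioned below.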

I understand that there have been known issues with compiler optimizations too, and that for C it’s standard to compile security libraries with -O0 (no optimization) so that the code executes exactly as written.

Another issue is some really, really perfect-storm weirdness. For instance, the secure random number generator on Linux, /dev/random, populates itself from hardware noise. In practice this is basically truly random, but people have found that it can be predictable in some cases when the entropy pool is empty, such as right after a machine (say, a network device) has booted.

Thanks for the correction, yeah I messed up the details pretty bad. :smack:

Yeah, I was trying to explain to my mother-in-law once that there wasn’t any need to turn the computer off when not in use to prevent hackers from stealing her personal info. I had 3 (ultimately unsuccessful) points:

  1. Nobody knows who the f**k you are, and even if you do have a few bucks laying around, that’s not common knowledge, and certainly not common enough for someone to hack you.

  2. You have a firewall on your router, and security features on your computer. These are set up correctly and will almost certainly take you out of the “low hanging fruit” category that would get you incorporated into a botnet or something, and if nobody knows who you are, they’re not going to dig any deeper.

  3. (the most obvious one to me) If someone does decide they want to rob you of your bank info, the easiest way is to kick in the big glass windows on the side of the front door, walk in, and steal the entire damn computer, or go fishing through your file cabinets, where you keep EVERYTHING you ever got, including whataburger receipts from 1988. This is more certain and less difficult than trying to break into a well-defended PC via a DSL connection.

Well, his screen name is vomit_comet, after all, and that’s exactly what we got in this thread.

Do we know that they didn’t do both?

This is patently false.

The first major theorem in computer science was the undecidability of the Halting Problem: you cannot write a program that determines whether another arbitrary program halts on a given input. If you can’t even tell whether it goes into an infinite loop, you certainly can’t tell whether it has other faults that can lead to exploits.

The key thing to remember is that the effort to check the correctness of a program grows incredibly fast with the length of the program. Stacks and stacks of exponentials don’t even come close to describing it.

Machines can’t do it; humans can do some things well, but not all things, and the complexity issue is always going to be there. So there’s an intrinsic problem.

And that’s for simple input-start-stop-output programs. Ones that run in parallel, continuously do something, etc., make it a whole lot worse.

Very, very few programmers are actually good at sequential programming. Throw in concurrency and the percentage drops to virtually nothing.

Well, it was Feynman’s story, and that’s what he said.
We may never know what really happened.

This is the key insight. If you don’t allow users to access your system, then the system is perfectly secure. This is why hackers don’t hack the computers in today’s cars: there’s no way to access the system. Isolated embedded systems are very hard or impossible to hack. But the problem comes when you want to be able to communicate with those systems. If an authorized user can input information into the system, then the system is hackable, because no scheme for differentiating between authorized and unauthorized users can be perfect.

Yes, it’s nice to be able to easily update the firmware on your car’s engine. Except how does the engine computer tell the difference between an authorized user and a hacker? Whatever method you use can be circumvented. Even with today’s embedded systems, if you can pop open the hood and pull out circuits and replace them with new stuff, then you can do whatever you like. The reason nobody steals cars, pops open the hood, fiddles with the electronics, and then returns the car to the owner is that there’s absolutely no sane reason to do that. People steal cars for rational reasons, and it’s a lot of effort to fiddle with a car’s electronics. But if it were as easy as driving down the street and wirelessly uploading your “Carmageddon” virus to every car you pass, then a hacker might do it just to be a dick.

Feynman also knew his audience. IIRC, a lot of the combinations were something like 31-41-59 or 27-18-28.

That is why I said “in practice”. The vast majority of systems are unhackable for all practical purposes. For example, public key encryption is, in theory, hackable: you just need enough time to try all the combinations. Very rarely does that happen.

But just because public key encryption can’t be solved by brute force computing, that doesn’t mean that your encrypted emails can’t be hacked. The obvious solution is to install a keylogger on the sender’s computer, or the recipient’s computer, or to put a video camera looking over your shoulder, or to take you into a dark room and beat you with rubber hoses until you tell them your password. In practice the encryption part of the encryption scheme is unhackable, but the encryption is just one part of sending a message from one party to another. Just knowing the timestamps of all the times UserA sent UserB an encrypted message is potentially valuable.

The public key was just an example of something that is hackable in theory but in practice isn’t. You’ve taken it further by introducing key loggers. In theory a key logger would “crack” email protected by cryptography, but how easy is it to surreptitiously install a key logger? In the vast majority of cases it’s really hard, to the point of impracticality.

I guess what I’m trying to say is that in theory everything is hackable (in part because you can’t prove that they’re not). In practice most systems are not.

This is one of those statements that is literally true but absolutely false when applied to reality.

Others have already gone over the specifics, but in essence, stuff is broken due to human error (whether social engineering, poorly written code, or whatever). Codes are not cracked simply because there is only a finite number of combinations.

For instance, AES is a common code and 256-bit keys are a common choice. No one knows how to break AES meaningfully faster than just trying every key, and so cracking it would require 2[sup]255[/sup] attempts on average. That is a finite number and so it can be broken in theory. But it never has been, since it would take far longer than the age of the universe to finish. And the code is convenient because the keys are short and (relatively) easy to exchange securely.

We can contrast this with one-time pads (OTPs). OTPs have a key length equal to the message length and are absolutely, totally, mathematically secure. They can never be broken no matter how much computing power you throw at them. Doesn’t this sound better than AES, since AES can still be theoretically broken?

The answer is no, because for OTPs to be secure, you must distribute the keys in advance. They must also never, ever be reused, and must be totally destroyed after use. They must be generated by a truly random process. Violate any of these constraints and the system is broken. And it’s so easy to violate the constraints that OTPs are almost never used in practice, because the real-world security ends up so low.
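
For the curious, the mechanics themselves are almost trivially simple, which is part of why all of the security lives in the key handling rather than the math. A minimal sketch (my own illustration; the function name is made up):

#include <stddef.h>

/* One-time pad: XOR each message byte with the corresponding key byte.
   Running the same function again with the same key decrypts, since
   XORing twice restores the original. Everything depends on the key
   being truly random, as long as the message, used exactly once, and
   destroyed afterward; none of that is enforced here. */
void otp_xor(const unsigned char *key, const unsigned char *in,
             unsigned char *out, size_t len) {
  for (size_t i = 0; i < len; i++) {
    out[i] = in[i] ^ key[i];
  }
}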

In short, it’s a mistake to look at the finiteness of your codes as a measure of their security. The human element is vastly more important, and often conflicts with the mathematical strength of the codes.