Right, a backdoor is when a programmer (or other person with the authority to do so) implements all the security, but then also deliberately implements a way around the security, usually for that person’s own benefit (licit or illicit). For instance, your username and password will let you access your account information, but the sysadmin might also have a different username and password that will also allow access to the same information.
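To make that concrete, here's a contrived Python sketch of what a backdoor looks like in code (the usernames and credentials are made up for illustration): the normal check is implemented properly, but a second, hard-coded credential quietly bypasses it.

[code]
import hmac

# Toy stand-in for real credential storage; values are invented for illustration.
USER_DB = {"alice": "correct horse battery staple"}

def login(username: str, password: str) -> bool:
    # The "front door": the ordinary credential check users know about.
    expected = USER_DB.get(username)
    if expected is not None and hmac.compare_digest(password, expected):
        return True
    # The backdoor: a hard-coded credential the developer quietly left in.
    if username == "maintenance" and password == "letmein-2007":
        return True
    return False
[/code]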
Douglas Hofstadter’s Gödel, Escher, Bach extrapolates from Gödel’s incompleteness theorem to say that any system of more than primitive complexity is going to have things it (or you) can do (i.e., “true statements”) that can’t be determined or derived from application of its own rules; and that, similarly, any system of sufficient complexity will be capable of allowing the construction of an instruction that will shut it down or destroy it. He uses the example of a very high-fidelity record player: a record could produce a sound that, when played, would destroy the record player itself.
Basically everything is hackable because it can’t be made otherwise. A sufficiently complex toolkit will provide a tool user with the ability to destroy the toolkit or bypass any restrictions / protections / etc that the toolkit comes with.
My views on the relevance of Gödel’s incompleteness theorem to the security topics of this thread are exactly the same as my views on the relevance of the Halting Problem to the security topics of this thread (after all, Gödel’s incompleteness theorem and the Halting Problem are just two different framings of the same phenomenon).
I’d say that a backdoor is illicit by definition. A privileged administrator account isn’t a backdoor; it’s just a normal part of system administration. A secret administrator account, left in by someone for use by a third party or by that person after they’ve left the organization, would be a backdoor.
True enough :). I’d suggest that most of these kinds of proofs have the same “problem”; they imply an observer that cares about the answer and can test equivalency, etc. How much complexity are we allowed to put into the observer? A counter with a halt detector doesn’t seem too onerous, but it is still a computation of sorts.
One could imagine that the counter is put into some part of the machine which is inaccessible by the program running on it (no writing to its internal state, and its program is not enumerated with the other programs). Of course, you can imagine the same thing for Turing’s g(i) composite function. The halting problem proof still fails in that case because g(i) is outside the set of our enumerated programs and thus immune to the diagonalization argument.
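For anyone who hasn't seen the diagonalization step written out, here's a rough Python sketch (the names are mine, and the “decider” is of course hypothetical): given any candidate halting decider, you can construct a program it must answer incorrectly about.

[code]
def make_paradox(halts):
    """Given any candidate decider halts(f) -> bool, build a program it gets wrong."""
    def paradox():
        if halts(paradox):   # if the decider claims paradox halts...
            while True:      # ...then loop forever
                pass
        # ...otherwise halt immediately
    return paradox

# Any concrete decider is defeated. E.g. one that always answers "halts":
always_yes = lambda f: True
p = make_paradox(always_yes)
# Calling p() would now loop forever, so always_yes was wrong about p.

# As above: this only produces a contradiction if paradox is itself one of the
# enumerated programs the decider is obliged to answer for.
[/code]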
Half-joking, but it’s a legitimate approach when doing math and CS proofs. And anyway, 2[sup]2[sup]40[/sup][/sup]+1 is still a lot less than infinity. For what it’s worth, I have written useful programs that needed only a few bytes of state (on machines with only 16 bytes of RAM).
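That finite-memory loophole is easy to sketch (this is just the pigeonhole argument, with names I've invented): a machine whose whole configuration fits in b bits has at most 2^b distinct states, so if it runs longer than that without halting it must have repeated a state and will loop forever.

[code]
def halts_on_finite_machine(step, initial_state, state_bits):
    """Decide halting for a machine whose entire configuration fits in state_bits bits.
    step(state) returns the next state, or None when the machine halts."""
    bound = 2 ** state_bits          # number of distinct configurations
    state = initial_state
    for _ in range(bound + 1):
        if state is None:
            return True              # reached a halting configuration
        state = step(state)
    return False                     # ran past every distinct state: it loops forever

# Toy machine with 8 bits of state: counts down from 200 and halts at 0.
countdown = lambda s: None if s == 0 else s - 1
assert halts_on_finite_machine(countdown, 200, 8)
[/code]

Of course the detector needs a counter bigger than the machine it's inspecting, which is the “it is still a computation of sorts” caveat above.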
Also note that, just because you know a program will eventually halt, doesn’t mean that it’ll halt quickly, and certainly doesn’t tell you what the output will be when it does.
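A throwaway illustration of that last point:

[code]
def slow_but_terminating():
    """Provably halts (the loop has a finite bound), but not on any human timescale."""
    n = 0
    while n < 2 ** 128:
        n += 1
    return n   # you know this line is eventually reached; good luck waiting for it
[/code]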
Microprocessor companies are actually the largest employers of experts in formal methods, and some of the biggest funders of formal methods research, precisely because of the FDIV bug. In fact, the FPU designs of all Intel microprocessors are now proved correct in HOL Light before being manufactured. John Harrison, the author of HOL Light, works for Intel and heads their formal methods department, for example, and has written extensively about formally proving floating-point operations correct. Centaur Technology, AMD, ARM, Rockwell Collins and Imagination Technologies all engage in similar activities, to differing extents.
Because the systems weren’t designed with proper security (e.g. end-to-end strong encryption) from the beginning and have been improved in this regard only in patchwork bits and pieces. This was partly the result of inertia (the very early stages of what became the Internet were developed in a closed environment where everybody pretty much knew and trusted everybody else) and partly the result of politics (governments, particularly the government of the US, where most of the breakout from “academic/military niche” to “public Internet” occurred, opposed the development of truly secure networks; they didn’t manage to completely keep a lid on it, but they did manage to prevent routine deployment from reaching the point where Grandma’s e-mail is secure even if Grandma thinks “encryption” is the name of a zombie flick).
Hell, I once was the guy putting 3270 (IBM mainframe terminals) on the desks of clerical workers and handing out user manuals.
More than a few promptly wrote their IDs and passwords on the flyleaf of the manual, which was kept right next to the terminal.
The only security with that system was the fact that the entire system came down from 19:00 to 06:00 for batch processing.
60 Minutes did a spot on “Computer Security” where they got access by getting a mole hired to empty the office trash cans. Just pick up the documentation, and there is the ID and password.
Compared to the web’s ability to “hack” 90% of commercial systems, I’d prefer the risk of a system which could be accessed only from a hard-wired terminal during office hours.
For the same reason that all prisons can be escaped from. There are specified ways for people, such as guards, to enter and exit. Same with systems: people have to work on them, so there are ways in. Trick people into giving you the keys and turning their heads, and voilà!
If you want to make something secure, it needs to have no way in and be throwaway. Like one-time pads in cryptography.
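A one-time pad really is as simple as it sounds; here's a minimal Python sketch (the message and pad handling are obviously toy-grade): XOR the message with a truly random pad of the same length, use the pad once, then destroy it.

[code]
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data against a pad of at least equal length; encryption and decryption are the same operation."""
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # truly random, used exactly once, then destroyed
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message
[/code]

The security rests entirely on the pad never leaking and never being reused, which is exactly the “no way in and throwaway” property.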
I’m not sure if that’s more or less stupid than the old distribute malware-ridden USB sticks in the parking lot so that people will plug them into their work computer trick.
I think Schneier is a smart guy but that he’s wrong here. AutoRun (now disabled anyway) is only part of the problem. Name your malware natalie_portman_nude.jpg.exe and you’ll still get people clicking on it (that’s why email programs block executables wholesale now, even when embedded in zip files).
That’s not how software signing works. I work for a company that provides software and services for PKI implementations. The root key is generated during a key signing ceremony. The key is generated on a hardware security module (HSM). There is no practical way to extract the key from the HSM. The key to access the HSM is split into several pieces and stored on PIN-protected smart cards distributed to some number of trusted individuals. The HSM is turned off and locked in a safe at a secure site.
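The “split into several pieces” part can be as simple as an XOR split, where every share is needed to reconstruct the key. This is only a rough sketch of the idea, with names I've made up; real ceremonies typically use k-of-n threshold schemes inside the HSM, and the key never exists in the clear like this.

[code]
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """n-of-n XOR split: every share is required to reconstruct the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to get the original key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

root_key = secrets.token_bytes(32)
shares = split_key(root_key, 5)        # one share per smart card holder
assert recover_key(shares) == root_key
[/code]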
The root key signs another key to create what’s typically known as an “Issuing CA (certificate authority)”. The issuing CA creates certificates issued to the servers providing the updates and also (hopefully) certificates for the cars themselves so that mutual authentication can take place. The issuing CA does have to be online, but again the key will be on an HSM that requires several card holders to access and the servers will be in very secure data centres with several layers of physical access controls.
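For anyone curious what that two-tier hierarchy looks like in code, here's a bare-bones sketch using the Python cryptography package (the names, key sizes and validity periods are placeholders; in a real deployment the private keys live on the HSMs and never touch application code like this):

[code]
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_name(common_name):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])

# Offline root CA: self-signed, long-lived, allowed to sign one level of sub-CAs.
root_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
root_name = make_name("Example Root CA")
root_cert = (
    x509.CertificateBuilder()
    .subject_name(root_name)
    .issuer_name(root_name)                       # issuer == subject: self-signed
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=20 * 365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=1), critical=True)
    .sign(root_key, hashes.SHA256())
)

# Online issuing CA: its certificate is signed by the root, and it in turn
# issues certificates to the update servers (and, ideally, the cars).
issuing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
issuing_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("Example Issuing CA"))
    .issuer_name(root_name)                       # chained to the root
    .public_key(issuing_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=10 * 365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .sign(root_key, hashes.SHA256())              # the root key signs it, then goes back in the safe
)
[/code]

With the root certificate baked into the cars, each end can validate the other's chain back to the root, which is the mutual authentication mentioned above.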