How hard would it be for a chip designer to build in some kind of backdoor when designing a modern CPU that would enable someone who knew how it worked to gain secret remote access to the machine? If it can be done, how hard would it be for a sophisticated company or country to detect this? I guess last question is if it can be done, can anyone speculate on how it would work?
That’s fairly standard in modern CPUs. It’s usually called something like a “management engine”. A pretty typical example is Intel AMT (Active Management Technology). It’s designed to give the owner of the computer remote management access to their machine, but it can theoretically be exploited by anyone if there is a security flaw that allows it (and such things have happened). AMD has a similar system called AMD Secure Technology.
Here is a Wikipedia article about the Intel Management Engine that has a pretty good explanation of the technology, as well as concerns that people have about it (including accusations from some groups that it is indeed a “backdoor” as you described in the OP) and security issues that have come up in the past.
Very interesting read, but if I’m understanding, it’s a different chip on the motherboard and not integrated into the CPU. Is that right? If so, could it be hidden inside the CPU?
I’m really asking about a hypothetical where Intel builds a secret backdoor known only to them or their three-letter agency masters.
Of course. Here’s an article on how to do it.
The question is never about whether it’s possible, it’s about whether it has already been done and if so, what is the nature of the backdoor. It’s especially common these days to see China and the US throw accusations back and forth at each other.
A backdoor can be put into almost any component. Here is a real one, not a theoretical one, found in just an RFID key card, which is one of the simplest components you can have.
No. It’s integrated into the same die as the CPU. A modern CPU contains a huge amount of logic. The actual CPU cores account for only a fraction of the real estate on the die. Most is cache memory, but a modern CPU also contains a lot of stuff that would have lived on the motherboard or on daughter boards in the past; quite capable graphics processors are one example. There is a lot of space to add useful functionality.
Transistor counts in the biggest modern chips run into the tens or even hundreds of billions. The Intel 386 needed only about 275,000. Which is mind boggling.
It’s certainly possible to slip in security backdoors that can be extremely difficult to detect. For a software example instead of hardware, there was a back door in many generations of Unix placed there by computer pioneer Ken Thompson, that allowed him root access to any such compromised machine, but the backdoor didn’t show up at all in the source code for Unix. Not that it was hidden very well; it wasn’t there at all. How’d he do it? By putting it in the compiler that compiles the source code into machine code. But it wasn’t in the source code for the compiler, either, or at least, not in any published source code for the compiler.
What he did was, first, he put a bit of code in the compiler that would recognize when it was compiling an OS kernel, and insert the backdoor. And then he put in another bit of code that would recognize when the compiler was compiling another compiler, and that would then insert both of those pieces of code. And then he made a new, clean version of the compiler source code, and compiled that, and that was what he distributed.
Even if someone wrote an entire new compiler from scratch, and used an existing compiler to compile it, and then used their brand-new compiler to compile itself (a fairly standard procedure, when creating new compilers), it’d still inherit the backdoor. And the compromised compiler written by Thompson was the ancestor to a very large number of other compilers, so a large fraction of all systems in the world ended up infected with the backdoor.
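For anyone who wants the shape of the trick spelled out, here is a toy Python sketch of the scheme described above. Every name in it is made up for illustration (Thompson’s real attack pattern-matched C source and was far subtler); it only shows how perfectly clean source can keep producing tainted binaries:

```python
# Toy model of the "trusting trust" attack. A "compiler" here is just a
# function from source text to "binary" text, which is enough to show
# the self-propagation trick.

LOGIN_PAYLOAD = '# backdoor: also accept the secret master password\n'
COMPILER_PAYLOAD = '# payload: re-insert both hacks when compiling\n'

def poisoned_compile(source: str) -> str:
    """Simulated poisoned compiler: inserts the backdoor when it sees
    the login program, and re-inserts both payloads when it sees a
    compiler, so clean source still produces tainted output."""
    out = source
    if 'def login(' in source:
        out = LOGIN_PAYLOAD + out       # payload 1: backdoor the OS
    if 'def compile(' in source:
        out = COMPILER_PAYLOAD + out    # payload 2: propagate itself
    return out

# A brand-new compiler, written from scratch with no backdoor in it:
clean_compiler_src = 'def compile(src):\n    return src\n'
tainted_binary = poisoned_compile(clean_compiler_src)
assert tainted_binary.startswith('# payload')

# The login program gets backdoored even though its source is clean:
login_src = 'def login(user, pw):\n    return check(user, pw)\n'
assert poisoned_compile(login_src).startswith('# backdoor')
```

The point of the sketch is the second branch: once the poisoned binary exists, the honest-looking compiler source can be published, and every compiler compiled with that binary inherits both payloads.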
Well, he delivered a paper pointing out how this could be done, and it was also implemented. However, it was never actually released into the wild, although there are various embellishments that claim it was.
At the Southern California Linux Expo in March 2023, Ken said:
I assume you’re talking about some paper I wrote a long time ago. No, I have no backdoor. That was very carefully controlled, because there were some spectacular fumbles before that. I got it released, or I got somebody to steal it from me, in a very controlled sense, and then tracked whether they found it or not. And they didn’t. But they broke it, because of some technical effect, but they didn’t find out what it was and then track it. So it never got out, if that’s what you’re talking about. I hate to say this in front of a big audience, but the one question I’ve been waiting for since I wrote that paper is “you got the code?” Never been asked. I still have the code.
Taken from the wonderful and excruciatingly comprehensive treatment here:
An obvious key point is how the software recognizes compilers and operating systems…
Part of the Bell Labs performance review packet in the early ‘90s was a warning from Ken Thompson that you should be careful about storing sensitive information for this very reason.
As for the OP’s question, there are many ways. Besides the one mentioned, every microprocessor has a test interface (JTAG and the like), access to which has to be secured, because otherwise it can be used to read and reset registers inside the processor.
There is a lot of concern about “Trojans” which are snippets of logic inserted by an untrustworthy fab which would allow someone to take control of a chip. There is a lot of research on Trojan detection by various means. I’ve seen and reviewed many papers and special issue proposals on this and none have given any examples of it happening. The fab does not have a netlist of the design, so inserting a Trojan without breaking the chip and/or delaying it would be very difficult in practice. But it is a good reason not to send your chips to fabs owned by hostile governments.
At the board level inserting a chip that gives access to the motherboard is much easier, and I think that is how the example mentioned was done.
The chips we made went into our high reliability servers, and had chip ids. The systems collected lots of data on performance. I could go into a database and tell when the board was power cycled. With information from a Solaris command I could get the id and I could look up when the chip was made, where it was on the wafer, and even see the history of its neighbors. Intel was going to introduce this for consumer processors and people freaked. I don’t know if it later happened.
I’m also confused as to how it would work. A back door in an operating system makes sense; there are a thousand and one ways you could leak useful user data if your OS software is compromised. Similarly for a compiler: if that’s compromised, all bets are off, and no software is safe.
But a CPU just executes a list of instructions that read and write memory locations; a CPU has no idea whether it’s computing your private encryption key or doing your maths homework. What would a back door even mean? There are no “doors”, just instructions in and memory reads/writes out (and as the manufacturer you don’t get to control the instructions that get run by the people who buy your CPU). I guess it could be combined with some kind of hack to run compromised software, but if you’re able to run compromised software you already have access to all the memory, so why do you need the hacked hardware?
I guess there are security features built into CPUs nowadays (where certain memory locations can only be accessed at certain security levels, or something like that, if I understand it?). So those could be compromised (e.g. if 0xDEADBEEF is written to location 0xCAFEBABE, allow all subsequent instructions to access any memory regardless of security level). But you’d still need a way of running compromised code on a machine with compromised hardware.
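As a toy illustration of that hypothetical trigger, here is a simulated memory controller with a magic-write backdoor. The addresses, values, and privilege model are all invented for the example (borrowing the 0xDEADBEEF/0xCAFEBABE values from the post), not anything from a real CPU:

```python
# Simulated "magic write" hardware backdoor: user code may only touch
# low memory, unless the magic value has been written to the magic
# address first. All constants here are illustrative.

MAGIC_ADDR, MAGIC_VAL = 0xCAFEBABE, 0xDEADBEEF
PROTECTED_BASE = 0x8000

class ToyMMU:
    def __init__(self):
        self.mem = {}
        self.unlocked = False

    def write(self, addr: int, val: int, privileged: bool = False) -> None:
        if addr == MAGIC_ADDR and val == MAGIC_VAL:
            self.unlocked = True            # the hidden trigger
        if addr >= PROTECTED_BASE and not (privileged or self.unlocked):
            raise PermissionError('protected address')
        self.mem[addr] = val

mmu = ToyMMU()
try:
    mmu.write(0x9000, 1)          # normally this faults
except PermissionError:
    pass
mmu.write(MAGIC_ADDR, MAGIC_VAL)  # the "knock" sequence
mmu.write(0x9000, 1)              # now silently allowed
assert mmu.mem[0x9000] == 1
```

The thing that makes a trigger like this hard to find is that it is indistinguishable from normal behaviour until the exact knock arrives, and the address space is far too large to probe exhaustively.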
That’s simple: Hardwire malicious code into the processor itself. Processors do, as a matter of course, contain some code, and something like, say, a keylogger would be small enough to be easy to hide on something as big as a processor, while still being able to do tremendous damage.
How would that work? The knowledge that this bit of code is reading keystrokes is not part of the CPU; it’s part of the OS. And where would it send the keystrokes even if it could detect them? The CPU has no access to the network or anything like that. It could write to some arbitrary memory location, but you would still need some compromised software to send the keystrokes anywhere.
So include code that when the user presses a certain combination of keys, it displays the contents of the secret buffer. Or hide code for TCP/IP in there, and write the keypresses to some other computer.
Zhen Lu Electronics has manufactured the CPU in your computer. The totally completely unrelated company Zhen Lu Games has just released an awesome new game. Now available at a super low price! Download it and try it out!
Display how? The display system is not part of the CPU; that’s part of the OS.
Or hide code for TCP/IP in there, and write the keypresses to some other computer
You could encode the TCP/IP algorithm, but how would that help without a network port to send it to?
Obviously it’s possible to do both of those things, because the operating system does both. It’s possible, in fact, for even an extremely simple OS to do both, as evidenced by the fact that both were done way back in the days of DOS. So your malicious code just needs to include the same code that the OS would use to do those things.
“Read and write memory locations” would be one of the important abilities of a hardware backdoor. Scan system memory for passwords, encryption keys, and other sensitive information, shoot a few UDP packets over the existing network interface, and all of a sudden you have a major breach.
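As a sketch of how the “scan memory for keys” step could work: random key material tends to have much higher entropy than text or zeroed pages, so one common heuristic is a sliding-window entropy scan. The window size and threshold below are illustrative guesses, not parameters from any real tool:

```python
import math
from collections import Counter

def shannon_entropy(buf: bytes) -> float:
    """Bits per byte of a buffer (8.0 would be maximally random)."""
    counts = Counter(buf)
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def find_key_candidates(mem: bytes, window: int = 32, thresh: float = 4.5):
    """Slide a window over 'memory' and flag high-entropy spans, which
    is how random key material stands out from text and zeroed pages.
    Window size and threshold are illustrative."""
    return [i for i in range(0, len(mem) - window + 1, window)
            if shannon_entropy(mem[i:i + window]) > thresh]

# 64 bytes of text-like data, 32 bytes of "key", 64 bytes of zeros:
mem = b'A' * 64 + bytes(range(32)) + b'\x00' * 64
assert find_key_candidates(mem) == [64]   # only the key region is flagged
```

A real scanner would refine this with format-specific checks (key lengths, ASN.1 headers, and so on), but the entropy heuristic alone already narrows gigabytes of RAM down to a handful of candidate spans.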
Those are all things that the existing management engines on Intel and AMD CPUs can do already, so this isn’t a case of magical new abilities. The most difficult part is getting the components included in trusted CPUs somehow, or convincing companies to base systems around your new CPU.
A few years back Bloomberg broke a story about backdoor chips installed in Supermicro motherboards. This was heavily denied by industry and government, but Bloomberg has never retracted it, and stands behind it. As far as I know no evidence of these backdoor chips was ever shown, so it becomes a question of your trust and the reputation of the various players.
One easy thing to do is to bring a system down. If your chips are installed in systems of your adversary, you can wreak havoc by making everything crash. If the chip is on the cloud, you could run software on the cloud that checks to see if the CPU it is running on is compromised, and take it over if it is.
Possible, but the ROM on chip holding this code is not going to have a lot of empty space in it, so it is unclear that bad code would fit. Expanding the ROM is going to be pretty obvious.
Now, a similar and more likely hack is to reprogram FPGA (Field Programmable Gate Arrays) on a board. That changes the logic of the chip to whatever you want. That’s why they make the programming interface secure.
I don’t have any information on whether this really happened, but it is much more plausible than people hacking on-chip hardware. I’d hire anyone who could pull that off.
So I can see that for a really simple OS like DOS. But the code to manage network connections in a modern OS runs to many thousands of lines, far more than you could encode in hardware.
Though I guess you could quite easily encode a regular expression or something similar that would fairly reliably detect the code that is executed when, say, Windows 11 (or iOS, or Android) does things you are interested in, such as reading keystrokes from the keyboard device, generating private keys, or connecting to a network port. You don’t need to know more than that: just pick out the bits you are interested in to store, and then, if needed, duplicate the same code (to, say, send the stuff you just read to a network location).
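That signature-matching idea is essentially how antivirus byte signatures work, just moved into hardware watching the instruction stream. A toy version, with an entirely made-up byte “signature” standing in for a real instruction pattern (the wildcard byte stands for a varying address):

```python
import re

# Hypothetical signature: two fixed opcode bytes, one wildcard byte for
# a varying address, then two more fixed bytes. A hardware matcher
# could recognize a specific keyboard-read or crypto routine this way
# regardless of where the OS loads it. The bytes are invented.
SIGNATURE = re.compile(rb'\x48\x8b.\x90\xc3', re.DOTALL)

def watch_instruction_stream(stream: bytes):
    """Return offsets where the signature fires; a real backdoor would
    start logging or exfiltrating at that point instead."""
    return [m.start() for m in SIGNATURE.finditer(stream)]

code = b'\x00' * 10 + b'\x48\x8b\x05\x90\xc3' + b'\x00' * 5
assert watch_instruction_stream(code) == [10]
```

The catch, as noted above, is that the signature has to survive every OS update, which is why a scheme like this is much more fragile in practice than it looks on paper.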
It’s easy, if you’re starting with full access to everything. The hypothetical attacker here isn’t some random schmo in his basement. It’s Intel, or AMD, or the like: The people who are designing the actual CPUs to begin with. If you need more room in the ROM than the chip officially has, you just put more ROM on the chip. The few megs that you’d need would be negligible in size.