There is a bit of a disconnect in the above between the CPU as a central processing unit and the big honking die that holds a large fraction of a modern computer’s implementation.
An 80386 was implemented with 275,000 transistors. There was nothing but a single pipelined engine running instructions; everything else was implemented in other chips. Cache was off-die, for those systems that actually had cache. Even back then there were discussions about potential back doors or exploits. Exploits that might take advantage of timing issues, for instance, leading to incorrect execution of an instruction, something that could be parlayed into a privilege escalation.
That required the attacker to be running on the machine, albeit in unprivileged mode.
A modern computer is more and more a system on a chip. The majority of the computers in our lives already are. The PC on our desk is, in some ways, more the dinosaur.
The number of fully functional processors inside a modern computer is way more than most realise. Every device is typically controlled internally by processors of significant capacity, and a modern OS spends most of its time chatting with those devices at a very high level. Something as innocuous as a NIC likely contains a number of processors: one managing the low-level interface, right down to signal processing on the wire; another running a protocol engine for Ethernet; and, in the more capable parts, a good fraction of TCP/IP. There is space and capacity there to place a Trojan of near arbitrary wickedness.
Modern SoCs bring this entire subsystem onto the CPU die, which brings it under the control of the CPU designer, although such subsystems may be an older design, just reworked for the latest design rules.
The risks lie not in the gates doing the work but in the firmware. One would assume robust integrity checks are employed to prevent the most egregious malware injection. Modifying on-chip firmware is always much easier than trying to modify gate logic, but hopefully also easier to detect.
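For concreteness, a firmware acceptance check tends to boil down to something like the sketch below: compute a digest over the proposed image and compare it, in constant time, against a reference value the attacker cannot rewrite. All the names here are hypothetical placeholders for this sketch, not any vendor’s actual API, and a production check would rest on a cryptographic hash and usually a signature chain rather than a bare digest compare.

```c
/* Minimal sketch of a firmware acceptance check, assuming the expected
 * digest is provisioned somewhere the attacker cannot rewrite (fuses,
 * write-protected flash).  All names are hypothetical placeholders, not
 * any vendor's API. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define DIGEST_LEN 32  /* e.g. a SHA-256-sized digest */

/* Constant-time compare, so timing does not reveal how many leading
 * bytes of a forged digest happened to match. */
static bool digest_equal(const uint8_t a[DIGEST_LEN],
                         const uint8_t b[DIGEST_LEN])
{
    uint8_t diff = 0;
    for (size_t i = 0; i < DIGEST_LEN; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* 'digest' stands in for the real hash routine baked into the boot ROM;
 * 'expected' stands in for the provisioned reference value. */
bool firmware_ok(const uint8_t *image, size_t image_len,
                 void (*digest)(const uint8_t *, size_t,
                                uint8_t out[DIGEST_LEN]),
                 const uint8_t expected[DIGEST_LEN])
{
    uint8_t computed[DIGEST_LEN];
    digest(image, image_len, computed);
    return digest_equal(computed, expected);
}
```

The skeleton itself is trivial; the interesting attack surface is the digest routine and the provenance of the expected value, which is exactly where the next point comes in.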
But nothing is perfect. From the point of view of malware, the now-defunct Underhanded C Contest was a fabulous example of how really clever coding could inject bad behaviour into code that was, on the surface, blameless: code that would pass inspection and usually work correctly, enough to pass a test suite, yet still be evil enough for nefarious purposes.
Imagine a code integrity check that always produces the correct answer except when presented with a specially crafted input stream. That input just happens to trigger a flaw, resulting from (say) a bad pointer cast that drops a bit, and from there the check accidentally outputs a hash derived from only a tiny fragment of the input. That allows an executable image to be crafted that always hashes to a desired value, despite its actual contents. This is another example of the difficulty of maintaining a chain of trust. Multiple implementations built with different tool chains are a start; sadly, a lot of code out there doesn’t manage even this.
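As a purely hypothetical sketch of that kind of flaw (using a narrowing cast on the length rather than a pointer cast, which is easier to show briefly), the routine below looks like an ordinary checksum and behaves correctly for every normally sized input, but an image whose declared length overflows the careless int conversion is hashed over only its first few bytes, which the attacker is free to tune. It is not taken from any real integrity checker.

```c
/* Hypothetical, Underhanded-C-style checksum, not from any real checker.
 * The bug is the innocent-looking narrowing cast: for any image shorter
 * than 2 GiB this is a perfectly normal FNV-1a hash, but on common
 * two's-complement implementations an image of, say, 0x100000010 bytes
 * has its length truncated to 16, so only the first 16 bytes determine
 * the digest and everything after them is ignored. */
#include <stdint.h>

uint32_t image_hash(const uint8_t *buf, uint64_t len)
{
    uint32_t h = 2166136261u;   /* FNV-1a offset basis */
    int n = (int)len;           /* looks like routine narrowing; silently drops the high bits */
    for (int i = 0; i < n; i++) {
        h ^= buf[i];
        h *= 16777619u;         /* FNV-1a prime */
    }
    return h;
}
```

Spotting that in review means noticing that len is 64-bit while the loop counter is not, which is precisely the kind of detail the Underhanded C entries were built around. A second, independently written implementation would disagree on such an image immediately, which is the value of the multiple-tool-chain point above.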
The XZ Utils back door was an example of a highly sophisticated attack, one that leveraged a range of techniques, from social engineering to unexpected interactions within the OS, to potentially deliver a devastating security threat across the planet.
A compromised NIC would be close to a nightmare scenario.