For example, an ATmega328 (like the one an Arduino uses) vs. an Intel 80286. I picked the 286 because they both run at 20 MHz.
They’re designed for very different applications.
The 286 is a CPU, and requires external RAM and ROM. The '328 is an embedded controller, and could stand alone in the right application. As far as speed goes, the 286 is a 16-bit CPU, so it has an edge there, but its efficiency isn't very good, so the '328 would most likely beat it in any benchmark that would fit in its available memory space.
The ATmega328 is a bit of a brain-dead microcontroller by today's standards. It's a RISC pipelined type of architecture, which means it will run circles around the 286's more CISC-ish style architecture with respect to integer math. On the other hand, it's also a tiny 8-bit integer processor. Like most microcontrollers, it will be a lot faster than a typical 286 in a controller-type application, but the microcontroller tends to perform poorly in more general-purpose computer type applications.
Something like a 16-bit PIC would probably be a better comparison since it natively handles 16-bit integers. Again, it will run circles around a 286 for integer math, but its limited memory space, lack of more complex built-in functions, and the lack of an available floating point processor will severely limit its general-purpose functionality.
RISC was just advertised as being more efficient in terms of transistors, and hence power, for each level of pipelining. Modern 80x86 CPUs are pipelined, at the expense of huge power and transistor costs.
The ATmega328 is rated at 1 MIPS per MHz, basically one instruction per cycle. It's fully pipelined… when doing 8-bit maths.
It will go really slow at 32-bit maths; it's not a signal processing chip!
“RISC” was a movement to make the compiler figure out the complexities of the instruction stream instead of wasting transistors on it. Intel (and HP) tried to do that again with Itanium which wasn’t RISC but tried to follow the same principle (let the compiler worry about the hard stuff.) It’s pretty telling that most of the classical “RISC” architectures are dead. Or, at least pining for the fjords. Alpha is dead, SPARC might as well be dead, MIPS can kiss off, nobody but Nintendo cares about PowerPC. ARM soldiers on not because it’s RISC but because its designers know how to optimize their cores to do the jobs they need to do.
“CISC” won (meaning x86 won) not because it was clean or elegant, but because the winning implementation (x86) beat the shit out of everything else. Because it “wasted” transistors on optimizing the instruction stream at runtime instead of compile time.
I wish I understood this stuff
If you have specific questions, ask away.
“pining for the fjords” is a reference to a sketch from Monty Python’s Flying Circus.
Hope this helps