Microcode and FPGAs

Microcode and FPGAs both imply a changeable microprocessor and instruction set. When would I use one over the other? What are the differences in how things are designed using one or the other?

I dabbled in a little Verilog (hardware description language for FPGAs) for a class in college, and I understand that microcode uses some sort of “microassembly” language instead. I suppose this language difference is bigger than the differences between, say, Python and Perl? What other differences are there in practice? And how does changing the logic of these devices “in the field” differ from designing “once and for all” an ASIC or CPU at Intel or somewhere?

I should also ask: Where is microcode used these days? Wikipedia makes it sound like it is still widely used, but the only place I’ve heard of it was in Tracy Kidder’s “The Soul of a New Machine”, from 1981.

In a bunch of recent Intel processors, for one thing. (The biggest share of it goes to the fancier floating-point operations.)

(The link points to a standalone microcode update to run under Linux. Some BIOS upgrades also contain microcode updates. Many processors have not needed updates and are not listed.)

For big, modern CPUs, the common instructions are hardwired for speed. Earlier CPUs were just too weak to handle instructions that were “complex” at the time but are easily doable given how large chips can be now. But for low-end CPUs, microcode is still quite practical.

Former microprogrammer here. Yes, Intel machines use microcode (at least they did when I worked there). I suppose AMD machines do also. Sparcs do not. I don’t know if IBM processors do or not - microcode used to be very prevalent within IBM. Something like microcode is used in some processors for very specific purposes, like programming built-in self-test (BIST) for on-chip memories.

I’m not exactly sure what the OP is requesting. There are companies which sell configurable embedded processors, like Tensilica. They come with their own design tools. I don’t know whether they are microcode-driven, but I’ve never seen any indication that they are.

For those who might not be familiar with these concepts, FPGAs consist of a memory which holds interconnection information between standard blocks of logic. Reloading the memory changes the connections and thus the function implemented. FPGAs today also include more complex standard blocks. Microprogrammed machines have a hard-wired and relatively simple microinstruction set. You create a higher-level instruction set by writing sequences of microinstructions to implement it.

Any supported microprogrammable machine would probably come with a microassembler, but they are easy to write in any case since you have fewer instructions and don’t have to deal with things like linking. I’ve written a few myself with more primitive tools. There were a few microcode compilers at one time. I did one for my dissertation, in 1980, that used object-oriented techniques to allow porting of the object code, and someone did a C-like language that was table-driven and even had a company selling it. But that was long ago, and the old ACM/IEEE Microprogramming Workshop got reborn as a microarchitecture conference. When I graduated I successfully predicted that the field was dying and got into something else, though I still dabbled a bit for a while.

If the OP could give more specifics on what he wants to do, that might help.

Remember they were building a minicomputer in the pre-microprocessor days. A lot of them were microprogrammed - I did some research on one from Lockheed of all people.
Loved that book. It was very exciting to see my obscure specialty on the front page of the NY Times Book Review!

You seem to have a fuzzy understanding of what microcode is.

All you need to make a very primitive processor is a couple of registers and some control logic. You can implement your control logic using a simple ROM and a register to hold the “state” of the machine. In this sense, a processor is just a glorified state machine.

Taking a very simple example, most processors go through four basic states, called instruction fetch, instruction decode, execute, and write back. Instruction fetch simply fetches the next instruction from memory. Decode fetches all of the necessary data for the instruction, so if the instruction is “add A and B together” it goes out to memory and grabs the values of A and B. Execute is where the data gets fed through the ALU, and write back stores the result of the ALU somewhere.

To create a four-state machine using a ROM and a register, your ROM would have the following values:

address 0: 1
address 1: 2
address 2: 3
address 3: 0

When it starts with an address of 0, the output of the ROM is 1, which gets latched into the state register on the next clock cycle, changing the state to 1. This state is then used as the next address into the ROM, and the value at address 1 is 2. So on the next clock cycle, the state changes to 2, the output of the ROM changes to 3, and on the next clock the value 3 is latched into the state. At this point, the output of the ROM is 0, so on the next clock it goes back to the initial state.

This is a bit of a stupid state machine, since it basically emulates a simple counter circuit, but the ROM code is our “microcode”, and if we want to change the way the state machine operates all we need to do is reprogram the ROM.
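Since the OP mentioned dabbling in Verilog, here is roughly what that ROM-plus-state-register machine looks like written that way. This is just an illustrative sketch; the module and signal names are mine.

```
// Sketch of the ROM + state-register machine described above.
// The 4-entry ROM is the "microcode": address = current state,
// data = next state. All names are made up for illustration.
module microcode_counter (
    input  wire       clk,
    input  wire       reset,   // synchronous reset back to state 0
    output reg  [1:0] state
);
    reg [1:0] rom [0:3];
    initial begin
        rom[0] = 2'd1;
        rom[1] = 2'd2;
        rom[2] = 2'd3;
        rom[3] = 2'd0;   // wrap back to the start
    end

    // On each clock, latch the ROM output as the new state.
    always @(posedge clk) begin
        if (reset)
            state <= 2'd0;
        else
            state <= rom[state];
    end
endmodule
```

Change the four values in the ROM and you change the sequencing without touching anything else, which is the whole point.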

In a real processor, the next “state” would be determined not only by the output of the ROM but also possibly by other external signals, such as the value in the instruction register. Back in the old days, dedicated hardware for multiply and divide was too expensive, so when one of those instructions was being executed, it wasn’t done in a single pass through the ALU. Instead, multiplies were done using cycles of adds and shifts (look up Booth’s algorithm) and divides were similarly done using subtracts and shifts. This makes the state machine much more complicated, and the microcode gets similarly much more complex.
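To give a flavor of what those add-and-shift loops were doing, here is a rough Verilog sketch of an iterative multiplier. It’s plain shift-and-add rather than real Booth recoding, and every name in it is invented for illustration; a microcoded machine would step through essentially this loop one microinstruction at a time.

```
// Iterative shift-and-add multiply: the kind of loop a microcoded
// machine would sequence through instead of having a one-cycle multiplier.
// (Plain shift-and-add, not Booth recoding; names are illustrative.)
module shift_add_mult #(parameter W = 8) (
    input  wire           clk,
    input  wire           start,
    input  wire [W-1:0]   a,
    input  wire [W-1:0]   b,
    output reg  [2*W-1:0] product,
    output reg            done
);
    reg [2*W-1:0] multiplicand;
    reg [W-1:0]   multiplier;
    reg [W:0]     count;

    always @(posedge clk) begin
        if (start) begin
            product      <= 0;
            multiplicand <= a;        // zero-extended into 2*W bits
            multiplier   <= b;
            count        <= W;
            done         <= 1'b0;
        end else if (count != 0) begin
            // One step: conditionally add, then shift both operands.
            if (multiplier[0])
                product <= product + multiplicand;
            multiplicand <= multiplicand << 1;
            multiplier   <= multiplier >> 1;
            count        <= count - 1'b1;
        end else begin
            done <= 1'b1;
        end
    end
endmodule
```

Divide works the same way with subtracts and shifts, just testing the sign of the partial remainder instead of the low bit of the multiplier.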

Around the time of the 386 CPU, most processors ditched the microcode model and started using RISC style pipelines, in essence “unrolling the loop” (as they call it). Instead of using just a couple of registers and a microcode ROM to cycle through states, each state was given its own separate piece of hardware. While one instruction was being fetched, another was being decoded, a third was being executed, and a fourth was being written back (actual processor pipelines are often much more complex than this, but you get the idea). Microcode mostly went away, but is still used in some cases, as was mentioned by previous posters.
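A crude way to picture that “unrolling” is that each stage gets its own register, and all the stages work on different instructions in the same clock cycle. A toy Verilog sketch, nothing like a real pipeline; the names and the dummy ALU are mine:

```
// Toy "unrolled" pipeline: several instructions are in flight at once,
// one per stage, instead of one instruction cycling through a state machine.
module toy_pipeline (
    input  wire        clk,
    input  wire [31:0] fetched_instr,   // pretend this comes from instruction memory
    output reg  [31:0] writeback_value
);
    reg [31:0] decode_reg, execute_reg;

    always @(posedge clk) begin
        decode_reg      <= fetched_instr;     // fetch  -> decode
        execute_reg     <= decode_reg;        // decode -> execute
        writeback_value <= execute_reg + 1;   // execute -> write back (dummy "ALU")
    end
endmodule
```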

An FPGA is completely different from microcode. With microcode you typically have fixed hardware with a programmable ROM to control it. In an FPGA, you have programmable interconnects between the gates. When you program an FPGA it kinda looks like it has an instruction set, especially if you are programming it using VHDL, but you are really programming connections, not a ROM (although your “program” will typically end up being stored in a ROM).

If you wanted to be really tricky about it, you could program a processor into an FPGA with microcode in a ROM inside the FPGA. Then you could have microcode inside VHDL code.

I don’t really want to do anything, except satisfy my curiosity.

It sounds like microcode is used to optimize how instructions are executed in general purpose processors, while FPGAs are used to make something like an ASIC, only reprogrammable. Am I right?

FPGAs offer more flexibility, correct? Microprogramming is a way to take a pre-built processor and change the way it executes instructions, while an FPGA is basically a blank slate, with which I can implement anything from a digital clock to a helicopter control system?

I guess what I’m asking is what is the difference between the two? Both seem like ways to modify hardware using code, but I’m not sure why someone would choose one over the other.

This is very true, and the reason I started this thread.

All I knew about microprogramming was that it was a way of “programming the processor” at a low level, which to my mind sounded a lot like an FPGA. You and Voyager have been very helpful in explaining what microcode is, and I’m grateful.

The way I understood it is that microcode basically determines how an instruction is decoded and executed. Many instructions require multiple steps: at the least, you first fetch the instruction, then decode it, then execute it, and finally write back the results. Overlapping those steps is called pipelining, with some CPUs having as many as 20 stages; each instruction takes 20 clock cycles to get through the pipeline, but a new instruction can be fed in on every clock cycle, so it effectively runs at the clock speed. Each instruction requires that different registers and logic blocks be selected (e.g. read register A and register B, add them, and store the result in register C).

You are correct in your view of FPGAs. And actually, the issue of one versus the other never came up, since they didn’t overlap much in time. Microprogramming, which was invented in a paper by Maurice Wilkes in 1951, was used in the '60s and '70s. FPGAs did not come into much use until the late 1980s. Before them there were Programmable Logic Devices (PLDs) which used fuses and were much less flexible. They were used for glue logic on boards.

You really got it when you said FPGAs are used as substitutes for ASICs. They are more expensive, and slower, but they are a lot cheaper to design and don’t have expensive mask costs. Microcode is designed to implement instruction sets, though you can do other things with it.
For the most part microprogramming was only done by computer designers. Burroughs had a line of user-microprogrammable machines (the D-machine and others) in the early 1970s with the hope that they could sell them to people wishing to optimize certain applications. I don’t think they sold many of them.

BTW, here is a nice history of microprogramming.

The real reason microprogramming died was that it became much easier to design with gates only. It was always recognized that hardware was faster. The more expensive and faster machines in the IBM 360 series were gates only; the cheaper low-end versions were microprogrammed.
The 386 was not the end of microprogramming at Intel. The Itanium was microprogrammed, and in fact the x86 instruction set was implemented by generating micro-ops for those instructions and sending them off to the main engine.

Pipelines and other ways of speeding up instruction execution far predate the 386, of course.
There is also a relationship between processor design styles and microprogramming. IBM, where the first RISC machines were designed, was a hotbed of microprogramming, due to its success in the 360 line. Dave Patterson did his dissertation on microprogramming. VLIW has roots in microprogramming as well: Josh Fisher did his dissertation on compacting microcode across basic blocks, which built on work we did on compacting microcode inside basic blocks. VLIW machines are kind of like horizontal microcode in a sense. RISC shares some of the characteristics of vertically microprogrammed machines also.

(Aside for non-architects: Horizontal machines have very wide microcode words, each of which consists of a set of more or less independent microoperations. One mop might select a register, another might select an arithmetic operation, yet another might check condition codes. Vertical machines have a small set of simpler microinstructions that look more like traditional machine language instructions. Compaction is taking a stream of microoperations and figuring out how to fit them into a minimal number of words while respecting data and resource dependencies. It is a very fun NP-hard scheduling problem.)
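For a feel of the difference, here is what decoding a hypothetical horizontal microword might look like in Verilog; the field names and widths are invented purely for illustration:

```
// Hypothetical 16-bit horizontal microword: several independent fields,
// each steering a different piece of the datapath in the same cycle.
// (Field names and widths are invented for illustration.)
module microword_fields (
    input  wire [15:0] microword,
    output wire [3:0]  reg_select,   // which register to read
    output wire [2:0]  alu_op,       // which ALU operation to perform
    output wire [1:0]  cond_check,   // which condition code to test
    output wire [6:0]  next_addr     // address of the next microinstruction
);
    assign reg_select = microword[15:12];
    assign alu_op     = microword[11:9];
    assign cond_check = microword[8:7];
    assign next_addr  = microword[6:0];
endmodule
```

A vertical microword, by contrast, would look more like a single opcode plus operand fields, closer to an ordinary machine instruction.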

This is actually done in x86 CPUs as well, in order to more efficiently process complex (CISC) instructions with a RISC core.

That is my understanding as well, but I didn’t work on x86 processors - just fought for resources with the people who did. :slight_smile: