I owned yield reporting for a major microprocessor before I retired.
And production lines don’t improve by themselves. You can use scan to help diagnose the location of the faults which are the highest runners, as indicated by which vectors fail. That can point to process changes to reduce the defect incidence at that point.
We built systems, so we did that for field returns also. That gets a bit trickier; I have a few papers in that area. Bad enough yield problems can lead to mask changes.
All this gets done during bring-up also, which is where I spent most of my time.
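If it helps to picture it, the “highest runner” part is really just bookkeeping once diagnosis has mapped each failing die to a suspect location. Here’s a toy sketch in C; the fail-log format and the suspect names are invented for illustration, and real ATPG-based diagnosis is far more involved:

```c
/* Toy sketch: rank candidate fault sites by how many failing dies
 * implicate them.  Input lines are "<die_id> <suspect_site>", a format
 * invented for this example; real diagnosis tools work from the ATPG
 * fault dictionary and are far more involved. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SITES 1024

struct site { char name[64]; int hits; };
static struct site sites[MAX_SITES];
static int n_sites;

static void tally(const char *name)
{
    for (int i = 0; i < n_sites; i++)
        if (strcmp(sites[i].name, name) == 0) { sites[i].hits++; return; }
    if (n_sites < MAX_SITES) {
        strncpy(sites[n_sites].name, name, sizeof sites[n_sites].name - 1);
        sites[n_sites++].hits = 1;
    }
}

static int by_hits(const void *a, const void *b)
{
    return ((const struct site *)b)->hits - ((const struct site *)a)->hits;
}

int main(void)
{
    char die[32], suspect[64];

    while (scanf("%31s %63s", die, suspect) == 2)
        tally(suspect);

    qsort(sites, n_sites, sizeof sites[0], by_hits);

    /* The highest runners are the first candidates for a process fix. */
    for (int i = 0; i < n_sites && i < 10; i++)
        printf("%-40s %d\n", sites[i].name, sites[i].hits);
    return 0;
}
```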
And of course large on-chip memories aren’t expected to be perfect. They pretty much all have BIST (built-in self test) to find failing cells, and redundant rows and/or columns which get swapped in after test to repair the memory. Without this there would be a significant yield hit.
You don’t need to do this for register files, but you definitely have to do it for caches.
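Roughly how the repair works, as a sketch in C rather than any particular vendor’s scheme - the row counts, spare count, and remap-table shape are all made up for illustration:

```c
/* Rough sketch of spare-row repair, not any vendor's actual scheme.
 * BIST finds bad rows; each one is steered to a spare, and every
 * later access consults the remap first.  Sizes are made up. */
#include <stdio.h>

#define ROWS       4096
#define SPARE_ROWS 8

static struct { int bad_row; int spare; } repair[SPARE_ROWS];
static int repairs_used;

/* Called for each failing row BIST reports; returns 0 when out of spares
 * (i.e. the memory is unrepairable and the die takes the yield hit). */
static int repair_row(int bad_row)
{
    if (repairs_used >= SPARE_ROWS)
        return 0;
    repair[repairs_used].bad_row = bad_row;
    repair[repairs_used].spare   = ROWS + repairs_used;  /* spares sit past the main array */
    repairs_used++;
    return 1;
}

/* Every access is redirected if its row was repaired. */
static int effective_row(int row)
{
    for (int i = 0; i < repairs_used; i++)
        if (repair[i].bad_row == row)
            return repair[i].spare;
    return row;
}

int main(void)
{
    repair_row(17);                                  /* pretend BIST flagged row 17 */
    printf("row 17 -> %d\n", effective_row(17));     /* remapped to a spare */
    printf("row 18 -> %d\n", effective_row(18));     /* untouched */
    return 0;
}
```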
I learned to code in assembly from that exact same book on my ZX81. I typed in the entire Life program and Draughts program by hand using the machine code because 14-year-old me could not afford an assembler.
I also wrote a bunch of my own programs and “assembled” them by hand using the table at the back of the book.
I haven’t seen that book since 1982. Now I am going to waste an evening reading it again. /sigh
I just finished re-reading ‘The Soul of a New Machine’ by Tracy Kidser.
It is from 1980 and is about a team of engineers at Data General building a new Eclipse computer.
It really gets into the details, in a pretty understandable way, about the building up of the hardware, how the microcode interacts with that, and everything up to the software.
It’s obviously quite old but gives a good insight into what is going on down at the lower levels in a computer. It won the ’82 Pulitzer Prize, and also covers some interesting management techniques.
Tracy Kidder.
Yes, absolutely a fantastic book. I’ve read it at least twice myself. Personally, I’d have been happier if it had been about the development of a new machine at DEC, particularly the VAX. Data General had a bit of a shady reputation, perhaps coloured in part by my affiliation with DEC, but it was nevertheless true that when Ed de Castro left DEC to start Data General, he took a great deal of proprietary information with him, if only in his brain, and he was rumoured also to have taken design documents. I suspect the reason Kidder wrote about DG and the Eclipse was that that was where he could get the most insightful information; DEC at that time was a dominant force in the minicomputer industry and extremely protective of their proprietary information.
Anyway, yes, a fantastic book. I highly recommend it. It won both the National Book Award and the Pulitzer Prize.
Yeah, I guess Data General’s bad-boy pirate image and the likelihood of seeing a less corporate approach were what attracted Kidder to shadow them for the development.
Would it be true to say what is going on in a modern Intel chip or an AIM processor has not changed that much from what was happening at the hardware/microcode interface back then, just a lot more of it and a lot faster?
You can’t imagine the thrill I got from seeing microprogramming on the front page of the NY Times Book Review when I was in the middle of working on my dissertation.
I ran a panel on how people microprogrammed in the real world at a few Microprogramming Workshops, and I invited one of these guys on it at the workshop held on Cape Cod. A few of them came. They were the ones looking under the caps of Coke bottles for the prizes. (Me, too.)
Kidder didn’t really understand the technology, but he understood the people. Great book.
I assume you mean ARM processor, which isn’t microcoded. I’m not sure if x86 chips are microprogrammed any more, but SPARCs weren’t.
And remember they were building a system, not a chip, and had to deal with nasty board design issues. Today there is a lot more Electronic Design Automation (EDA) than there was back then. And a lot more verification. And even microprocessor designers write at the RTL level and use logic synthesis, none of which was available back then.
Back in 1982 you could print the netlist for an entire chip - even an entire board - without using up a box of paper. Not any longer. People didn’t verify through simulation back then; that’s the way you do it today.
Thanks Voyager.
Yes, the big difference between ‘revisions’ of the current chips is the microcode version number: I don’t know if there is any other difference.
What goes into microcode changes between one generation and the next: the big difference between the 80186 and the 8086 was that a heap of microcode was moved into silicon. It must also differ between different models of the same generation, but I don’t know whether the user-visible machine-code implementations differ, or only the parallel stuff like the internally implemented system-management operating system.
Moving microcode into silicon (actually, designing what the microcode implements in hardware) is a standard way of increasing speed at the cost of design complexity and area. But that’s what Moore’s Law is for. As I mentioned above, System 360 did that also.
However, microcode revisions are a lot cheaper than new tape outs!
Just what counts as microcode might be up for debate in an x86. The NetBurst architecture was known to translate x86 opcodes into an internal RISC instruction stream (where it was rumoured the architecture was very similar to MIPS) and was implemented with a very deep pipeline. That RISC core may not have been microcoded in the usual meaning, but the opcode translation system could be argued to fulfil a similar role, even though it arguably isn’t microcode.
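For example, a read-modify-write x86 instruction gets cracked into a handful of RISC-like micro-ops. A purely illustrative sketch in C - the micro-op names and the three-way split are invented, not Intel’s actual internal encoding:

```c
/* Purely illustrative: cracking one x86 read-modify-write instruction,
 * add [mem], eax, into RISC-like micro-ops.  The micro-op names and the
 * three-way split are invented, not Intel's actual internal encoding. */
#include <stdio.h>

enum uop { UOP_LOAD, UOP_ADD, UOP_STORE };
static const char *uop_name[] = { "load t, [mem]", "add t, t, eax", "store [mem], t" };

int main(void)
{
    enum uop cracked[] = { UOP_LOAD, UOP_ADD, UOP_STORE };

    for (unsigned i = 0; i < sizeof cracked / sizeof cracked[0]; i++)
        printf("uop %u: %s\n", i, uop_name[cracked[i]]);
    return 0;
}
```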
Later implementations moved away from the deeply pipelined RISC core, and moved to what is probably a more conventionally microcoded system. I sort of lost interest a long time ago. The intricacies of managing a modern x86 core make one’s head spin. Speculative execution for a start, let alone doing so with multiple instructions in flight.
Although I’ve only been in professional dev / IT intermittently since high school, I’ve long considered myself a bit of a graybeard in the industry. Did my share of pro assembly programming on 370s, various minis, and a smidgen of early PCs. Debugged a lot of core dumps, where “machine language” was all there was.
We have some real graybeards here who were 20 years into it while I was in college in the 70s. What a fun walk this has been through the Olden Earlye Dayes. As well as talking of some of the latest stuff. Thanks all.
Does anyone remember Transmeta? The big sell was a CPU that had software microcode, so it could act as (and even switch) architectures (x86 and PowerPC).
In the end they ended up with low-power, low-performance x86 CPUs that didn’t really compete, even with luminaries such as Linus Torvalds on board as employees.
I suspect that his early lack of enthusiasm for ARM servers was conditioned by his experience with Transmeta – that other architecture that was going to compete with 80x86 in the market dominated by 80x86.
Maybe that was a part of it, but I thought it was also due to the complete lack of standardization in the ARM SoC market - x86 systems follow a standard design so it’s easier to maintain, whereas every ARM manufacturer did things differently, and in many cases contributions were limited.
Well, sure. IBM operating systems like OS/360 (nowadays z/OS) and VM (z/VM) and DOS/VS (z/VSE) and ACP (z/TPF) are all written in assembler. That’s a lot more serious than Lotus 1-2-3. And they’re all still alive, and you used them today if you did anything with a credit card, insurance, airline, etc.
Hmm… I’ll have to dig out my copy of “Soul of a New Machine” and read it again. I used to read a book every few days, but now I find myself reading things like this board and Reddit, and then realizing half the day (or night) is gone, and an actual book ends up taking weeks in fits and starts.
Another fun story - my one-time boss was previously the head systems engineer for our corporate IT back in the ’70s and ’80s. He shocked the hell out of IBM by going with an Amdahl clone when they tried to be difficult. His biggest gripe with Amdahl was that they only made one computer - if you opted for the cheaper versions, they inserted a card into the CPU that stole cycles so the same computer effectively ran slower.
Sad, isn’t it? I wish I could control myself better about this. It’s even worse than a too-many magazines habit, since at least magazines can be finished.
When I was in college, the fact that Multics was written in PL/I and not assembler (like CTSS) was considered controversial, or that’s what my systems programming professor said. He was pro-HLL for OS writing.
There were systems implementation languages like BCPL which were used. BCPL came with Multics. I used it in grad school, and it looks like a predecessor to C with much of the same syntax.
Please don’t control yourself. We appreciate all your great posts and insights!
Back in my day, in the realm of computers that I was working with at the time, assembly language was universal for writing the OS, though CS courses did talk about systems implementation languages. In my world at the time this was theory, not practice. In my particular domain of DEC computers, in fact, the OS was distributed as assembler source code, and the SYSGEN process really consisted of setting a large number of assembler parameters and conditional-assembly switches. For convenience, it was often front-ended by an interactive UI, but a vast set of assembly parameters and switches was the output. The assembler then produced the OS image. It was not until VAX/VMS that the OS was dynamically generated from object code modules.
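For anyone who never did one, a very loose modern analogy in C: a SYSGEN was roughly like configuring a program entirely through compile-time parameters and switches and then rebuilding it from source. The option names below are invented for illustration:

```c
/* Very loose analogy only: a SYSGEN was roughly like building an OS
 * whose every option is a compile-time parameter or switch.  The
 * option names here are invented for illustration. */
#include <stdio.h>

#define NUM_TERMINALS   16  /* an "assembly parameter" picked at gen time */
#define INCLUDE_MAGTAPE 1   /* a "conditional-assembly switch" */

int main(void)
{
    printf("system generated for %d terminals\n", NUM_TERMINALS);
#if INCLUDE_MAGTAPE
    printf("magtape driver assembled in\n");
#endif
    return 0;
}
```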