At the end of Chapter 3, starting when Wozniak was in high school (class of 1968), he writes:
“I got manuals for minicomputers from Varian, Hewlett-Packard, Digital Equipment, Data General, and many more companies. Whenever I had a free weekend, I’d take catalogs of logic components, chips, from which computers are made, and a particular existing computer description from its handbook, and I’d design my own version of it [on paper]. Many times I’d redesign the same computer a second or third time, using newer and better components. I developed a private little game of trying to design these minicomputers with the minimum number of chips.”
After a while,
“I was now designing computers with half the number of chips the actual company had in their own design, but only on paper.”
I can’t tell what he means: did his designs have half the chips because he was using better chips, or was he using the same chips twice as efficiently?
I agree, it isn’t clear. My take, based solely on the excerpts you quoted, is both: he designed (or perhaps only identified) efficiency improvements first AND updated components as they became available. But someone who can do one can do both.
From other sources, it seems he used the same tech but made it more efficient. For example: could the design be reworked so memory usage is more efficient and fewer RAM/ROM chips are needed? Could one timer chip do the work of two? Could what one chip does be accomplished by breaking it up across three other chips? Things like that.
Mapping Boolean logic functions to the minimum number of discrete logic chips (for example, 7400 series) was not a trivial exercise. A minimum NAND realization would be fairly straightforward using Karnaugh maps, but minimizing against the larger menu of available logic functions, which was growing every year, is not nearly as straightforward. 7400-series logic used to include a lot of combinational chips like AND-OR-INVERTs of various sizes. Remember, this was decades before logic synthesis tools.
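If it helps to see what the “easy” textbook half of that looks like, here is a quick sketch using Python’s sympy library (my choice of tool, obviously nothing a 7400-era designer had) to get a minimal two-level sum-of-products for a small truth table. The part Woz was playing at - folding a result like this onto whatever AOI and MSI parts happened to be in that year’s catalog - has no push-button answer like this.

```python
# Toy illustration, not anything from the book or the thread: two-level
# minimization of a 4-input function with sympy's Quine-McCluskey code.
from sympy import symbols
from sympy.logic import SOPform

a, b, c, d = symbols('a b c d')

# Minterms where the function is 1, plus a few don't-care inputs.
minterms = [1, 3, 7, 11, 15]
dontcares = [0, 2, 5]

# Minimal sum-of-products: prints (c & d) | (~a & ~b)
print(SOPform([a, b, c, d], minterms, dontcares))
```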
Optimizing for fewer chips would not necessarily mean optimizing for speed or for cost. The latest and greatest chip might very well cost more than the equivalent function implemented with two or three separate chips. If the logic isn’t on the critical timing path, you would use the cheaper implementation even if it is slower.
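As a toy illustration of that trade-off (the part names, prices, and delays below are made up, not real 7400-era numbers), the decision rule amounts to something like this:

```python
# Made-up numbers, purely to illustrate the "cheaper but slower off the
# critical path" rule described above.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    chips: int
    cost_dollars: float
    delay_ns: float

single_msi = Option("one fancy MSI part", 1, 3.50, 20.0)
three_ssi = Option("three cheap SSI parts", 3, 1.20, 55.0)

def choose(on_critical_path: bool, timing_budget_ns: float) -> Option:
    # Use the expensive single chip only where the slower version
    # would actually blow the timing budget.
    if on_critical_path and three_ssi.delay_ns > timing_budget_ns:
        return single_msi
    return three_ssi

print(choose(on_critical_path=False, timing_budget_ns=40.0).name)  # three cheap SSI parts
print(choose(on_critical_path=True, timing_budget_ns=40.0).name)   # one fancy MSI part
```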
A manufacturer will need to freeze the design for manufacturing. You can’t be constantly redesigning your board just because some newer chips come out - you design the newer chips into the next generation. Woz could change his paper design as often as he wanted. A manufacturer needs to run a given design long enough to recoup the development costs.
IMO mostly this. In that era, progress from chip generation to generation was fast and large, such that by the time any given board was first being delivered to customers, it was already made at least partly of obsolete components.
Woz was a very bright, skilled guy. But I wouldn’t read mad genius skillz into his being able to iterate untested unbuilt designs more quickly than made sense for a factory seeking a profit.
On the other hand, “development costs” for the big manufacturers meant paying a whole big team of engineers to figure out how to arrange the chips. Woz, by himself, was coming up with designs. In other words, if those companies had Wozniak working for them, those development costs could have been lower.
As a practical example - the disc (diskette) controller for the Apple II was, IIRC, done using software. Presumably an old-style disc controller had most of the logic in discrete chips plus a stand-alone processor for the controller. Woz instead used software on the main CPU as the controller, and presumably the actual controller card was just another set of IO chips implementing assorted controls. It made sense, since unlike a minicomputer, the Apple had only one user, and that user was willing to wait (pause or slow existing functions) to do disc IO.
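To make “software as the controller” concrete - this is not the real Disk II routine (IIRC that was hand-timed 6502 assembly working against a small bit of sequencing hardware), just a toy Python sketch of the general pattern: the CPU sits in a loop, polls a latch, and assembles the bytes itself, so the “controller” hardware shrinks to a handful of IO chips.

```python
# Toy sketch of CPU-does-the-work disc IO: poll a (hypothetical) read
# latch until its high bit says a byte is ready, then collect the byte.
# The read_latch argument stands in for reading a memory-mapped IO address.

def read_block(read_latch, num_bytes=256):
    data = bytearray()
    for _ in range(num_bytes):
        byte = 0
        while not (byte & 0x80):   # spin until the "ready" bit comes up
            byte = read_latch()
        data.append(byte & 0x7F)   # strip the ready flag, keep the payload
    return data

# Fake latch for demonstration: a byte becomes "ready" every third poll.
def make_fake_latch():
    polls = [0]
    def latch():
        polls[0] += 1
        if polls[0] % 3 == 0:
            return 0x80 | (polls[0] & 0x7F)
        return 0x00
    return latch

print(list(read_block(make_fake_latch(), num_bytes=4)))  # [3, 6, 9, 12]
```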
I suspect he came up with similar clever designs on paper to have chips do double duty. Plus, as others point out - the semiconductor industry was changing by leaps and bounds. I started university paying $100 for a calculator that had 4 functions plus percentage, and by my fourth year I bought a full scientific calculator for $29. No doubt logic chips were becoming equally condensed and complex.
Another factor - not sure if this is relevant, pure speculation since I haven’t looked at this for decades - but one thing is fan-out: how many other chips’ inputs can one output drive? As the tech got better, this removed the need for extra buffers to make sure a signal was strong enough to drive everything hanging off it - i.e. how many memory chips can a data bus driver connect to? Also, memories went from flip-flops to 8 bits of data on one chip. If the original design talked to eight 1-bit-wide memory chips and one new chip replaced all eight - savings. And so on…
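Just to put numbers on the memory-width point (the part sizes here are hypothetical, chosen only to make the arithmetic easy):

```python
# How many RAM chips a byte-wide memory array needs, as a function of
# how wide and how deep each chip is. Hypothetical part sizes.
def chips_needed(total_bytes, words_per_chip, bits_per_word):
    banks = -(-8 // bits_per_word)             # chips side by side to make up an 8-bit byte
    rows = -(-total_bytes // words_per_chip)   # rows of chips to cover the address range
    return banks * rows

# 16 KB of memory from 4K x 1-bit parts vs. 4K x 8-bit parts:
print(chips_needed(16 * 1024, 4096, 1))  # 32 chips
print(chips_needed(16 * 1024, 4096, 8))  # 4 chips
```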
Somewhere in my stuff I have a reprint of the Radio Electronics article/plans for an 8-bit computer (predated the Altair) that used the 8008. It had a secondary plan for an expansion board that added 256 bytes of RAM!
As a micro example of this: one of my EE lab projects as an undergrad was to build a digital stopwatch. We had 60 Hz low-voltage AC as a clock signal and needed to implement start, stop, clear, and lap timing, where the display could be frozen and later unfrozen while the clock timer continued to tick forward. The tech of the day was 7400-series TTL DIPs and discrete resistors, capacitors, etc., all breadboarded with wire-wrap sockets.
My design had about 30% of the chip count of everybody else’s. Because I had lucked (no skill claimed) into finding a new chip that incorporated the single-digit 7-segment display, the decade or divide-by-6 counter, and the freeze-display/lap feature all in one package. So I needed one chip for each display digit, where my classmates, using straight 74xxs and dumb 7-segment digit chips, needed 4 or more per digit. I also cheated by not debouncing the stop, clear, and lap buttons.
Was I smarter? Nope. Just lazier and luckier.
The real moral of this story is that the chip catalogs were forever coming up with new, more capable stuff. And what works for prototyping may not be available in production quantities (or at production reliability/quality) as fast as a factory needs.
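For anyone who wants to see what that stopwatch reduces to logically, here is a rough software model of the counter chain and the lap latch. Purely illustrative - the real thing was TTL counters and a display latch, not code - but it shows the divide-by-60 counting and the display-freeze behavior the buttons had to implement.

```python
# Software model of the stopwatch logic above: 60 Hz clock edges feed a
# counter, the display shows MM:SS, and "lap" freezes the display while
# the counter keeps running underneath. Illustrative only.

class Stopwatch:
    def __init__(self):
        self.ticks = 0            # count of 60 Hz edges while running
        self.running = False
        self.lap_snapshot = None  # frozen tick count while lap is active

    def clock_edge(self):         # one 60 Hz pulse from the line frequency
        if self.running:
            self.ticks += 1

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def clear(self):
        self.ticks = 0
        self.lap_snapshot = None

    def lap(self):                # press once to freeze, again to unfreeze
        self.lap_snapshot = self.ticks if self.lap_snapshot is None else None

    def display(self):
        t = self.lap_snapshot if self.lap_snapshot is not None else self.ticks
        seconds = t // 60         # divide-by-60 off the 60 Hz line
        return f"{(seconds // 60) % 100:02d}:{seconds % 60:02d}"

sw = Stopwatch()
sw.start()
for _ in range(60 * 90):          # run for 90 seconds of clock edges
    sw.clock_edge()
sw.lap()                          # freeze the display at 01:30
for _ in range(60 * 5):           # five more seconds tick by underneath
    sw.clock_edge()
print(sw.display())               # 01:30 - frozen, while the counter is at 01:35
```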
The mini and mainframe computer companies weren’t really about hardware. That was just a way to lock your clients into your ecosystem of software, support, and consultancy, which is where the real profit lies. Why waste time and money trying to improve something that was already good enough?
Plus, if you significantly change the design you have to train all your engineers on the new design. And they need to carry spares for the old design and the new. Which all adds extra cost.
Woz was a hardware guy first and foremost, so it makes sense that’s where his priorities lay. Jobs eventually went the traditional route of using hardware to lock people in via iTunes and the App Store.
I agree - the big computer makers avoided the fun situation with automobiles, where significant elements change every year and many spare parts are quite specific to the model. Others too - the PLCs (Programmable Logic Controllers) I saw in factories, even when the Pentium was the level of desktop computer, were still based on the 8080 (or Z80) and the 6800 (not 68000), over 10 years old, since that was simple enough to do the job of controlling basic production equipment, and the hardware did not need to be recertified and the software did not need to be rewritten and retested in a situation where an “oops” could potentially do a million dollars’ worth of damage.
Besides, there was no reason to make major changes in a design that was being mass-produced. Improving the disc controller, for example, does nothing unless the discs themselves are faster too, since the disc is the slowest part. Rebuilding the processor board out of fewer chips does nothing if the processor is still essentially limited by its clock speed. So, more like automobiles, it was better to keep producing the existing version and instead work on the next model, due a few years later. Then introduce all the changes at once and have a more powerful model of computer overall.
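The arithmetic behind that is worth spelling out once (made-up numbers, purely illustrative): even halving the time spent in the controller barely moves the total when the mechanical disc dominates.

```python
# Made-up numbers, just to show why a faster controller barely matters
# when the mechanical disc dominates: the overall win is capped by the
# fraction of time the part you improved is actually responsible for.
def total_ms(disc_ms, controller_ms):
    return disc_ms + controller_ms

before = total_ms(disc_ms=100.0, controller_ms=5.0)
after = total_ms(disc_ms=100.0, controller_ms=2.5)   # controller twice as fast

print(f"{before:.1f} ms -> {after:.1f} ms "
      f"({100 * (before - after) / before:.1f}% faster overall)")
# 105.0 ms -> 102.5 ms (2.4% faster overall)
```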
I wonder too if redundancy was a thing. Were chips more likely to fail the more strain (higher power demand) was placed on them by the job they were meant to do? Was a chip pushed to its design limit driving other chips’ inputs more susceptible to failure, in designs where the computer maker was ultimately responsible for replacing parts?
OTOH, never discount the possibility that the other designers were just not as smart as Woz.