Actually, NOT gates are superfluous.
NAND is (~(A&B)). If both inputs of the gate are connected to the same source, however, the gate will compute (~(A&A)). Since (A&A) always has the same (truth) value as (A), the output of the NAND gate is (~A). (The same trick can be done with NOR gates, but NAND logic is simpler to etch, thus cheaper; the actual implementation of NOR logic does not, I think, have any corresponding advantage.)
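If it helps to see the trick spelled out, here is a little C sketch of it. (My own illustration, not anything from the article; the truth table is the whole point.)

    #include <stdio.h>

    /* One NAND gate: the output is 0 only when both inputs are 1. */
    int nand(int a, int b) { return !(a && b); }

    int main(void) {
        /* Tie both inputs to the same source A and you get NOT A. */
        for (int a = 0; a <= 1; a++)
            printf("A=%d  NAND(A,A)=%d  NOT A=%d\n", a, nand(a, a), !a);
        return 0;
    }

The two right-hand columns come out identical for both values of A, which is the whole argument.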
Now, a NOT gate is certainly simpler than using a NAND gate for NOT (one transistor versus four, as Karl points out). But then it’s necessary to etch two different gates into the chip. It’s cheaper to etch one, and bite the bullet in terms of overhead.
“Kings die, and leave their crowns to their sons. Shmuel HaKatan took all the treasures in the world, and went away.”
Overall, this article did a very good job of summarizing the digital-logic/hardware layer of a conventional computer. However, I personally feel it passed too lightly over the whole issue of “coded machine instructions”. This is, after all, where the real power and flexibility of a general-purpose computer lie.
In a nutshell, the Program Counter (that thing that keeps getting 1 added to it) points to an “address” in memory (“addresses” being sequential numeric labels for (usually) 8-bit bytes). The CPU has logic circuits in it that “decode” the number stored in this memory address, and interpret it as an instruction for what it should do next. It might interpret 00000000 as “add”, 00000001 as “subtract”, 00000010 as “jump to a different address”, 00000011 as “jump to a different address IF the last add or subtract resulted in zero”, 00000100 as “store a new number at thus-and-such address in memory”, et cetera. The CPU will then engage one or another set of logic gates to perform whatever operation it just decoded, then (usually) add one to the Program Counter, and fetch the next instruction at that new address.
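For anyone who wants to see that loop written down, here is a toy fetch-decode-execute cycle in C. The opcode numbering follows the list above; the single accumulator, the two-byte instruction format, and the HALT opcode are my own inventions for the demo, not how any real CPU necessarily does it.

    #include <stdio.h>

    enum { ADD, SUB, JMP, JZ, STORE, HALT };  /* opcodes 0-4 as listed above; HALT is my addition */

    int main(void) {
        /* "Memory": the program and its data live here, side by side. */
        unsigned char mem[256] = {
            ADD,   100,    /* address 0: acc += mem[100] */
            ADD,   101,    /* address 2: acc += mem[101] */
            STORE, 102,    /* address 4: mem[102] = acc  */
            HALT,  0,
        };
        mem[100] = 2;      /* the data the program works on */
        mem[101] = 3;

        unsigned char pc = 0;   /* the Program Counter */
        int acc = 0;            /* a scratch register for arithmetic */

        for (;;) {
            unsigned char op  = mem[pc];       /* fetch the instruction... */
            unsigned char arg = mem[pc + 1];   /* ...and its operand */
            pc += 2;                           /* step the Program Counter */
            switch (op) {                      /* decode, then execute */
            case ADD:   acc += mem[arg];               break;
            case SUB:   acc -= mem[arg];               break;
            case JMP:   pc = arg;                      break;
            case JZ:    if (acc == 0) pc = arg;        break;
            case STORE: mem[arg] = (unsigned char)acc; break;
            case HALT:  printf("mem[102] holds %d\n", mem[102]);
                        return 0;
            }
        }
    }

Note that the program and its data sit in the same memory: an instruction is just a number at an address, and it only means “add” because the decoder treats it that way.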
It’s these encoded “machine instructions” that Quake 2 consists of.
Not all machines have been binary, though I guess they all are nowadays. In the 40’s, 50’s and early 60’s, most machines were decimal. (The underlying circuits were binary, except, I understand, for one Russian ternary machine.)
On the IBM 1401, the “VW Beetle” of computers for roughly the 1960-1966 period, A was Add, B was Branch, C was Compare, S was Subtract…
John W. Kennedy
“Compact is becoming contract; man only earns and pays.”
– Charles Williams
SDStaff Karl says: << Oops, something happened; [Jebediah]’s right: “Apply the appropriate voltage to the base, and current will flow from the emitter to the base.” should be “Apply the appropriate voltage to the base, and current will flow from the collector to the emitter.”
We’ll correct the item. Thanks for catching it, Jeb. >>
I would not classify the bit about memory being made of transistors as a “mistake”, merely a “simplification”. While it is true that the many megs of memory on our desktops are capacitor- rather than transistor-based, this is relatively new technology. The capacitor-based memory, also called DRAM (for DYNAMIC Random Access Memory), needs to be constantly refreshed to keep its contents from eroding. This refresh circuitry is quite complex, but worth it when you have a lot of memory to deal with.
For smaller amounts of memory, the older SRAM (STATIC random access memory) truly IS constructed from transistors. You will find this type of memory in smaller devices where memory is measured in Kilobytes rather than Megabytes.
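In case the constant-refresh business sounds mysterious, here is a toy model of it in C. Every name and number here is invented (a real DRAM cell leaks its charge away in milliseconds, not in “ticks”), but the shape of the thing is right: each bit is a capacitor’s charge level, the charge leaks, and the refresh step reads every cell and writes it back before it fades below the read threshold.

    #include <stdio.h>

    #define CELLS     8
    #define FULL      100   /* a fully charged capacitor (arbitrary units) */
    #define THRESHOLD 50    /* above this, the cell reads as a 1 */
    #define LEAK      7     /* charge lost per tick; invented number */

    int charge[CELLS];      /* one "capacitor" per bit */

    int  read_bit(int i)         { return charge[i] > THRESHOLD; }
    void write_bit(int i, int b) { charge[i] = b ? FULL : 0; }

    /* The refresh cycle: read every cell and write its value back,
       topping the capacitor up before it decays past the threshold. */
    void refresh(void) {
        for (int i = 0; i < CELLS; i++)
            write_bit(i, read_bit(i));
    }

    int main(void) {
        write_bit(3, 1);                      /* store a single 1 bit */
        for (int tick = 1; tick <= 20; tick++) {
            for (int i = 0; i < CELLS; i++)   /* the capacitors leak */
                if (charge[i] > 0) charge[i] -= LEAK;
            if (tick % 5 == 0) refresh();     /* comment this line out... */
        }
        printf("bit 3 after 20 ticks: %d\n", read_bit(3));
        return 0;
    }

Comment out the refresh line and bit 3 quietly decays to 0, which is exactly why DRAM without its refresh circuitry forgets everything.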
If you really like to be picky, you might note that the terminology used for the transistors applies to bipolar junction transistors (emitter, base, collector), which are normally controlled by the flow of CURRENT, while the description of voltage-controlled devices is really more appropriate to field-effect transistors (source, gate, drain). I don’t like to be that picky, and think it was a pretty good explanation.
[[NAND is (~(A&B)). If both inputs of the gate are connected to the same source, however, the gate will compute (~(A&A)). Since (A&A) always has the same (truth) value as (A), the output of the NAND gate is (~A). (The same trick can be done with NOR gates, but NAND logic is simpler to etch, thus cheaper; the actual implementation of NOR logic does not, I think, have any corresponding advantage.)]]
Watch it there, kid. CK is out of town and I’m watching the board, and I’m likely to delete language like that.
Jill
It was just as true years ago as it is today with SRAM and DRAM. The main RAM then was made up of little ferrite doughnuts that could be magnetized clockwise or counterclockwise – but the fast memory (e.g., inside the ALU) used flip-flop circuits, even when that meant using two vacuum tubes per bit.
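Since flip-flops came up: a flip-flop is just gates (or tubes) wired back into themselves. Here is a toy C model, my own sketch rather than any particular machine’s circuit, of the classic set-reset latch built from two cross-coupled NAND gates; the feedback loop is what remembers the bit.

    #include <stdio.h>

    int nand(int a, int b) { return !(a && b); }

    /* Two cross-coupled NAND gates form an SR latch. Inputs are
       active-low: pulse set_n or reset_n to 0 to flip the stored bit;
       hold both at 1 and the latch just remembers. (Driving both to 0
       at once is the classic forbidden state, which we avoid here.) */
    int q = 0, q_n = 1;

    void latch(int set_n, int reset_n) {
        for (int i = 0; i < 4; i++) {   /* let the feedback loop settle */
            q   = nand(set_n, q_n);
            q_n = nand(reset_n, q);
        }
    }

    int main(void) {
        latch(0, 1); printf("after set:   q=%d\n", q);  /* q = 1 */
        latch(1, 1); printf("holding:     q=%d\n", q);  /* still 1: it remembers */
        latch(1, 0); printf("after reset: q=%d\n", q);  /* q = 0 */
        return 0;
    }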
In the early 70’s, SRAM was used for the main memory of some mainframes, such as the 370/145. Since each bit constantly consumed current, the 145 put out ten times the heat of the 360/40 it replaced. (Fortunately, we installed ours in winter, and we were able to run with the emergency stairwell open to the outdoors while we awaited delivery of a new HVAC.)
John W. Kennedy
“Compact is becoming contract; man only earns and pays.”
– Charles Williams
[[The main RAM then was made up of little ferrite doughnuts that could be magnetized clockwise or counterclockwise – but the fast memory (e.g., inside the ALU) used flip-flop circuits, even when that meant using two vacuum tubes per bit.]]
Wow… that article has so much information that it makes my brain hurt to read it. I’ve been using computers for some 10 years, and I am pretty much a geek, but… WOW.
I like the article; forget the logic/electronics mistakes, or rather over-simplifications. I bet the guy that asked didn’t know what he was getting into. And Jill, I’ve been around since vacuum tubes and ferrite donuts; I had no idea it would impress you, tho.