I have a couple of questions about the convention of describing video game consoles as X-bit where X = 8 for an NES, 16 for a Genesis, etc.
First off, to what real-world property does that number correspond? Secondly, I’ve often seen claims that bits “don’t matter” anymore–the current-generation systems vary wildly in their bit number but have roughly the same power. Is this true? Did they ever matter? What has changed? What is the best single measurement of a system’s capabilities if it’s not bits?
To answer your first question, “bits” refers to “binary digits”. Basically it has to do with how large a number can be represented.
In binary, a number looks something like:
01100011
That’s an 8-bit binary representation of the number 99. With only 8 bits, the largest number you can store is 11111111, which is 255. This affects things in several ways. For example, maybe you can only have 256 possible colors displayed at once.
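If you want to poke at this yourself, here is a tiny Python snippet (purely illustrative) that does the same arithmetic:

# The binary string from above, parsed as a base-2 number.
value = int("01100011", 2)
print(value)          # 99

# The biggest number 8 bits can hold: all ones.
print(2**8 - 1)       # 255

# So an 8-bit value can pick one of 256 things, e.g. one of 256 palette entries.
print(2**8)           # 256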
Or, saying a machine is only 8 bits may refer to the length of the address register for that machine. So, an 8-bit machine can only address 256 bytes of memory; obviously not a lot.
With 16 bits, you can store a lot more data. 1111111111111111 is 65,535. Now, you can have lots more colors and can address up to 64k of memory.
Going to 32 bits you wind up at 4,294,967,295. That’s 4 Gig. Obviously there isn’t a monitor made that can display 4 billion different colors at the same time and there isn’t a machine out there with 4 Gigabytes of main memory. That’s why they say that bits don’t matter so much anymore.
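Here is the same idea as a quick Python sketch, just to show how fast those limits grow with each jump in bit count:

# How the representable range (and, if the number is an address,
# the amount of addressable memory) grows with the number of bits.
for bits in (8, 16, 32):
    max_value = 2**bits - 1      # largest value you can store
    addressable = 2**bits        # distinct addresses, i.e. bytes if byte-addressed
    print(f"{bits} bits: max value {max_value:,}, addressable {addressable:,} bytes")

# Prints:
# 8 bits: max value 255, addressable 256 bytes
# 16 bits: max value 65,535, addressable 65,536 bytes          (64K)
# 32 bits: max value 4,294,967,295, addressable 4,294,967,296 bytes  (4 GB)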
As for the best single measurement of a system’s capabilities? Which console has the most games you want to play? It doesn’t matter how powerful a machine is if you don’t use it. (IMHO, of course)
16-bit is not just a measure of memory capacity or available colours, it’s also an indication of instruction length. The components within a 16-bit game system are connected to each other by 16 parallel wires, and in one electronic pulse, each can send a 0 or 1 bit. Thus, component A can transmit 16 bits of information to component B at a time. The longer the instruction is, the more information can be passed, i.e. the instruction for “move the blue hedgehog one pixel to the left” may require 16 bits, while the instruction for “move the blue hedgehog one pixel to the left and make his head explode” may require 32 bits.
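To make that concrete, here is a toy Python sketch (the sizes are made up, not real console specs) of how many bus cycles it takes to shuttle a chunk of data across buses of different widths:

import math

# How many one-pulse transfers are needed to move data_bits over a
# bus that is bus_width_bits wide. Purely illustrative.
def bus_cycles(data_bits: int, bus_width_bits: int) -> int:
    return math.ceil(data_bits / bus_width_bits)

print(bus_cycles(16, 16))   # 1 cycle: a 16-bit instruction fits a 16-bit bus
print(bus_cycles(32, 16))   # 2 cycles: a 32-bit instruction on the same bus
print(bus_cycles(32, 32))   # 1 cycle again once the bus is 32 bits wide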
Personal computers went through a steady evolution, from the 8-bit 8088 IBM XT to the current 32-bit chips, with 64-bit Intel “Itanium” machines coming soon. A larger instruction size makes for more complexity, but it also means that each instruction can be more detailed, making the machine’s operation considerably faster.
This is all ridiculously oversimplified, you understand.
To sum up, there was a set of “8-bit” consoles, a set of “16-bit” consoles, and a set of “32-bit” consoles. That whole nomenclature is now basically irrelevant, so if you’re (as you likely are) choosing between PS2, Gamecube and XBox, just buy the one that you can find the best deal on, or which has the software most to your liking, or which isn’t manufactured by people who are evil (cough Bill Gates cough). Play them all and pick one. Don’t worry about bits.
To (hopefully) simplify, X bits is the width of the data bus. Think of it like a pipeline between the memory, the peripherals (namely video), and the processor. Larger pipeline = more data flow per instruction cycle = faster computer. It really does still matter, but not as much (generally) as the actual computing power (expressed in floating point operations per second). In the realm of gaming, none of this really matters as much as how well the developers take advantage of the hardware (look at how Atari’s faster, 16-bit-color Lynx lost to Nintendo’s 8-bit, modified-Z80-based, 4-shades-of-green Gameboy).
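If it helps, here is a back-of-the-envelope Python sketch of that pipeline idea; the clock rates are made-up round numbers, not the specs of any real console:

# Peak data flow is roughly (bus width in bytes) * (clock rate).
# These figures are illustrative only.
def peak_bandwidth_bytes_per_sec(bus_width_bits: int, clock_hz: int) -> int:
    return bus_width_bits // 8 * clock_hz

print(peak_bandwidth_bytes_per_sec(8, 2_000_000))    # 8-bit bus at a hypothetical 2 MHz  -> 2,000,000 B/s
print(peak_bandwidth_bytes_per_sec(16, 8_000_000))   # 16-bit bus at a hypothetical 8 MHz -> 16,000,000 B/s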
I’m planning on buying all of the current systems, although my checking account doesn’t know that yet. I’ve recently acquired some older consoles and was curious about the nomenclature, since everything else makes some sense to me in PC terms. Thanks for the answers.
My monitor could display 4+ billion colors at once. Any modern CRT display could do it. The problem is with the video card’s capability.
There have been >4 Gigabyte machines for several years now.
The number of colors, the size of instructions, the word size and the width of the bus are all unrelated numbers. When a platform maker decides something is “64-bit” they pick the one component that matches “64” and that’s where they get it. It is advertising. If you are interested in such matters, read technical reviews and find out what is really going on inside.
Instruction/word size and bus width in no way tell you the amount of addressable memory. Look at the AT addressing limits.
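A quick Python illustration of that point, using the commonly cited address-line counts for these chips (treat the numbers as rough figures):

# Addressable memory is set by the width of the *address* bus,
# not by the word size or the data bus width.
def addressable_bytes(address_lines: int) -> int:
    return 2**address_lines

print(addressable_bytes(20))   # 20 address lines -> 1,048,576 bytes (1 MB), as on the 8088/8086
print(addressable_bytes(24))   # 24 address lines -> 16,777,216 bytes (16 MB), as on the 286-based AT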
The 8088 XTs were “16-bit machines” (word size) with 8-bit bus size (and effectively a 20-bit address space). Note that the 8086, which was 16-bit for both, was also available, but IBM …