8-bit, 16-bit - What do these mean?

I’ve seen video game systems referred to as 8-bit, 16-bit, 64-bit, etc. What does this mean?

Also, what does 32-bit computing mean?

Bit count can refer to A LOT of things in computing (this is obvious, since bits underpin everything).

In reference to video game consoles, the number refers to the width of the data bus to memory, i.e. how many physical wires run from the processor to memory. If you have an 8-bit bus running at 100MHz, you have a maximum theoretical bandwidth of 100MB/s (800Mbit/s), because each wire can carry one bit per clock cycle.
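To make the arithmetic concrete, here’s a quick Python sketch (the 8-bit/100MHz figures are just the example numbers from above, not any real console’s specs):

```python
# Theoretical bandwidth = bus width (bits) x clock frequency (Hz).
bus_width_bits = 8
frequency_hz = 100_000_000  # 100 MHz

bits_per_second = bus_width_bits * frequency_hz
bytes_per_second = bits_per_second // 8  # 8 bits to the byte

print(f"{bits_per_second / 1e6:.0f} Mbit/s")  # 800 Mbit/s
print(f"{bytes_per_second / 1e6:.0f} MB/s")   # 100 MB/s
```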

You’ll note that memory width is a bit meaningless without specifying frequency, so “bits” is mostly a fluffy marketing number in this case. However, the greater memory widths and bandwidths of successive generations of consoles allowed them to pump more bitmap data to/from the graphics memory. This allowed for higher resolution rendering (output), higher resolution textures (input), and even the use of multiple input textures to create a single pixel of output (the standard nowadays). However, many, many other factors go into the performance of a video game system besides raw memory bandwidth.

Interesting note: memory width, which once scaled rapidly, has hit a ceiling. The problem is that the memory bus isn’t on a silicon chip but on a printed circuit board, and there is no Moore’s law for PCBs. Technology to increase the number of wires that can go to or from a chip does not advance quickly and is limited by mechanical fabrication techniques (none of this shine-lasers-onto-silicon BS). 256 bits, which has been the standard for high-end video cards for the past three or four years, is almost as wide as is practical. You’ll notice that 128-bit and 64-bit datapaths are still often used because their cost has simply not come down by much. CPUs (always lagging behind GPUs in terms of memory) currently typically use 128 bits (“dual-channel”) but will move to 256 bits within a year (AMD’s quad-core architecture has quad channels). 512 bits may yet appear in consumer video cards; this year it made an appearance in “GPU stream processors” and workstations. But 1024 is almost definitely out, and 2048… no way. Just too expensive.

Of course, frequencies continue to go up (especially for memory signaling; processor frequencies have kind of stood still lately), and overall memory bandwidth will continue to increase for some time. In the long term, however… yeah, there might be problems, and more and more memory will eventually migrate onto CPUs/GPUs. However, it’ll be using transistors (six for every bit, to be precise) which could have been spent on logic.

As for “32-bit computing”… that means something else entirely. It refers to the size of the integer data type in a processor. A 32-bit (unsigned) integer has a maximum binary value of 11111111111111111111111111111111, which comes out to about 4 billion. The important thing is that memory addresses are represented as integers. If your integers can only go up to 4 billion, so can your memory addresses. Thus, the limit on addressable memory for 32-bit systems (without using tricks) is 4GB. 64-bit systems use 64-bit integers and can thus address far more memory. 64-bit integers, however, do not have much value for doing computations (if you really, really need more than 32 bits, 64 bits probably won’t be enough either; you’ll have to do some sort of emulation using a bunch of 32-bit values). Therefore, the shift to 64-bit integers has come about only because of the need to address memory. Using 128-bit integers is perfectly possible, but there just isn’t any point, and there won’t be for a very, very long time.
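If you want to see the 4GB limit fall out of the math, here’s a tiny Python sketch:

```python
# Why a 32-bit address limit works out to 4GB: with x-bit addresses,
# there are 2**x distinct byte addresses.
for width in (32, 64):
    max_unsigned = 2**width - 1  # largest value an unsigned integer can hold
    addressable = 2**width       # number of distinct byte addresses
    print(f"{width}-bit: max value {max_unsigned:,}, "
          f"addressable {addressable / 2**30:,.0f} GB")

# 32-bit: max value 4,294,967,295, addressable 4 GB
# 64-bit: max value 18,446,744,073,709,551,615, addressable 17,179,869,184 GB
```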

Note that floating-point numbers have been 64 bits (double precision) for several decades, because the need for precision there is greater. 128-bit floating-point numbers have been used in some systems, but that is mostly overkill.

“Bits” refers to the number of digits in a binary byte. A byte is the base unit of computer data.

An 8-bit byte (an octet) might look like this: 00000111 or 11110000, etc.

Each of these bytes represents a single unit of data, so the more bits per byte, the greater the number of distinct data units that can be represented by a single byte.

An analogy would be the number of letters in the alphabet. An alphabet with few letters would require very long words, an alphabet with many letters could have an increasingly large number of distinct short words - the extreme case would be a writing system where each word was represented by a single distinct character.

A 16 bit system allows more distinct bytes than an 8 bit system - complex information can be recorded in a smaller number of bytes.

I don’t know how well longer bytes translate into a real-world performance improvement in PCs or Nintendos.

Officially, it refers to the size of the registers in the processors. Processors have registers which temporarily hold data for calculations to be processed. Before anything in memory can be altered, it has to be moved into a register first. So an 8-bit processor has eight transistors which can be set to 1 or 0 (binary). With eight of them, you can hold an unsigned integer from 0 to 255 or a signed integer from -128 to 127 (a total of 256 values in both cases). A 16-bit processor can hold unsigned integers from 0 to 65,535 or signed integers from -32,768 to 32,767. Of course it just goes up from there. For some operations, a single unit of data can be held in two registers, effectively giving you double the bits, but this typically takes more time.
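Here’s a little Python sketch of those ranges, for any register width (assuming the usual two’s complement representation for signed integers):

```python
def register_ranges(bits):
    """Value ranges for an unsigned and a two's-complement signed integer."""
    unsigned = (0, 2**bits - 1)
    signed = (-(2**(bits - 1)), 2**(bits - 1) - 1)
    return unsigned, signed

for bits in (8, 16, 32):
    (u_lo, u_hi), (s_lo, s_hi) = register_ranges(bits)
    print(f"{bits}-bit: unsigned {u_lo} to {u_hi:,}, signed {s_lo:,} to {s_hi:,}")

# 8-bit: unsigned 0 to 255, signed -128 to 127
# 16-bit: unsigned 0 to 65,535, signed -32,768 to 32,767
# 32-bit: unsigned 0 to 4,294,967,295, signed -2,147,483,648 to 2,147,483,647
```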

Sometimes the video game companies lie, though. The Turbografx 16 was supposedly 16-bit, but in actuality it just had 2 8-bit processors similar to the processor found in the NES. I’ve heard the SNES wasn’t truly 16-bit either, but they called it that because it had 16-bit graphics. In the case of graphics, 8-bit means 256 possible colors for each pixel, 16-bit means 65,536 possible colors, etc. While the size of the registers does make somewhat of a difference, it’s not nearly as important as other things.

It generally means the number of bits in a common data path within the microprocessor of the video game system or the computer. In general, computers process data in small chunks. 8, 16, 32 and 64 bits generally refer to the size of these chunks.

Sometimes it means the size of the words used to keep track of addresses in memory. A 32 bit system can use up to 2^32 or 4 megabytes of memory; a 64 bit system can address much more.

A lot of times it refers to both things.

32 and 64 bit computing usually refer to the size of the possible address space, the second usage.

Obviously just a typo, but a 32 bit system can address 4 Gigabytes of memory.

Themenin… did you imply that 16-bit systems have 16-bit bytes?

Put simply, when you see a video game system declaring itself an “8-bit”, “16-bit”, “32-bit”, etc. system, you are seeing marketing-speak for, “We found something somewhere that takes this many bits.” Maybe it’s the bus width (even if it’s the memory-to-video bus), maybe it’s the processor’s addressable memory limits (even if the actual memory is much smaller), or maybe it’s an actual useful number like the native floating-point size of the graphics processor. It’s just some number that someone in marketing found that projects the image that they want to sell.

32-bit computing, as gazpacho notes, usually refers to the maximum addressable memory that an application can access. Depending upon the operating system, however, a 32-bit application might be limited to either 4GB or 2GB, or slightly under those limits.

Not to contradict anything that anyone else said, but if I didn’t already know what you guys were talking about, I sure wouldn’t know what you guys are talking about.

OK, imagine this dot, right here: •

Let’s say I’m allowed only one bit of information to describe it.

Bits are either on or off. “On” can mean “yes” while “Off” means no, but you only get two possibilities regardless of how they are interpreted. For our dot, “on” could mean “black” and “off” could mean “white” and then on a black and white screen I can either make that dot black or white. Can’t make it grey or purple or chartreuse, though, I only get two states to choose between.

If I had a whole bunch of those dots (let’s call them pixels, shall we?) and I have one bit for each one of them, I can turn any of them off or on. Black or white. I can make a white background and then represent the letters of the alphabet with black pixels in the shapes of the letters.

Sir, may I have two bits to play with? I can? Yippee!!!

Look what I can do with two bits!

Bit A on, Bit B on = white
Bit A on, Bit B off = light grey
Bit A off, Bit B on = dark grey
Bit A off, Bit B off = black

Now I could make a white rectangular area against a dark grey background, give it a black border 20 pixels wide, and then put a label (a bunch of pixels in the shape of letters making up words) in light grey.

Give me four bits and I can have up to 16 colors. That’s a poor kid’s Crayola box’s worth of colors! I can draw pictures! I can do (ugly looking but recognizable) representations of color photos, even!

The number of possibilities is the square of the number of bits you get to play with. And just as it lets me do cool things with a screen, extra bits let programmers do cool things in general — the more bits, the more options; the more possible combinations, where each combination can mean something specific.
Hope this helps. (I suppose I’m arrogant for thinking this description will be more accessible, but so be it…)

That is overly simplified and misleading. The colors analogy makes sense when you are describing the color quality (i.e., the number of bits associated with a pixel), but it is not really applicable to how game console makers use the term x-bit. As other posters have said, that can refer to how much data can be processed in one instruction, how much data can be transferred in a unit of time, or the range of memory that can be directly addressed.

Good explanation on how computer graphics work, except that the quoted sentence is incorrect. If the number of bits is x, then the number of possibilities is 2^x. With 8-bit, you have 256 possibilities, not 64. It’s basically the same as any number system. If you have a random 3 digit decimal number, then there are 10^3, or 1,000, possibilities, assuming leading zeros are allowed.
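A one-liner’s worth of Python makes the corrected rule easy to check:

```python
# x bits give 2**x distinct values (not x squared).
for bits in (1, 2, 4, 8, 16):
    print(f"{bits} bit(s): {2**bits:,} possibilities")

# 1 bit(s): 2
# 2 bit(s): 4
# 4 bit(s): 16
# 8 bit(s): 256
# 16 bit(s): 65,536
```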

Also, as an assembly programmer, I’d just like to point out that the graphics bit width refers to the number of colors that can be displayed on the screen at the same time, not necessarily all of the colors possible. For instance, with 8-bit color, you have a palette with 256 colors, and you can choose one color from that palette for each pixel. However, you can usually set the colors that go in that palette, and on computer video cards (at least back in the old days), you set 3 8-bit values for the amount of red, blue, and green. Consequently, you can show 256 colors on the screen at the same time, but a total of 16,777,216 colors by altering the palette. This probably doesn’t go toward answering the original question, but I figured it should be mentioned with all this talk about computer graphics.
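Here’s a rough Python sketch of that indexed-color scheme (the specific palette entries are made up for illustration):

```python
# Indexed ("palettized") color: each pixel stores an 8-bit index into a
# 256-entry palette, and each palette entry is a full 24-bit RGB color.
palette = [(0, 0, 0)] * 256   # 256 slots of (red, green, blue), 8 bits each

# The program can load any 24-bit colors it likes into the palette...
palette[0] = (255, 0, 0)      # index 0 -> pure red
palette[1] = (18, 52, 86)     # index 1 -> an arbitrary color

# ...but a pixel only stores an index, so at most 256 of the
# 256**3 == 16,777,216 possible colors can be on screen at once.
pixels = [0, 1, 1, 0]                # a tiny 4-pixel "image"
rgb = [palette[p] for p in pixels]   # resolve indices to actual colors
print(rgb)  # [(255, 0, 0), (18, 52, 86), (18, 52, 86), (255, 0, 0)]
```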

:smack:

That is of course what I meant.

I don’t think this is completely correct. A register is made up of a chain of flip-flops; a flip-flop is made up of a series of logic gates (AND/OR/NOT, etc.), and those are made up of transistors. So yes, an 8-bit register is made up of transistors - just not 8.

Flip-flop made up of logic gates:
http://www.cs.umass.edu/~weems/CmpSci535/Discussion8.html

Talks a little about a register being an array of flip-flops:
http://www.rz.uni-hohenheim.de/hardware/basics/csc102/ch12.html

Wiki logic gates:

It is possible to have a flip-flop with only one clocked transistor. I don’t remember how, but it’s possible. So theoretically you could have an 8 transistor 8-bit register. You usually don’t, but you could.

To clarify a bit about what “32-bit computing” means: it’s not necessarily the same as what “32-bit” or “64-bit” means for a game console.

As others have already mentioned, from a hardware perspective the bits being counted refer to the width of the memory bus, which among other things in turn dictates the largest addressable memory space for a process.

However, just because the memory bus is a certain width doesn’t always mean the operating system is taking full advantage. I remember programming for Windows 3.1 back in the early 1990s on 80386-based computers. The 80386 was a 32-bit processor, but for the sake of backwards compatibility with 80286- and 8088-based device drivers, Windows was still a 16-bit operating system, with various hack-like API extensions for 32-bit computing to allow addressing beyond 2^16 bytes via a kind of virtual “chaining”, different types for “far” and “near” pointers, etc.

As a result, one of the developers I worked with at the time preferred working in assembly rather than C for his graphics work, specifically to have a clean, flat 32-bit memory model available to him.

Similarly, I am currently working with Sun workstations running 32-bit versions of Solaris (the Sun operating system that is a variant of UNIX) on 64-bit machines (in a mixed environment where we still have mostly 32-bit machines available). While the systems themselves support physical memory configurations of more than 4GB, the operating system can still only run processes that max out at 4GB in size.

I have never heard of a one-transistor flip-flop and would need to see the circuit to believe the claim. Dynamic RAM cells will have one transistor per bit, storing charge on the gate of the transistor to indicate one or zero. An SRAM cell is not a flip-flop. People might make register files in a processor out of SRAMs, but I would think that is pretty uncommon.

A simple flip-flop has two inputs and one output: inputs D and Clk, output Q. The value on D gets passed to Q on a rising edge of the Clk pin. (Sometimes it is the falling edge.) You can add to this with other pins that set and clear the flop; enables and muxes on the inputs are common. A very important characteristic of a flip-flop is that the Q output is available all the time. The one-transistor storage elements I have seen involve a read phase of time that makes them quite different in how they are used from flip-flops.
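For anyone following along in software terms, here’s a minimal Python model of that rising-edge behavior (just a sketch of the logic, obviously not how the hardware is built):

```python
class DFlipFlop:
    """Q takes the value of D only on a rising edge of Clk; otherwise it holds."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        if clk == 1 and self._prev_clk == 0:  # rising edge: capture D
            self.q = d
        self._prev_clk = clk
        return self.q  # Q is available all the time

ff = DFlipFlop()
print(ff.tick(d=1, clk=0))  # 0 -- no edge yet, Q holds its old value
print(ff.tick(d=1, clk=1))  # 1 -- rising edge captures D
print(ff.tick(d=0, clk=1))  # 1 -- clock still high, Q holds even though D changed
```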

Now that I think about it, dynamic RAM cells don’t store charge on the gate. They store it in a capacitor on the source or drain of the transistor, and the gate is used to control read or write. Flash stores charge on the gate.

I’ve been racking my brain about this in the shower, and now it occurs to me that I was probably wrong. Unless there is some sort of D flip-flop that can be created out of a transistor-capacitor DRAM cell, I can’t think of any way of accomplishing this.

SIMPLE ANSWER: The number of bits determines what size numbers a computer can handle.

The bits would be a simplified measure of what a computer can do, sort of like saying a car has a 6-cylinder engine. A 6-cylinder engine could produce 150 horsepower, or it could produce over 300 horsepower, depending on how it’s set up.

That should say: what size numbers the computer can handle in one operation. By breaking a problem into many steps, almost any computer can handle arbitrarily large or precise numbers. However, with more bits available for each operation, larger numbers can be handled in fewer operations, which takes less time… much less time in many cases.
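To illustrate that “many steps” point, here’s a Python sketch of adding two 64-bit numbers using only 32-bit operations, the way a 32-bit machine has to do it under the hood (Python’s own integers are arbitrary-precision, so the 32-bit masks are simulating the constraint):

```python
MASK32 = 0xFFFFFFFF  # low 32 bits

def add64_on_32bit(a, b):
    """Add two 64-bit values as two 32-bit 'words' with a manual carry."""
    lo = (a & MASK32) + (b & MASK32)    # add the low words
    carry = lo >> 32                    # did the low add overflow 32 bits?
    hi = (a >> 32) + (b >> 32) + carry  # add the high words plus the carry
    return ((hi & MASK32) << 32) | (lo & MASK32)

a, b = 10_000_000_000, 5_000_000_000    # both too big for 32 bits
assert add64_on_32bit(a, b) == a + b
print(add64_on_32bit(a, b))             # 15000000000
```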