8-bit, 16-bit - What do these mean?

Either you are talking about the size of integers or floating-point numbers, in which case that is only part of the whole answer, or you are talking about numbers in the most abstract way, in which case your answer will not be constructive.

To repeat. Regarding video consoles, it’s about the number of bits that can be read from memory in a single clock cycle. Regarding color, it’s the number of bits used to represent a single pixel. Regarding “64-bit computing,” it’s the number of bits used in a single integer variable, and hence the number of bits used to construct a memory address. Regarding Intel Core 2’s new 128-bit-wide floating-point data paths, it’s the number of bits that can be moved between registers and into the arithmetic units in a single cycle.
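
To make the “64-bit computing” sense concrete, here is a minimal C sketch. The output assumes a typical LP64 system (e.g., 64-bit Linux); the sizes are ABI-dependent, and note that `int` usually stays 32 bits even on 64-bit machines:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* On a typical 64-bit ABI (e.g., LP64), pointers are 8 bytes, so a
     * memory address is a 64-bit quantity.  "int" often stays 32 bits
     * even on 64-bit systems -- "64-bit" is about addresses and the
     * native integer registers, not every integer type. */
    printf("int      : %zu bits\n", sizeof(int) * 8);
    printf("long     : %zu bits\n", sizeof(long) * 8);
    printf("void *   : %zu bits\n", sizeof(void *) * 8);
    printf("uint64_t : %zu bits\n", sizeof(uint64_t) * 8);
    return 0;
}
```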

Actually, regarding consoles the marketing number may sometimes be a mix of those real measurements.

What limits bitness in an architecture is one of two things: the difficulty of placing and routing wires (on the PCB when talking about memory, but also on the chip when moving data from one place to another), or the difficulty of constructing a multiplier/divider/etc. that can deal with large numbers (when talking about numeric data types).

Placing and routing wires is usually the dominant concern for all kinds of hardware-related bit measurements, especially in the modern age. But the wires may relate to a lot of different things, serve many different functions, and have all sorts of implications.

Almost always, bitness is irrelevant. It is the significance for the wider architecture that matters (i.e., how much bandwidth you have, how much memory you can address, what the cost-benefits are for implementing some particular thing). That is always the case with engineering, but here especially.

You are correct. I meant to say an 8-bit register has 8 flip-flops.

I’m not the only one who is incorrect, though. Those saying that 64-bit computing refers to the memory bus are incorrect as well. I remembered reading in my Sledgehammer programmer’s manual that it is less than that. I don’t have the manuals here, but I did manage to find an article on the Internet.

Even including virtual memory, you don’t have 64 bits of memory addressing.

We are getting deep into the weeds here. There is a difference between the size of an address and how much memory a processor can access. In the case you cited, the Athlon used (or was able to use) 64-bit addresses; i.e., it could load or store to an address range from 0x0000000000000000 to 0xFFFFFFFFFFFFFFFF. The actual physical memory that the processor could accommodate was only 2^40 bytes. From an application programmer’s standpoint, however, it would appear that the full 64-bit address range is available.

The description of virtual memory in the article you quoted is not particularly precise. VM may or may not be associated with a swap file on disk; in real-time systems it often isn’t. The implication of virtual memory is that the addresses used in CPU instructions are “virtual” (i.e., pretend) and are translated to physical addresses through page tables, which map virtual addresses to physical addresses. An example might make this clear. Imagine a program written so that it thinks it starts at address 0x0. Without virtual memory you could only have one instance of that application running. With virtual memory you can have multiple applications running at the same time that all think they are accessing 0x0. The virtual memory subsystem will give each of them a unique range of physical memory to run in.
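
If a sketch helps: here’s a toy, single-level page table in C. Real MMUs use multi-level tables (four or five levels on x86-64), but the translation idea is the same. The sizes and table layout below are made up for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy single-level page table: 4 KiB pages, tiny 16-page virtual space.
 * Split the virtual address into (page number, offset), look the page
 * number up in the table, and splice the offset onto the physical frame. */
#define PAGE_SHIFT 12                 /* 4096-byte pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16

typedef struct {
    int      present;                 /* 0 => access raises a page fault */
    uint32_t frame;                   /* physical frame number           */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Returns -1 on "page fault", otherwise the physical address. */
long translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return -1;                    /* the OS would handle the fault here */
    return ((long)page_table[vpn].frame << PAGE_SHIFT) | offset;
}

int main(void) {
    /* Two processes can both "start at 0x0": just map their page 0
     * to different physical frames. */
    page_table[0].present = 1;
    page_table[0].frame   = 42;
    printf("vaddr 0x0123 -> paddr 0x%lx\n", (unsigned long)translate(0x0123));
    printf("vaddr 0x5123 -> fault? %s\n", translate(0x5123) < 0 ? "yes" : "no");
    return 0;
}
```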

I knew my “simple answer” would not satisfy you geeks out there. If anyone has noticed, the OP hasn’t been back, probably overwhelmed by the answers. :smiley:

I thought your simple answer was actually pretty good for the audience asking the question.

I know of too many processors with different data bus / address bus / register sizes, etc., so I think it would be hard to have one simple answer. Here is a question for you techies: was the 8088 8-bit or 16-bit? How about the 6809?

The register for memory addressing may be 64 bits, but what good does that do if the bus can only support 48 bits in paging mode? Why would you use a value greater than 2^48? What would happen if you did? I actually know little about 64-bit assembly programming. I really wish I had my architecture guide here.
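
For what it’s worth, on x86-64 the hardware answers the “what would happen” question directly: with 48-bit virtual addressing, a 64-bit address must be “canonical” (bits 63:48 must be copies of bit 47), and dereferencing a non-canonical address raises a general-protection fault. A minimal C sketch of the canonicality test:

```c
#include <stdint.h>
#include <stdio.h>

/* On x86-64 with 48-bit virtual addresses, bits 63:48 must be copies
 * of bit 47 ("canonical form").  Loading or storing through a
 * non-canonical pointer faults, so the "extra" 16 bits of the register
 * are not usable as address space. */
int is_canonical48(uint64_t vaddr) {
    /* Sign-extend from bit 47 and compare with the original. */
    int64_t sext = (int64_t)(vaddr << 16) >> 16;
    return (uint64_t)sext == vaddr;
}

int main(void) {
    printf("%d\n", is_canonical48(0x00007fffffffffffULL)); /* 1: top of lower half    */
    printf("%d\n", is_canonical48(0xffff800000000000ULL)); /* 1: bottom of upper half */
    printf("%d\n", is_canonical48(0x0001000000000000ULL)); /* 0: non-canonical        */
    return 0;
}
```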

The memory bus sees physical addresses; i.e., after they have been translated by the memory management subsystem.

There is a piece of hardware called the TLB that translates between the 64-bit address that the program works with and the actual physical address. Part of the operating system’s job is managing that lookup table. When a section of memory hasn’t been used in a while, the operating system will find a space on the disk drive to put that memory, and record where it put it. If a program tries to access a piece of memory that has been saved to disk, the TLB generates a signal called a page fault that suspends the application and wakes up the operating system. The operating system finds out where it had put that piece of memory, finds a space in physical memory for it, puts the translation from the virtual address to the physical address in the TLB, and returns control to the program.

Actually, it’s a little more complicated than that, because the TLB isn’t big enough to hold all of the possible translations, so a page fault can occur on memory that is actually in physical memory. The operating system has a locked-down piece of physical memory that it uses to quickly assess whether a page fault has occurred on a virtual address whose data is in physical memory or whether the data has been paged out to disk.
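
That description matches a software-managed TLB (as on MIPS); on x86 the hardware walks the page tables itself on a TLB miss. Either way, the TLB is just a small cache of translations. A toy fully-associative TLB in C, only to show the hit/miss split:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy fully-associative TLB.  A hit avoids the page-table walk entirely;
 * a miss falls back to the table -- and only *then* can a true page
 * fault (data actually on disk) be discovered. */
#define TLB_ENTRIES 8

typedef struct {
    int      valid;
    uint32_t vpn;    /* virtual page number   */
    uint32_t frame;  /* physical frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static unsigned next_victim;          /* dumb round-robin replacement */

int tlb_lookup(uint32_t vpn, uint32_t *frame_out) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *frame_out = tlb[i].frame;
            return 1;                 /* TLB hit */
        }
    }
    return 0;                         /* TLB miss: walk the page table */
}

void tlb_fill(uint32_t vpn, uint32_t frame) {
    tlb[next_victim] = (tlb_entry_t){ .valid = 1, .vpn = vpn, .frame = frame };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
}

int main(void) {
    uint32_t frame = 0;
    tlb_fill(7, 123);
    if (tlb_lookup(7, &frame))  printf("vpn 7: hit, frame %u\n", frame);
    if (!tlb_lookup(9, &frame)) printf("vpn 9: miss, walk the page table\n");
    return 0;
}
```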

It is purely marketing, but could it refer to the size of the processor in the system, which in turn determines the memory bus width?


Interesting note: memory width, which once scaled rapidly, has hit a ceiling. The problem is that the memory bus isn’t on a silicon chip but on a printed circuit board, and there is no Moore’s law for PCBs. Technology to increase the number of wires that can go to or from a chip does not advance quickly and is limited by mechanical fabrication techniques (none of this shine-lasers-onto-silicon business). 256 bits, which has been the standard for high-end video cards for the past three or four years, is almost as wide as is practical. You’ll notice that 128-bit and 64-bit datapaths are still often used because their cost has simply not come down by much. CPUs (always lagging behind GPUs in terms of memory) currently typically use 128 bits (“dual-channel”) but will move to 256 bits within a year (AMD’s quad-core architecture has quad channels).

512 bits may yet appear in consumer video cards; this year it made an appearance in “GPU stream processors” and workstations, but 1024 is almost definitely out. 2048… no way. Just too expensive. Of course, frequencies continue to go up (especially for signals; processor frequencies have kind of stood still lately), and overall memory bandwidth will continue to increase for some time. In the long term, however… yeah, there might be problems, and more and more memory will eventually migrate onto CPUs/GPUs. However, it’ll be using transistors (six for every bit, to be precise) which could have been spent on logic.
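
Since bandwidth is what the width actually buys you, here’s the back-of-envelope arithmetic in C. The bus widths and transfer rates below are illustrative, not specs for any particular part:

```c
#include <stdio.h>

/* Peak bandwidth = (bus width in bytes) x (transfers per second). */
double peak_gbps(unsigned bus_bits, double mega_transfers) {
    return (bus_bits / 8.0) * mega_transfers * 1e6 / 1e9;  /* GB/s */
}

int main(void) {
    printf("128-bit @  800 MT/s : %5.1f GB/s\n", peak_gbps(128, 800));
    printf("256-bit @  800 MT/s : %5.1f GB/s\n", peak_gbps(256, 800));
    printf("256-bit @ 1600 MT/s : %5.1f GB/s\n", peak_gbps(256, 1600));
    return 0;
}
```

Which is why rising frequencies can keep bandwidth growing for a while even after width stops scaling: doubling either factor has the same effect on the product.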


Well, I’m not sure PCB density strictly follows Moore’s law, but the smaller and smaller size of discretes and ever more routing layers on the board have helped. Actually, though, we’re moving away from increasing signal bandwidth by increasing the number of signals. (OP, avert your eyes :slight_smile: ) Synchronizing high-speed signals is nearly impossible, and there are tons of signal integrity issues. What is now increasingly common is the use of SERDES buses, which serialize a multi-bit signal into one bit, send it at very high speed, and deserialize it on the other side. The clock can be recovered from the signals. This can go so much faster than parallel buses that it makes sense. Some processors now pretty much only have SERDES buses, which causes all sorts of problems I won’t get into. So it is not always true that an m-bit bus has less bandwidth than an n-bit bus for n > m.
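
To make “serialize / deserialize” concrete, here’s a toy C version of the data-path half. Real links also line-encode the stream (e.g., 8b/10b) so the receiver can recover the clock from bit transitions; that part is omitted:

```c
#include <stdint.h>
#include <stdio.h>

/* Ship a 32-bit word over a 1-bit "lane", MSB first. */
void serialize(uint32_t word, int lane[32]) {
    for (int i = 0; i < 32; i++)
        lane[i] = (word >> (31 - i)) & 1;
}

uint32_t deserialize(const int lane[32]) {
    uint32_t word = 0;
    for (int i = 0; i < 32; i++)
        word = (word << 1) | (uint32_t)lane[i];
    return word;
}

int main(void) {
    int lane[32];
    serialize(0xDEADBEEF, lane);
    printf("0x%08X\n", deserialize(lane));  /* prints 0xDEADBEEF */
    return 0;
}
```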

This is a bit inaccurate. In fact, very inaccurate. While it may be convenient to describe a flop as made out of gates, in reality they are heavily optimized transistor-level designs, possibly with some tristate gates in some cases. They are used so often that there are specialized groups who develop cell libraries optimized for area or power.

The first diagram shows an RS flop. I don’t think I’ve seen anyone actually use one of these in 20 years. They are conceptually simple, so they show up in classes, but they have a problem with illegal input states, as mentioned in the text.

The second diagram is misleading also. Most flops change on the rising (or falling) edge of the clock, which eliminates the problem mentioned. Latches, which capture while the clock is at a level rather than on an edge, do have this problem. Sometimes logic gets put between the positive and negative latches, since values can propagate through the logic when the clock is 1, which lets the design do useful work on both clock values.
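
A quick software caricature of that edge-versus-level distinction (a C sketch, obviously not how you’d model real hardware):

```c
#include <stdio.h>

/* Edge-triggered D flip-flop: samples D only on the rising clock edge. */
typedef struct { int q, prev_clk; } dff_t;

int dff_tick(dff_t *ff, int clk, int d) {
    if (clk && !ff->prev_clk)    /* rising edge: capture */
        ff->q = d;
    ff->prev_clk = clk;
    return ff->q;
}

/* Transparent latch: passes D through whenever the clock is high. */
int latch_tick(int *q, int clk, int d) {
    if (clk)
        *q = d;
    return *q;
}

int main(void) {
    dff_t ff = {0, 0};
    int lq = 0;
    /* D changes while the clock is high: the latch follows it,
     * the flip-flop keeps the value sampled at the edge. */
    int clk[] = {0, 1, 1, 0}, d[] = {1, 1, 0, 0};
    for (int i = 0; i < 4; i++)
        printf("clk=%d d=%d  dff=%d  latch=%d\n", clk[i], d[i],
               dff_tick(&ff, clk[i], d[i]), latch_tick(&lq, clk[i], d[i]));
    return 0;
}
```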

BTW, register files, while represented as a chain of flops, might also be optimized.

Alex implied this was wrong. It is wrong. A byte, by definition, is 8 bits. Bits do not refer to the number of digits; a bit is the basic unit of information, 1 or 0. The size of a word depends on the architecture of the computer and is not standardized the way a byte is.

That is not true. Most (if not all) computers today use 8-bit bytes, but historically there have been many machines with byte sizes other than 8 bits. An octet is 8 bits by definition. I think the best definition of a byte is the smallest addressable unit of memory on a computer.

But nobody uses that definition of a byte anymore. Currently in the industry a byte is 8 bits even if the processor accesses things 16 or 32 bits at a time.

So, is everything clear yet?

I may be overly picky on this because I’m a long-time computer geek going back to the days of the punch card. I did a quick Google search, and about 3 out of 4 sites define byte as 8 bits; the rest say something like “typically 8 bits.” Saying a byte is defined as 8 bits is probably safe. I think there are two important things to keep in mind about a byte: 1) a byte is the unit of storage used to hold a character (usually), and 2) a byte is the smallest addressable unit of computer memory.
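
Amusingly, the C language hedges in exactly the same way: sizeof counts in bytes, a byte is the width of char, and CHAR_BIT is required to be at least 8 but not exactly 8, precisely because of those historical machines:

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* C defines a byte as the width of char: the smallest addressable
     * unit.  CHAR_BIT must be >= 8 and is exactly 8 on virtually every
     * modern machine, but the standard deliberately doesn't fix it. */
    printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);
    printf("sizeof(char) is always  : %zu byte\n", sizeof(char));
    return 0;
}
```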

16-bit, because it had 16-bit registers and you didn’t need to know or care that the path to RAM was 8-bit. The 8088 could run 8086 software completely unmodified, except it was slower due to having to break every 16-bit value moved to or from RAM into two 8-bit pieces.

According to Wikipedia, it had 16-bit registers and a 16-bit ALU. I’m going to say it’s 16-bit, because even though the accumulator could be split into two 8-bit halves you could do 16-bit math in a single opcode. You could also split most of the 8086’s 16-bit registers into 8-bit halves, but nobody could reasonably call the 8086 an 8-bit chip.
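
If it helps make “split into two 8-bit halves” concrete, that register aliasing is easy to mimic in C with masks and shifts. Here’s the 8086’s AX/AH/AL case (AH and AL are the real high and low halves of AX; the helper names are mine):

```c
#include <stdint.h>
#include <stdio.h>

/* The 8086's 16-bit AX register is also addressable as two 8-bit
 * halves, AH (high) and AL (low).  Emulating that aliasing: */
uint8_t  get_al(uint16_t ax)             { return (uint8_t)(ax & 0x00FF); }
uint8_t  get_ah(uint16_t ax)             { return (uint8_t)(ax >> 8); }
uint16_t set_al(uint16_t ax, uint8_t al) { return (uint16_t)((ax & 0xFF00) | al); }
uint16_t set_ah(uint16_t ax, uint8_t ah) { return (uint16_t)((ah << 8) | (ax & 0x00FF)); }

int main(void) {
    uint16_t ax = 0x1234;
    printf("AX=%04X AH=%02X AL=%02X\n", ax, get_ah(ax), get_al(ax));
    ax = set_ah(ax, 0xAB);       /* 8-bit write to the high half...   */
    ax = (uint16_t)(ax + 1);     /* ...but 16-bit math still works    */
    printf("AX=%04X\n", ax);     /* AB35 */
    return 0;
}
```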

I tend to agree with your logic. It’s far more important for me to know the logical limits of the architecture/registers/opcodes vs the physical attributes of the bus.

I heartily agree, especially when the same ISA is implemented by multiple slightly different physical machines to hit different points of the price-performance curve. (IBM is/was famous for doing this: they invented the concept of an ISA not defined by a single kind of machine so they could make and sell multiple different kinds of the wildly successful System/360 mainframe. Code compiled for one kind of 360 was binary-portable to all models of the series, albeit with different speed characteristics on each.)

And registers are only part of the equation: it’s more interesting to look at how large an address you can form without using segmentation or (especially) bank switching. That goes a long way toward determining what the machine can practically be used for, by determining how much RAM people are likely to be able to actually use. Since the 8088 and the 6809 could both form 16-bit addresses without segmentation, they’re rightfully considered 16-bit chips.
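
For reference, the segmentation being set aside there: the 8088/8086 builds its 20-bit physical address from a 16-bit segment and a 16-bit offset, so within a segment you only ever form 16-bit addresses. In C:

```c
#include <stdint.h>
#include <stdio.h>

/* 8088/8086 real-mode address formation:
 * physical = segment * 16 + offset, giving a 20-bit (1 MiB) space,
 * even though every register involved is only 16 bits wide. */
uint32_t phys_8088(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* F000:FFF0 is the famous reset vector at the top of the 1 MiB space. */
    printf("F000:FFF0 -> %05X\n", phys_8088(0xF000, 0xFFF0));  /* FFFF0 */
    return 0;
}
```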