Computer "bits" and memory.

Yup, this is the single biggest win. It accounts for up to a doubling in performance all by itself. (Depending, as usual, on exactly what you are doing; easily a 50% improvement on many codes.) Where it loses is that addresses have doubled in width, which means your pointers are twice as big, so the cache doesn’t hold as much program data - and that can be important, so you lose speed there. This is enough of a problem that there is now an x32 ABI under development: 64-bit data and all 16 of the 64-bit registers, but addresses restricted to 32 bits, with the compilers storing only the low 32 bits of each address. So you get all the advantages of the x86-64 ISA while keeping the advantage of the tighter memory footprint of 32-bit addresses. The reality is that the vast majority of codes don’t, and never will, need more than a 32-bit address space.
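
To make the cache-footprint point concrete, here’s a minimal C sketch (my own illustration, not from any ABI document): build it as plain x86-64 and, where the toolchain supports it, with GCC’s -mx32 flag, and the pointer-heavy struct roughly halves in size.

```c
#include <stdio.h>

/* A pointer-heavy node, as in a typical linked list or tree.
   Under LP64 (plain x86-64) each pointer is 8 bytes; under the
   x32 ABI (gcc -mx32) or plain 32-bit they are 4 bytes, so the
   node shrinks and twice as many fit per cache line. */
struct node {
    struct node *next;
    struct node *prev;
    int          value;
};

int main(void) {
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}
```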

Actually that’s the exact definition we got in my computer hardware course when I was getting my CS degree. For example, we did assembly on a 68000 emulator, and that chip was described as 16-bit because it had a 16-bit data bus. (Although apparently the ALU is 16-bit, and it does 32-bit arithmetic by handling the lower 16 bits and then the upper 16.) Of course this was like 20 years ago, so I might not be totally clear.
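
In case it’s not obvious how a 16-bit ALU manages 32-bit adds, here’s a rough C sketch of the two-pass idea (illustrative only; the real 68000 does this internally, not in software):

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* 32-bit add built from two 16-bit ALU passes: add the low halves
   first, then feed the carry into the add of the high halves. */
static uint32_t add32_via_16(uint32_t a, uint32_t b) {
    uint32_t lo = (a & 0xFFFFu) + (b & 0xFFFFu);      /* pass 1            */
    uint32_t hi = (a >> 16) + (b >> 16) + (lo >> 16); /* pass 2, add carry */
    return (hi << 16) | (lo & 0xFFFFu);
}

int main(void) {
    /* 0x0001FFFF + 1 = 0x00020000, exercising the carry between passes */
    printf("0x%08" PRIX32 "\n", add32_via_16(0x0001FFFFu, 1u));
    return 0;
}
```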

There is no specific definition of what bit width indicates. It means different things in different contexts, and there is no privileged context.

Again, context. When discussing bus width and addressing, you are referring to the largest address which can be placed on the bus at one time. That’s all. There is a difference between logical and physical addressing at many different levels, and your individual perspective doesn’t make any of them the proper one. So stop making those baseless assumptions.

Here’s what John R. Mashey has to say about ‘bittedness’ in computer architecture:


historically, when people called a CPU "N-bits",
the "N" normally meant the width the regular integer registers,
        NOT the width of floating-point or other special registers
        NOT the width of external busses
        NOT the width of other special datapaths
        NOT the size of instructions

Occasionally, as in the Intel i860 (where a 32-bit CPU with a 64-bit bus
was labeled a 64-bit CPU), or in games/graphics devices labeled as 128-bit
devices (that normally incorporate 32- or 64-bit CPUs using the historical
definitions), marketing gets the upper hand, at which point some people
declare that the terminology is confused.  As far as I can tell, people who
actually design CPUs rarely are confused about "bittedness", and marketing
diversions have been relatively rare.


Given that Mashey helped design MIPS and is one of the founders of the SPEC benchmarking group, I’m inclined to believe him.

True for the OS, but (as I understand it) applications could only ever use 32-bit pointers, so they were limited to the 4 GB address space.

I agree with TriPolar that the term, when unqualified, is ambiguous.

But I’ll agree with Derleth that the ALU bit width, together with the width of the general-purpose registers, is what’s most commonly meant when the term is unqualified. (Chips where the ALU width doesn’t match the GP register width are oddballs we needn’t be concerned with here. I can’t think of any off the top of my head, but I’m sure there’s an exception to every generalization about CPU architecture!)

However, a CPU is made up of many parts, each of which can have a different width, so to say “CPU bit width of xxx” is to be ambiguous.

Derleth is also correct that the big difference (address-wise) with 64-bit architectures is that a single program can directly access over 4 GB of data. Even the OS on a 32-bit machine can’t give one program that much, but it can set up a number of programs to run concurrently that together use more than 4 GB.

Bit width can be ambiguous. For example, the IBM 370/145 had almost all its registers and data paths 32 bits wide, but the bus from main memory was 64 bits, with half the data discarded in almost every (but not every!) case. Its ALU was only 16 bits, but it did 32-bit arithmetic by simply cycling the ALU twice in one instruction cycle. (The ALU handled only 8 bits per instruction cycle when doing Boolean ops: the data was duplicated and the two halves of the ALU output compared for error checking. :eek:)
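
A rough C sketch of that duplicate-and-compare trick (my own illustration with hypothetical names; in plain software the two results can only differ under fault injection, whereas the 370’s duplicated circuits caught real hardware errors):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: run the same 8-bit Boolean op through "both
   halves" of the ALU and trap if the outputs disagree. In real
   hardware the halves are separate circuits, so a mismatch means
   a circuit-level fault; here the two lines are just twins
   (volatile keeps the compiler from folding the check away). */
static uint8_t checked_and(uint8_t a, uint8_t b) {
    volatile uint8_t half0 = a & b; /* one ALU half        */
    volatile uint8_t half1 = a & b; /* the duplicated half */
    if (half0 != half1) {
        fprintf(stderr, "ALU mismatch - machine check\n");
        abort();
    }
    return half0;
}

int main(void) {
    printf("0x%02X\n", checked_and(0xF0u, 0x3Cu)); /* prints 0x30 */
    return 0;
}
```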

Totally did that, although I will provide no evidence of it whatsoever. Therefore you have nothing to lose by spilling all the beans :wink:

Some GPUs use data bus widths as large as 512 bits, although they are still only 32 bits internally (note that the article erroneously ends by calling it a “512 bit GPU”).

Also, the number of actual address lines isn’t necessarily what you might expect. A CPU with a 64-bit data bus accesses 8 bytes at once, which takes care of the lowest 3 address bits, so to address 4 GB you only need 29 physical address lines. The same is true of a RAM module with a data bus multiple bytes wide (64 bits for DDR, with modules often combined so that the total bus width is larger).
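
The arithmetic, spelled out in a throwaway C snippet (nothing here is from a datasheet; it’s just the powers of two):

```c
#include <stdio.h>

int main(void) {
    unsigned long long space = 1ULL << 32;  /* 4 GB address space     */
    unsigned long long bus   = 8;           /* 64-bit bus = 8 bytes   */
    unsigned long long words = space / bus; /* addressable bus-words  */

    unsigned lines = 0;                     /* lines = log2(words)    */
    while ((1ULL << lines) < words)
        lines++;

    printf("8-byte words in 4 GB: %llu\n", words); /* 536870912 = 2^29 */
    printf("address lines needed: %u\n", lines);   /* 29               */
    return 0;
}
```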