Another Binary Question...

Ok, if a 1 or 0 can represent something, then why can’t a 2, 3, 4 etc? Whatever interprets 1’s and 0’s…can’t it decode 0-9 values, let’s say? I don’t get it.

Also, when is hexadecimal used vs. binary? And what’s wrong with just plain decimal??? - Jinx :confused:

Of course they can. 1’s and 0’s are just ways of representing ON and OFF. They could be represented as 9 and Z or 4 and P.

We can use whatever symbols we want to represent ON and OFF, but 0 and 1 seem to work well. People don’t usually type stuff in binary anyway, so the ones and zeros are just our way of representing what’s physically stored in RAM or on the hard drive.

What if C-A-T was really pronounced dog?

Frankly, a computer deals with 0s and 1s better, and quite fast. The basic “switch” is either a 0 or a 1, and the switch is fundamental to the development of logic gates (such as AND, OR, NOT, etc.). Logic gates are the most basic decision-making parts of a computer.
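
If it helps to see that concretely, here’s a toy sketch in Python (my own illustration, not how a real chip is built): each gate is just a function of 0s and 1s, and wiring a couple of gates together already does arithmetic.

```
# Toy illustration: logic gates as functions of 0s and 1s.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a
def XOR(a, b): return a ^ b

# Wiring two gates together already adds one-bit numbers (a "half adder"):
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1) -> carry 1, sum 0, i.e. binary 10 = 2
```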

> What’s wrong with just plain decimal?

Well, for one thing, decimal is a pretty kludgy system. Sure, it works fine for humans, but it is by no means unique, and in fact from a numerical perspective, it probably would have been better if humans had evolved with twelve fingers instead of ten, in which case we probably would have devised a base 12 system, which is better. Why is a base 12 system better? For one, 12 has more integer divisors than 10 (2, 3, 4, and 6 -vs- 2 and 5). Base sixteen (hexadecimal) is also pretty decent (with factors 2, 4, and 8), and it is also a power of 2, which makes it a nice compact alternative to binary (which is why it is a standard base for programmers).
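
If you want to check the divisor claim yourself, a couple of lines of Python will do it (just an illustration):

```
# Count the divisors (other than 1 and the base itself) of a few candidate bases.
for base in (10, 12, 16):
    divisors = [d for d in range(2, base) if base % d == 0]
    print(base, divisors)
# 10 [2, 5]
# 12 [2, 3, 4, 6]
# 16 [2, 4, 8]
```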

-Tofer

It’s easier to get electronic circuits to react to a HIGH or LOW, or ON or OFF than to decide among 2, 3, 4 or 10 choices. It also means less ambiguity (“was that a five or a six?”) if the signal fades over distance.

Hexadecimal is just a method of grouping binary digits together. If you have four bits, each of which can be ON or OFF, you have 16 possibilities. The hexadecimal scheme just assigns one arbitrary symbol to each of these values, and for simplicity these symbols are 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E and F. You could design your machine to ignore any result beyond 9 and keep it all “decimal”, but this would represent a significant waste of potential, much like disabling the sixth cylinder in a car because 5 seems like a “nicer” number.
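
For anyone who wants that mapping spelled out, here’s a quick Python sketch (my own, purely for illustration) listing all sixteen 4-bit patterns next to their hex digit:

```
# All sixteen patterns of four bits, each paired with its single hex digit.
for value in range(16):
    print(format(value, '04b'), '->', format(value, 'X'))
# 0000 -> 0, 0001 -> 1, ... 1110 -> E, 1111 -> F
```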

The reason for using hexadecimal digits (as it were) instead of plain binary is because FF is easier to handle than 255 ones. Note that the computer always works in binary. The hexadecimal digits are for the programmer’s convenience and are patiently translated by the computer into binary for its own use.

It’s not just on/off that benefits computation; you bring all of symbolic logic into the fold by interpreting the signals as true/false.

There has been some work done on trinary computers; it supposedly lends itself to a more efficient encoding scheme. It’s been a while since I read anything about it, and that was little more than a cursory peek, but it shouldn’t be too difficult to google.

All the other answers are correct, but just to be clear, yes we could design a computer using any base we wanted, not just base 2 (binary). Base 2 just happens to be easiest because all a digital computer is is a bunch of switches, like the light switch on your wall. When the switch is off, we call that “0”. When the switch is on, we call that “1”. Everything a digital computer does is just millions of switches turning off and on, thus binary is the language of computers.

Eight ones. Not 255.

We like hex (base 16) for the same reason we used to like octal (base 8): A machine word of the commonest length can be represented with a few digits of that base with nothing left over. In the PDP-10 culture, where people dealt with 36-bit machine words, octal was used because each octal digit represents three bits. In cultures that used eight-bit bytes, beginning with the IBM System/360 and continuing with microcomputers, hex came into widespread use because each hex digit represents four bits.

(Why don’t we use base 256 if that’s how many values a byte can take? Because having to memorize 256 symbols and values would be a royal pain, even for a geek.)

Hex (or octal) is used when humans have to deal with machine code, because it’s easier for humans to see what’s going on instead of having to count bits. 0x80 makes more sense to me than 0b10000000, even though both represent the same value. Decimal is used when humans don’t care what the number looks like to the machine and just want to get some math done. Decimal is ‘wrong’ because a single decimal digit doesn’t map to some nice number of bits, because 10 isn’t a power of 2.
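
Most languages will happily confirm those are the same number; in Python, for instance:

```
# Hex, binary, and decimal literals are just three spellings of the same value.
print(0x80 == 0b10000000 == 128)   # True
print(hex(128), bin(128))          # 0x80 0b10000000
```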

The real reason why computers like binary is because they tend to melt when you give them anything more than 5 volts. There can be fluctuations in voltage, and you absolutely don’t want ambiguous values coming through, so you need a bit of room separating your characters. I believe the standard is that 0-1.2V represents 0, while 3.8-5V represents 1.

You’re right, of course. I slipped into unary for a moment, there.

Some early computers, back in the era of vacuum tubes and discrete transistors, used decimal. Although the engineers and users may have been more comfortable with a decimal (BCD) architecture, the result was a computer that was slower, less reliable and more expensive than a pure binary architecture. In general, a binary design is faster and uses fewer parts. The advantage of decimal is the ease with which it can be interfaced to character-oriented I/O devices, like punch card readers and line printers. This was an important consideration when logic circuits were expensive and most I/O devices were “dumb”.

See http://en.wikipedia.org/wiki/IBM_1620 for an interesting example of a decimal computer.

[pointless nitpick]Hexadecimal is a number base in its own right, as complete and valid as binary, decimal or any other base. Hexadecimal, in its common implementation in computing, is a convenient method of representing groups of binary digits.

There have been trinary logic computers built and, in theory, trinary logic should be slightly more efficient than binary logic, but it turns out to be a bitch to program and was generally not used.

One factor is that if you have a 2 input, 1 output logic gate, then there are 4 permutations of inputs so 2^4 or 16 possible logic gates. Of these, 6 are trivial (0, 1, A, B, ~A, ~B), 6 are used (AND, OR, XOR, NAND, NOR, XNOR) and 4 are not used (A AND ~B, ~A AND B, A OR ~B, ~A OR B). If you have trinary logic, then there are 9 permutations of inputs and 2^9 or 512 possible logic gates, many of which are non-trivial. This means a much more complicated job designing the physical circuits.

Duh, I mean 3^9 or 19,683 possible logic gates.
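
For anyone checking the arithmetic: a two-input gate is just a table assigning an output to every possible input pair, so the count is (number of output levels) raised to the (number of input pairs). A quick sketch in Python (illustration only):

```
# Number of distinct two-input gates = levels ** (levels ** 2)
def gate_count(levels):
    input_pairs = levels ** 2          # 4 input pairs for binary, 9 for trinary
    return levels ** input_pairs       # each pair can map to any output level

print(gate_count(2))   # 16
print(gate_count(3))   # 19683
```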

This is really the key. It’s convenient for us to use. Hexadecimal ends up making your life a lot easier if you deal with bits and bytes on most computers. With hexadecimal, every digit corresponds exactly to four bits. If the first digit is a 3, you know the first four bits are always 0011.

Octal (base 8) is also used, though not as commonly as hexadecimal. Every octal digit corresponds to three bits, so it directly translates to binary as easily as hexadecimal. Octal used to be commonly used with 24-bit computers, which also aren’t very common any more.
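
Here’s the digit-to-bits correspondence in action, picking up the “first digit is a 3” example above (Python, purely illustrative):

```
value = 0x3A                         # 0011 1010 in binary
print(format(value, '08b'))          # 00111010
print(format(value, '02X'))          # 3A  (3 -> 0011, A -> 1010)
print(format(value, 'o'))            # 72  (0 -> 00, 7 -> 111, 2 -> 010)
```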

Remember in the old days of DOS that there was a 640k memory limit? 655359 (640 multiplied by 1024, then subtract 1) is kind of an odd number in decimal to be some sort of boundary in the computer. Doesn’t make much sense. But if you convert the numbers to hex, it makes a lot more sense. 655359 is 9FFFF in hex. The next number is A0000. Knowing that “real mode” addressing in a DOS PC is segment:offset (the physical address is the segment times 16 plus the offset), it’s obvious that this is the boundary between the 9 segment and the A segment. Everything in a PC starts and ends on a segment boundary.

The BIOS ROM is in the F segment. It takes up addresses F0000 to FFFFF, which are nice even numbers in hex. In decimal, those are 983040 and 1048575, which are not such nice numbers at all. A modern computer running XP has the exact same memory map, except that it also has memory starting at address 100000 hex, which is 1048576 decimal. Any time you see a memory map for a computer, all of the numbers are going to be in hex, because decimal is just too much of a pain in the backside to deal with for numbers that begin and end on nice even binary values.
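
Those boundary numbers are easy to verify; here’s a quick sketch of the arithmetic in Python (just an illustration, not any real DOS code):

```
# The 640 KB limit and the real-mode segment arithmetic described above.
print(640 * 1024 - 1)                 # 655359
print(hex(655359))                    # 0x9ffff
print(hex(640 * 1024))                # 0xa0000  (start of the A segment)

# Real-mode physical address = segment * 16 + offset
segment, offset = 0xF000, 0xFFFF
print(hex(segment * 16 + offset))     # 0xfffff  (top of the BIOS ROM)
```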

Keep in mind though that the computer works in binary. It doesn’t care if we call the numbers decimal or hexadecimal or octal or whatever. Organizing them into hexadecimal just makes it easier for us engineer geeks.

Binary coded decimal (BCD), as mentioned by mks57, is still used quite a bit. Many financial computers and many calculators use it, and some industrial programmable controllers use it.

Well, if we’re going to start posting binary factoids, let me point out that the ASCII (American Standard Code for Information Interchange, as I recall) code for a capital “A” is 65, while a lower-case “a” is 97. Any particular significance? Well, they’re exactly 32 apart, and the binary representations for each are:

A (65): 01000001
a (97): 01100001

Thus, if a system is going to ignore case and treat “A” and “a” the same, it need only ignore the third bit.

[cg]
That works great for the letters; however, it would also treat the following pairs of symbols the same:
’ ’ (space) and nullchar (ascii 0, often used to mark the end of string variables in many programming languages.)
@ and `
[ and {
^ and ~

’*’ (asterisk, ascii 42) and linefeed (ascii 10, often signals an ‘enter’ keystroke/end of line, either alone or after ascii 13)

[/computer geek]
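
A small Python sketch tying the last two posts together (illustrative only; the `to_lower` helper is just made up for the example): flipping the 32s bit maps ‘A’ to ‘a’, but applied blindly it also maps the punctuation pairs listed above onto each other, which is why real case-folding checks that the character is actually a letter first.

```
CASE_BIT = 0x20                      # the "third bit" mentioned above (value 32)

print(chr(ord('A') | CASE_BIT))      # 'a'
print(chr(ord('[') | CASE_BIT))      # '{'  -- the caveat: non-letters get mangled too

def to_lower(c):
    # Only fold characters that are actually uppercase letters.
    return chr(ord(c) | CASE_BIT) if 'A' <= c <= 'Z' else c

print(to_lower('A'), to_lower('['))  # a [
```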

Another fun aspect to this is simple signaling. All data transmission (except for quantum and optical) is predicated on one system setting a voltage level on a wire and another system reading that voltage level. It is absolutely possible to rig a system where a single wire could have 2, 4, 8… different discrete voltage levels going across it to take the place of 3 lines or three separate sequential signals to represent those 8 possible values. But signaling from one state to another (say a 0 to a 7) could electrically be very difficult without accidentally triggering an intermediate state (say a 2). Most electrical devices have a built-in capacitance that makes them hold and dissipate electrical charge over a period of time rather than being at one voltage and then instantaneously at another. The larger the spread (i.e. the more discrete states one signal can have), the more likely it would be prone to those kinds of errors.
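
To make the noise-margin point a bit more concrete, here’s a rough Python sketch (purely illustrative numbers, assuming an idealized 0–5 V swing with evenly spaced levels): the more levels you pack into the same voltage range, the less noise it takes to push a signal into the wrong bucket.

```
# Map a symbol to the centre of its voltage band, and decode a measured voltage back.
def encode(symbol, levels, vmax=5.0):
    step = vmax / levels
    return (symbol + 0.5) * step          # centre of the symbol's band

def decode(voltage, levels, vmax=5.0):
    step = vmax / levels
    return min(int(voltage / step), levels - 1)

noise = 0.4                               # the same small disturbance in both cases
print(decode(encode(1, 2) + noise, 2))    # 1 -- binary: still the right symbol
print(decode(encode(5, 8) + noise, 8))    # 6 -- eight levels: noise pushed it into the next band
```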

I wonder what James Burke of Connections would have to say about this.

It has been a long time since computer chips used 5V. The process used currently where I work has digital power from 1.05V to 1.3V. The voltage levels on the pins are higher, but it has still been many years since anything at all new used 5V, except for communicating between devices over something like USB.

Some of the flash chips are storing 2 bits per cell by using 4 voltages instead of 2.

The letters are a known range, so it’s not a problem.

It gets rather interesting when you look at quantum computing, where a qubit is both 1 and 0 simultaneously.

Quadrature Phase Shift Keying (QPSK) encodes numbers in 1 of 4 states; likewise 8PSK has 8 numbers to choose from. These techniques are often used for modem comms over an analogue connection.
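
A hedged sketch of the idea in Python (not any actual modem implementation; the phase table is just one common Gray-coded layout): QPSK takes bits two at a time and picks one of four carrier phases, so every symbol on the line carries two bits.

```
# One possible mapping of bit pairs to four phases, 90 degrees apart (Gray-coded).
PHASES = {'00': 45, '01': 135, '11': 225, '10': 315}

def qpsk_symbols(bits):
    pairs = [bits[i:i+2] for i in range(0, len(bits), 2)]
    return [PHASES[p] for p in pairs]

print(qpsk_symbols('00101101'))   # [45, 315, 225, 135] -- four symbols carry eight bits
```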