Computers: why do they still only use binary code?

Forgive me if this sounds naive, but here’s a Q for all you computer geeks:

Everyone knows that computers read binary (base 2) code. I assume (…and maybe I’m wrong here…) this was done because primitive computers only had the ability to “read” and/or “write” ones and zeros (ons and offs, as it were).

But, hell, that was fifty years ago. Has anyone invented the technology that would enable a computer to use “ternary” (base 3) code? Or how about good ol’ decimal (base 10) code?

Binary seems very simple, very elegant, and probably pretty foolproof – but also inefficient (storage-wise) and cumbersome (calculation-wise). I assume it’s sort of like building a house with Legos – sure it can be done, but wouldn’t real bricks work better?

Now, before everyone chimes in about the cost, difficulty and undesirability of replacing an entrenched, accepted technology, be warned that that is not what this Q is all about. I just want to know if there are substantial benefits of using a higher-base code, and if anyone has ever tried it. Thanks one and all.

Maybe someone else can help fill in details, but IIRC, there are some physicists working on the concept of quantum computing. Instead of ones and zeros, the computer would identify one of 32(?) possible spin states of electrons used to carry info. I may be a bit off-base with the details here, but the upshot is that a quantum computer would be exponentially more powerful than a binary computer.

Hello all!

Individual spots in memory, called bits, are either in a state of being on or off; hence, 0 or 1. I suppose they could be in a state of neutral, negative, or positive charge, which would increase the amount of info that could be stored in a given RAM chip, but computers would need to be redesigned radically (big time).
This endeth… The lesson.

A computer’s hardware consists of transistors, which are bi-state: On or Off. This is why binary was adopted. It is the perfect system for such hardware.
Some computers and operating systems also use base 8 and 16 (octal and hexadecimal), since a physical byte contains 8 bits (each registering On or Off).
It is faster and more efficient to use binary code with low-level hardware. It bridges the gap between the hardware and the software.
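Just to make the octal/hex point concrete, here is a quick Python illustration (the value itself is arbitrary) showing that octal and hex are simply compact shorthand for groups of 3 or 4 bits:

    # The same 8-bit value written in binary, octal, and hexadecimal.
    # Octal groups the bits in threes, hex in fours -- both are just
    # shorthand for the underlying binary pattern.
    value = 0b10110101           # one byte, given as a binary literal

    print(format(value, "08b"))  # 10110101  (binary, 8 bits)
    print(format(value, "o"))    # 265       (octal)
    print(format(value, "x"))    # b5        (hexadecimal)
    print(value)                 # 181       (decimal, for reference)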

The short answer is no, there is not a lot of benefit to going to base 10. It is easy to make circuits that are on or off, but much harder to make circuits that have more than two states (on, sort-of-on, and off).

Computer programmers for the most part don’t really worry about base two; they let the compiler worry about it.

Computer designers worry about it less and less, as we now have synthesizers to make the base-two stuff for us. The people who design the gates that the synthesizers use still worry about base two, but they only design one of each of the Legos, and the synthesizer builds the house from the plans that the computer designers give it. How about that set of goofy analogies.

A while back Intel said that they had flash memory that stored more than just on and off. They were getting four states per cell: on, sort-of-on, sort-of-off, and off. This would have allowed them to double the amount of memory stored in the same amount of space. That was a few years ago and I haven’t heard anything about it since, so they must not have been able to make it cheap enough.
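To illustrate the arithmetic of that (purely a sketch, with made-up level names and no connection to Intel’s actual design): four distinguishable levels per cell means each cell carries two bits, so the same number of cells stores twice the data.

    # Sketch: a 4-level cell stores 2 bits, so 4 cells hold what would
    # otherwise need 8 single-bit cells. Level names are illustrative.
    LEVELS = {0b00: "off", 0b01: "sort of off", 0b10: "sort of on", 0b11: "on"}

    def pack_bits_into_cells(bits):
        """Group a bit string into 2-bit chunks, one chunk per 4-level cell."""
        return [int(bits[i:i+2], 2) for i in range(0, len(bits), 2)]

    data = "10110100"                   # 8 bits of data
    cells = pack_bits_into_cells(data)  # only 4 multi-level cells needed
    print(cells)                        # [2, 3, 1, 0]
    print([LEVELS[c] for c in cells])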

Besides, that base 10 stuff is old – it has been around for thousands of years, well, at least hundreds. When did the Arab math guys discover zero, anyway?

I think there are two key (although entirely distinct) things you need to make a computer: you need to be able to store information, and you need to manipulate it. Binary is fairly simple on both counts (flip-flops, logic gates), and from the logic gates you can devise methods for higher-level operations and data extraction.
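As a tiny illustration of that last point (just a Python sketch, not how any real chip is wired), here is a half adder built from nothing but AND and XOR, which is the first step toward full binary arithmetic:

    # A half adder: adds two 1-bit numbers using only simple gates.
    def AND(a, b): return a & b
    def XOR(a, b): return a ^ b

    def half_adder(a, b):
        """Return (sum_bit, carry_bit) for two single bits."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))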

There are methods of storing information in more than two states, such as those previously mentioned and optical storage. I believe the main difficulty is in developing higher-than-two-state logic and data manipulation. There are people working on this; optical computing seems to hold some promise in this area.

I think, though, that there has not been more research in this area because there is still a lot of room for binary computers to grow. Once their theoretical wall is hit, you will see more research in these other areas.

as an aside,

Quantum computing is another area of intense research right now (I attended a seminar by one of Stephen Hawking’s old grad students on this). Much more difficult. The number of states available depends on the physical system used. The electron has only two spin basis states (it is a spin-1/2 particle), but I suppose you could find a molecule with 32 different basis states. Each unit is known as a qubit (quantum bit). For an electron by itself you have a two-state qubit; for the molecule you would have a 32-state unit. I have heard that they have been able to make a 3-qubit computer using a magnetic resonance machine (the basis of MRI).

This is when it starts to get really weird! Not only do you have the basis states of the qubit available to you, you also have every linear combination of them (recall Schroedinger’s cat). Although your logic operates only on the basis states of the qubit, all of these combinations can be operated on simultaneously! This is known as “massively parallel computing”.
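If a concrete toy helps, here is a single two-state qubit in Python (using numpy; purely illustrative): a Hadamard gate takes the definite state |0> into an equal mix of |0> and |1>, which is exactly the kind of linear combination described above.

    import numpy as np

    ket0 = np.array([1.0, 0.0])                   # basis state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    psi = H @ ket0                    # superposition (|0> + |1>) / sqrt(2)
    print(psi)                        # [0.707... 0.707...]
    print(np.abs(psi) ** 2)           # 50/50 measurement probabilities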

In addition to this parallel ability, you can do logic operations which are impossible in classical computers. For instance, I believe you can make a gate which computes the square root of NOT.

The main challenges with quantum computing are:

  1. Building a system that is stable (the physicist’s job), using techniques like quantum error correction and quantum teleportation.
  2. Designing logic and algorithms that a quantum computer can use (the mathematician’s job). For instance, Shor’s factoring algorithm would theoretically allow a quantum computer to factor, in minutes, numbers that would take a classical computer years.

As has been stated, it’s much easier to tell the difference between about 0 volts and about 3 volts or 5 volts (or whatever voltage represents an on state) than it is to divide the voltage up into several levels and detect them. That’s even more true now than in the past, because today’s high speeds have caused the available noise margins to decrease. Reflections due to impedance mismatches and noise due to ground bounce are part of the problem. PC boards nowadays have to be designed as waveguides to help alleviate the problem.

As an exercise I once assumed base 3 logic, devised the truth tables and designed some counters and other circuitry. It reduced the number of components needed to construct basic logic items such as counters, but at the expense of circuit complexity.
So it’s very possible to use multi-state logic and there are some advantages to doing so, but the disadvantages still outweigh the advantages (I think).
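For anyone curious what those truth tables can look like, here is one minimal Python sketch using a common convention from multi-valued logic (min as the AND-analogue, max as the OR-analogue, 2 - x as the NOT-analogue); the exercise above may well have used different conventions.

    STATES = (0, 1, 2)

    def t_and(a, b): return min(a, b)   # ternary AND-analogue
    def t_or(a, b):  return max(a, b)   # ternary OR-analogue
    def t_not(a):    return 2 - a       # ternary NOT-analogue

    print("a b | AND OR")
    for a in STATES:
        for b in STATES:
            print(a, b, "|", t_and(a, b), " ", t_or(a, b))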

As far as traditional, transistor-based computing goes, binary is the only way to go. Sure, you could design a 3-state, 4-state, or n-state machine (it has already been done). And mathematicians tell us base 2 is not quite the most efficient radix in theory. But an n-based system where n does not equal 2 won’t work very well.

Why?

Because of the way states are represented in the circuitry. Let me explain…

In most logic circuits, ones and zeros are represented by voltage levels. For example, in TTL technology a “1” is represented by 5V and a “0” is represented by 0 V. (Actually, a “0” is represented by a voltage less than 0.7V, and a 1 is represented by a voltage between 2V and 5V, but I’m trying to keep this simple.)

Now let’s say you want to design a 3-state system. So you do the following:

A “0” is represented by 0V.
A “1” is represented by 2.5V.
A “2” is represented by 5V.

And you design and build the system. And you turn it on. And it doesn’t work.

Why?

Because you’ve got a big problem when you’re trying to go directly from “0” to “2” and “2” to “0”.

When you go from “0” to “2” the voltage is continuously changing from 0V to 5V, thus it passes through 2.5V. So it will look like you’re trying to send a “0” “1” “2”! (It is impossible to go from 0V to 5 V without going through every voltage in-between.)

And you’ve got the same problem going from “2” to “0”.
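A toy simulation makes the glitch obvious (the thresholds here are made up purely for illustration): ramp the wire from 0 V to 5 V and decode it at each step, and the decoder briefly reports the unwanted middle state.

    def decode(voltage):
        # illustrative thresholds: below ~1.7 V = "0", above ~3.3 V = "2"
        if voltage < 1.7:
            return 0
        elif voltage < 3.3:
            return 1
        else:
            return 2

    seen = []
    for step in range(51):            # ramp 0.0 V .. 5.0 V in 0.1 V steps
        state = decode(step * 0.1)
        if not seen or seen[-1] != state:
            seen.append(state)

    print(seen)                       # [0, 1, 2] -- the spurious "1" in the middle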

The only way around this problem is to do one of the following:

  1. Implement a global clock.
  2. Implement clocks on each logic gate.

Neither solution is desirable. Many designs operate asynchronously and therefore do not have a global clock. And the second solution would be terribly complex.

lask wrote:

Just to clear up one point, transistors aren’t inherently two-state devices. They were designed to be linear amplifiers, and do a very good job of that in the area between where they first turn on and where they get saturated. In binary systems, it’s just that they’re designed to operate at one of the two rails.

It’s fairly easy to design systems to drive the transistor either all the way on or all the way off. For a higher-state system, you’d have to be careful that when you want one of the middle states, that’s what you’ll get. My hunch is that the extra complexity in controlling this is not worth the savings in fewer devices.

The OP is a good question, at least I think so because I’ve thought about it many times and I’m a geek. By the way, I hear people all the time use the word “digital” to refer to something that’s either on or off, when they should be using “binary”.

Correct. And it’s interesting how this whole discussion has revolved around “digital N state” vs. binary. As a mathematician turned software guy (years ago), I will defer to the hardware types on why there are or are not practical design reasons for binary computing devices vs. ones using base 3, 4, 10, 60 or 666 digital circuitry.

However, I sometimes DO wonder if more people shouldn’t be revisiting analog computing. We basically gave that the heave-ho decades ago, except for specialized applications, mostly because binary was faster and didn’t present calibration problems. It seems to me that a hybrid analog / digital environment might be better for a large class of applications.

So I guess I was wrong in assuming (at least mathematically) that it is difficult to make higher-than-two-state logic.
Diver, what kind of gates would you have? Are they just extensions of AND, NAND, etc.? I kind of remember the truth tables, so you can show me in that form if you wish.
What’s the highest number of states that has been physically realized?

yabob, I’d be interested to hear more about analog computing.

Crafter’s hit the nail on the head… at least, the big nail. The hardware problem is one of “level versus synchronization”. Multi-state logic is feasible and, in fact, has been demonstrated on numerous occasions; however, it does not tend to be practical, for a variety of reasons. Fundamentally, it comes down to cost: it’s still cheaper to build binary computers that evolve basically according to Moore’s law than it is to develop a whole new technology and swing the entire industry over to a new standard.

There are opportunities for change, however, particularly in the arena of optical computers, because you can use wavelengths to represent states and it’s possible to toggle between wavelengths (colors) without inadvertently hitting intermediate states. Optical gates are of constant complexity, so it doesn’t cost you much more to support a wider ‘variety’ of states. Also, holographic memories would be able to store data in non-binary formats, which I think is key to facilitating the change to non-binary machines.

The question I ponder is: aside from density improvements, are there any other benefits or implications to non-binary computing? I think it’s possible that wildly new architectures (non-von Neumann) will be possible. Artificial neural nets are an example that comes to mind. Today these are mostly emulated – if there were a native hardware infrastructure that supported the architecture, this might open up a whole new universe of computing…

Neural nets might be one of the prime examples I would contemplate analog architectures for. I should also have mentioned the von Neumann architecture as something else that hastened the shift to digital machines – another problem with analog was that “programming” was too much of an exercise in hardware design. It’s for this reason that I think you would want to go to a hybrid analog/digital design: you still need digital support functionality to control and configure the analog computing elements.

**** background ****

(not really my field, but since I shot off my mouth, I’ll give it a try)

Analog computers are devices which manipulate data represented by continuous quantities rather than discrete states. One might, for instance, add two numbers together by just combining the quantities, in a manner crudely analogous to pouring two glasses of water together and seeing how much water you have now. For general-purpose computing, digital took over because it operated much faster, it wasn’t prone to minor changes in voltage levels and so on causing “drift” in the represented data, and the control logic could itself be represented as digital data, so that programming could become purely a software exercise. And the processing of floating-point numbers became something that was really still a manipulation of discrete digital patterns internally.

Analog continued to be used in special-purpose things like system simulations and signal-processing applications. These are things involving HEAVY floating point (complex interactions of continuous variables), where you may be worried about the effect of quantizing “real world” measurements or probability distributions into discrete levels.
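As a rough illustration of that quantization worry (toy Python, made-up numbers): force a smooth signal onto a handful of discrete levels and the worst-case error only shrinks as you spend more levels, i.e. more bits.

    import math

    def quantize(x, levels):
        """Snap x (assumed in [-1, 1]) to the nearest of `levels` evenly spaced values."""
        step = 2.0 / (levels - 1)
        return round((x + 1.0) / step) * step - 1.0

    samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]
    for levels in (4, 16, 256):
        err = max(abs(s - quantize(s, levels)) for s in samples)
        print(levels, "levels -> worst-case error", round(err, 4))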

You have a better idea?

First, tri-state components are routinely used on buses. The states are on, off, and high-impedance. This is kinda-almost non-binary logic, but not really.
Second, one advantage of binary leaps to mind: low power dissipation. With CMOS circuitry (and I’m sure more modern variants), the logic circuits can have zero quiescent current; all the power dissipation comes when you clock the circuit. I don’t think you could build a ternary logic circuit with that property unless you piped in a second voltage supply for the third state.
Third, there is also a big advantage with DRAM. In DRAM, you effectively charge a tiny capacitor to indicate a 1 and leave it uncharged for a 0. Over the following microseconds the capacitor will be discharging, but it can drop to, oh, say, 1/3 Vcc before it gets called a zero. So you have to recharge it before it drops that far – say every 250 microseconds (don’t know the real numbers). If you had a middle state initially charged to 1/2 Vcc, it would drop to 1/6 Vcc by the end of the recharge cycle, so to determine the state you would have to know how long ago the last recharge was, or else make the recharges more frequent.
Maybe you could make a DRAM cell that could hold three levels without getting them mixed up, but it would probably take up more chip space than just building two binary cells.
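A back-of-the-envelope sketch of that refresh problem, with entirely made-up numbers (Python): model each cell as a leaking capacitor, and the cell written at the middle level falls below the “call it a zero” threshold far sooner than a fully charged cell does, so it would need much more frequent refreshing.

    import math

    VCC = 5.0
    TAU = 250.0                   # made-up decay constant, in microseconds
    ZERO_THRESHOLD = VCC / 3.0    # below this, the cell reads as a zero

    def voltage(v0, t):
        """Voltage on a leaking cell t microseconds after being written to v0."""
        return v0 * math.exp(-t / TAU)

    for label, v0 in (("full (binary 1)", VCC), ("middle level", VCC / 2)):
        t = 0.0
        while voltage(v0, t) > ZERO_THRESHOLD:
            t += 1.0
        print(label, "reads as zero after about", int(t), "microseconds")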

just a few thoughts, it’s really not my field…
-luckie



Hey Sailor, just for the record I thought your snide post was uncalled for. Apparently the 10+ other posters who really seem to know about this stuff did not find my OP as absurd as you did.

Even the Dopers who pooh-poohed non-binary systems did so with a gracious explanation and an open mind. Others, who thought the concept might have promise, sound rightly cautious but also open-minded. (I’m grateful to them all – and I bet other Dopers who found this thread worth reading are too.) You, however, brought nothing to the party except your bucket of cold water.

You don’t have to move away from binary systems to get other options.

There has been considerable research into optical bistable elements - effectively transistors which operate with laser rather than electrical inputs. The advantage here is that the ‘optical transistors’ can be made massively parallel, so that a matrix of hundreds of switches can be made on a single slice of crystal, all of which can operate independently.

Here is a discussion of Optical Logic Arrays

In theory, this could mean that absurdly fast processors could be constructed.

In practice, it doesn’t seem to have happened. I guess it’s like the notion of finding something better than wheels for cars – it could be done a different way, but it’s too much effort for not enough gain?

Russell

Russell:

There are other advantages as well. Optical computers consume much less power, which will, in turn, make them more reliable. Also, optical systems would be much easier to reconfigure (no soldering) and test (no contact).

Have a little patience. Until recently, the problem has been how to make laser diodes with the right wavelengths, size, and quality, not to mention low cost (oops, just mentioned it). This hurdle has been overcome, and I suspect we will see considerably more progress in the near future.

stuyguy, my comment was not intended as snide but rather as humorous, and it had a point, which is that sometimes simple things are the best. Your OP makes it sound like binary is some backward system and there are better ones available.

There are cases where this is true (for example, some modem modulations send more than one bit per sample) and in those cases it is used. But in the end binary is the best system invented for digital logic, precisely because it is simple.
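For instance, QPSK (one of the simplest such modulations) maps each pair of bits to one of four carrier phases, so every symbol on the wire carries two bits. A toy Python sketch, using one common Gray-coded mapping:

    # Map bit pairs to carrier phases (degrees). Four phases = 2 bits/symbol.
    QPSK_PHASES = {"00": 45, "01": 135, "11": 225, "10": 315}

    def modulate(bits):
        """Split a bit string into pairs and map each pair to a phase."""
        return [QPSK_PHASES[bits[i:i+2]] for i in range(0, len(bits), 2)]

    print(modulate("00111001"))   # [45, 225, 315, 135]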

Sorry if you didn’t like my remark. It was not meant that way.

Sailor: Thanks for the gentlemanly reply. And consider the matter forgotten.