Computers: why do they still only use binary code?

The reason transistors are used only in their fully on or fully off states is that heat dissipation is small in those states.

When a transistor is off it is easy to see that it consumes essentially no power; when it is fully on, the resistance is very small, and since

P = I^2 * R

where P = power (related to the dissipation of heat, in this case)

I = current consumed

R = resistance

If the resistance is kept very small, as it is when a transistor is turned fully on, it follows that little heat will be dissipated.

When a transistor is in transition between the on and off states, the time it takes to do so is directly related to the heat dissipated, and the more often transitions take place, the higher the temperature.

If you were to use transistors with discrete voltage states between on and off, the amount of power consumed would make the thing unfeasible; as it is, transition time is one of the limiting factors in chip design.

In reality nobody actually waits for a transistor to be fully turned on or off, as that takes time. This means that logical 0 or 1 does not literally mean 0 volts and the supply voltage; once a threshold is crossed, that is taken to mean the required state.
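To put rough numbers on that, here is a back-of-the-envelope sketch in Python; the current and resistances are made-up illustrative values, not figures for any real transistor:

# Rough illustration of P = I^2 * R for a switching element.
# All values below are invented purely for illustration.

def dissipation(current_amps, resistance_ohms):
    """Heat dissipated in watts, from P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

drive_current = 0.01  # 10 mA (made up)

print(dissipation(drive_current, 1))     # fully on, ~1 ohm       -> 0.0001 W
print(dissipation(drive_current, 5000))  # stuck halfway, ~5 kohm -> 0.5 W
print(dissipation(0.0, 1e9))             # fully off, no current  -> 0.0 W

The point is simply that the intermediate states are where I^2 * R gets ugly, which is why the fully on and fully off extremes are the comfortable places to sit.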

If anybody’s interested, I poked around a bit concerning a resurgence of analog-based computing. I’ve got company, it seems (I wasn’t entirely unaware of that, actually).

This is interesting, given my offhand remarks about neural nets:

http://www.eetimes.com/story/OEG19981103S0017

Here’s an academic who seems rather interested in the topic:

http://www.cs.indiana.edu/hyplan/jwmills/MAFC/mafc.html

A bit more out there - there is a guy named Paul Saffo, a visionary type whose current thing is “intelligent materials”. Scan down to “Analog, the new frontier” or something like that in this article:

http://www.uta.edu/huma/enculturation/2_2/rieder

Check out http://developer.intel.com/design/flcomp/isfbgrnd.htm for Intel’s flash memory that stores two or three bits per cell.
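The multi-level-cell trick is essentially that one cell’s analog threshold voltage is sliced into more than two windows. Here is a hypothetical sketch of a 2-bit read; the voltage windows and bit assignments are invented for illustration, not Intel’s actual specification:

# Hypothetical 2-bits-per-cell read: slice one cell's threshold voltage
# into four windows and map each window to a 2-bit value.
# The window boundaries and bit assignments below are invented.

def read_two_bit_cell(threshold_voltage):
    if threshold_voltage < 1.0:
        return 0b11   # least charge stored
    elif threshold_voltage < 2.0:
        return 0b10
    elif threshold_voltage < 3.0:
        return 0b01
    else:
        return 0b00   # most charge stored

print(format(read_two_bit_cell(0.4), '02b'))  # 11
print(format(read_two_bit_cell(2.5), '02b'))  # 01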

pweeman said “What kind of gates would you have? Are they just extensions of AND NAND etc?”

Ah geeze, you would ask me that. I did the exercise 28 years ago when I was fresh out of college.
As I recall though, I started with the 3 state equivalent of the binary truth table.
The binary far right column (A) was 0101,
the column to the left (B) was 0011.
The weight for A was 2^0 or 1.
The weight for B was 2^1 or 2.
A 0101
B 0011
The AND function would be 1 & 1 = 1
All other combos = 0

The 3 state equivalent would be:
A 012012012 weight 3^0 = 1
B 000111222 weight 3^1 = 3
The AND function would be:
1&1 = 1
2&2 = 2
All other combos = 0
(The C column - not used here - would have nine 0s, then nine 1s, then nine 2s, weight = 3^2 = 9)
OR gates are similar. NOT functions are comparable.
The fun part is extending the concept to flip-flops and then designing a 0 to 10 counter.
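In case it helps anyone follow along, here is that 3-state AND table as a few lines of Python - just the table as written above, not a claim about how real ternary hardware would implement it (taking the minimum of the two inputs is another common convention for a multi-valued AND):

# 3-state (ternary) AND per the truth table above:
# 1 AND 1 = 1, 2 AND 2 = 2, every other combination = 0.

def ternary_and(a, b):
    return a if (a == b and a != 0) else 0

for b in range(3):        # B column: 000 111 222, weight 3^1 = 3
    for a in range(3):    # A column: 012 012 012, weight 3^0 = 1
        print('A =', a, ' B =', b, ' A AND B =', ternary_and(a, b))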

Like my favorite instructor used to say, “That’s obvious to the most casual observer and is left as an exercise for you students.”
How’s that for a cop-out?

No, it was because that was the simplest coding.

Actually, Babbage’s machines, which preceded ENIAC by about a century, were designed around base 10. We certainly could have base ten computers, but why? What’s so “good” about it? The only reason that we use it is because we have ten fingers.

How is it inefficient?

On the contrary, it’s quite simple. People learn decimal calculation at a very young age, and binary at a much older age, if at all. Because of this, decimal seems simpler than binary, but it’s not. Do you know what the multiplication table looks like in binary?
×  0  1
0  0  0
1  0  1
Now compare that to the decimal one. Not quite as simple, is it? I could probably teach a five-year-old binary arithmetic in less than an hour.
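If you want to see the two side by side, a few lines of Python will print both tables:

# Print the multiplication table for a given base: base 2 has 4 entries,
# base 10 has the 100 entries we all had to memorize.

def times_table(base):
    for row in range(base):
        print(' '.join(str(row * col) for col in range(base)))

times_table(2)    # 0 0 / 0 1
print()
times_table(10)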

I think a better analogy would be between building a house with normal-size Legos (binary) or Legos the size of bricks (decimal). Normal Legos give you much more flexibility.

Some computers do use “decimal” logic. Mainframes (e.g., the IBM 360 and others) use BCD, or Binary Coded Decimal, numbers for some calculations. It’s just slower than pure binary.
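For anyone who hasn’t met BCD: each decimal digit is stored in its own four-bit group, instead of converting the whole number to pure binary. A minimal Python sketch of the idea (just the encoding, not any particular mainframe’s format):

# Binary Coded Decimal: one 4-bit nibble per decimal digit.

def bcd_nibbles(n):
    """BCD encoding of a non-negative integer, one 4-bit string per digit."""
    return ' '.join(format(int(d), '04b') for d in str(n))

print(bcd_nibbles(1999))     # 0001 1001 1001 1001  (16 bits, digit by digit)
print(format(1999, 'b'))     # 11111001111          (11 bits, pure binary)

The digit-by-digit form is easy to convert to and from human-readable decimal, but arithmetic on it needs extra correction steps, which is part of why it’s slower.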

Binary has advantages. It’s incredibly cheap, it’s fast (AND/OR/NAND/NOR gates and flip-flops are things that electrons can travel through real fast), and it reduces everything to the simplest possible representation.

Oddly, “digital” isn’t decimal at all - it’s binary.

(Sigh) I guess nobody remembers core memory anymore.

tbea925, BCD is binary just like ASCII is binary.

I think what tbea925 was referring to is that some mainframes of that vintage had what was known as a “decimal accumulator” that performed arithmetic on collections of BCD digits (hex values F0 - F9, why the HELL do I remember that!). The machine I learned assembler on was a Xerox Sigma, which had decimal accumulator instructions as well as “normal” arithmetic. Those instructions treated some block of the 16 general purpose registers as the decimal accumulator, and let you add memory containing numbers expressed as EBCDIC digits to it, and so on. Never made use of them.

sailor is correct in noting that this is still manipulation of binary patterns, but it is worth noting that the arithmetic is not done by the machine actually manipulating binary representations of the numbers as we are accustomed to in most arithmetic operations today.
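For the curious, the F0-F9 values yabob mentions are just the EBCDIC digit characters: a decimal digit d stored as the byte 0xF0 + d, so the machine is really working on a string of digit characters (“zoned decimal”). A simplified Python sketch of that representation (it ignores the sign handling real zoned-decimal formats put in the last byte):

# Simplified EBCDIC zoned-decimal sketch: decimal 425 becomes bytes F4 F2 F5.
# Real zoned decimal also carries a sign in the final byte's zone nibble;
# that detail is left out here.

def to_zoned(n):
    return bytes(0xF0 + int(d) for d in str(n))

def from_zoned(zoned):
    return int(''.join(str(b & 0x0F) for b in zoned))

digits = to_zoned(425)
print(digits.hex().upper())                           # F4F2F5
print(from_zoned(digits) + from_zoned(to_zoned(75)))  # 500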

yabob and Diver: Thanks for the info. I think I’ve ruptured a few brain cells trying to figure out what you guys said.

Jeez, I missed out on this thread, and as always I look like a crabby curmudgeon when I jump into a thread quite late and (as usual) point out that you guys are all barking up the wrong tree. This n-state logic and quantum computing is all blue-sky thinking. Analog computing has been around for many years, and at least one major manufacturer that I know of produced these machines in quantity.

Hewlett-Packard used to make a series of analog computers in the 1960s. They used normal binary logic for most control functions, but computationally intensive functions like trigonometry, square roots, etc, were programmed into analog computing circuits. The hard work was all done in analog, then read by high-resolution (VERY hi-rez) analog to digital converters and output by the digital side. These machines were used in scientific computing where fast, high precision number crunching was necessary. At that time, binary computers were slow, and analog circuits could do instantly what would take many hundreds of CPU cycles.

So yes, there are advantages to analog computing. Occasionally, people trot out this old technology and proclaim it is a new invention. It always surprises me that nobody remembers the old analog machines.

Chas writes:

I don’t think this one’s gone stale yet.

Well, quantum computers exist today, so I’m not sure how blue-sky that is…

yabob’s already pointed that out. The problems with analog computing for the future are many. First, the technologies to drive it down the price/size/power/performance curves are not well advanced. Even with today’s most advanced analog technologies, an analog computer capable of comparably complex instructions (enough to compete with a 486) would fill a small house.

That was then and this is now. While analog computing is certainly still used for some applications, it does have its technological limits. Most analog computers use a -10V to +10V swing. In the real world, voltage can’t swing from rail to rail instantaneously, and the reactive components in the system require some settling time. The bottom line is that there are no computations on analog computers that can’t be done faster on the near-gigahertz binary machines of today.
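To put a rough number on that settling time, here is a small Python sketch using the standard first-order RC step response, V(t) = V_final * (1 - e^(-t/RC)); the component values are invented for illustration and aren’t taken from any particular analog computer:

import math

# Time for a single RC-limited stage to settle to within `accuracy`
# (as a fraction of the final value), from V(t) = V_final * (1 - e^(-t/RC)).
# The example component values are made up.

def settling_time(r_ohms, c_farads, accuracy=1e-4):
    return r_ohms * c_farads * math.log(1.0 / accuracy)

print(settling_time(10e3, 1e-9))   # 10 kohm, 1 nF -> roughly 9.2e-05 s (~92 us)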

Somebody remembers 'em. I used to use them and I gather yabob has more than a casual exposure to them.

Actually, my exposure IS casual - just from what I’ve read / heard, etc. Never worked with one. I had professors who had. I just bring up the technology as something that bears reinvestigation, particularly with reference to things like neural nets, as mentioned.

Two points: first, it’s exactly because the development curve is NOT advanced that I think there is potential. Second, since I don’t see analog replacing digital, I wouldn’t even try to build a machine with complex instructions similar to an x86 machine’s.

The modern analog machine probably would have quite a bit of digital support and program control, and would be used for the AI / real world sensor applications we’ve been discussing, not standard data processing applications. Perhaps analog computing could become something like a fancy coprocessor that the digital chip could control and configure the elements of, and allow it to do the real gruntwork involved in your neural net or simulation.

That isn’t necessarily so. For example, HP used analog circuitry in some of its early desktop scientific calculators; I used to play with one that my neighbor owned, way back in the late ’60s/early 1970s. They were astonishingly expensive, something like $15k, but there were people willing to pay that price, because the only alternative was the old iterated successive approximation for functions like square root, which took damn forever on the existing add/subtract/multiply/divide calculators. The analog circuitry allowed you to just poke one button and get an answer instantly, instead of spending several minutes doing repeated cycles of calculation to get an approximation. The calc had several analog boards inside; it was a huge beast, about a foot wide, a foot tall, and two feet deep. It had a cool oscilloscope display too - a true vintage computing device. So it isn’t THAT expensive to do analog computing, particularly if you factor in the labor involved in the alternative methods.
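For anyone who hasn’t done it by hand, that successive approximation is the kind of thing sketched below; this is just one common variant (Heron’s method) in Python, using only the four operations those calculators had - not a claim about the exact routine people used back then:

# Square root by successive approximation (Heron's method), using only the
# add/subtract/multiply/divide a four-function calculator offers.

def sqrt_approx(x, tolerance=1e-9):
    guess = x / 2.0 or 1.0                   # any positive starting guess works
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0    # average guess with x/guess
    return guess

print(sqrt_approx(2.0))      # 1.414213562...
print(sqrt_approx(1000.0))   # 31.6227766...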

Sometimes speed and cost aren’t the issue. For example, I recently read about an analog device that detects radioactive decay particles to generate truly random numbers and then outputs them through an A-to-D converter. No binary machine can generate true random numbers, only pseudorandom ones.
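To be clear about the “pseudo” part: a deterministic binary machine can only run something like the generator sketched below, which replays exactly the same sequence whenever you give it the same seed; the decay detector’s whole appeal is that there is no seed to replay. (The constants are just one commonly published set for a 32-bit linear congruential generator.)

# A linear congruential generator - the classic pseudorandom recipe.
# Same seed in, same sequence out, every single time.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m                # scaled into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
# Seeding another generator with 42 reproduces the same five numbers exactly.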

But anyway, this analog vs. digital discussion reminds me of the annual Soroban (abacus) competition in Japan. Every year, they give a series of calculations to two teams, one with the abacus, one with pocket calculators, and see who can complete the calculation fastest. Every year, the Soroban team wins. One year, the calculator team beat the Soroban team for speed, but was disqualified because they got the wrong answer…

yabob:

It’s not for lack of trying that analog is not as advanced as digital in terms of speed, power, size, etc. There are large teams of engineers cranking away at it each year, and there are far more patents for analog-ish things than digital ones. Not that many people are working on analog multipliers, I’ll grant you, but the concepts are not that dissimilar for the analog features that we continually integrate onto microcontroller devices (those special microcomputers that control your ABS, air bags, and engine in your car, and are used in thousands and thousands of everyday devices). Simple physics is the primary bottleneck.

Chas:

True, for certain specific functions you can build analog computers that are not TOO big or TOO costly; however, the point was that for most applications, analog methods are not a suitable replacement for digital ones. I know that you and yabob are talking about hybrids, but I’m not sure there are many applications that could actually benefit from analog features.

What you’re really talking about is a sensor. I’d hardly classify this as an analog computing device unless it does more than what you describe. The analogy I instantly draw is with a microphone. If a microphone is exposed to a reasonable level of random noise, by playing with the sensitivity you can accomplish the same random number generation task. The only real difference here is that you can nearly always count on radioactive decay, whereas you can’t always count on audio noise - but that’s not a function of the computer, it’s a function of the sensor.

Analog computers are still the architecture of choice for many servo applications, but digital is starting to take over…