A ternary computer?

If you designed a machine that uses no charge, half charge, and full charge as three different digit values, could you use the same resources you’d use to build a normal binary computer to build a faster ternary computer with more memory?

It seems initially plausible because arithmetic operations would deal with fewer digits (and so would undergo fewer individual calculations, right?) and a single address in memory could store three possible values instead of just two, making the storage more information-efficient (right?).
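To put rough numbers on that intuition, here’s a quick radix-economy sketch in Python (the target value 1,000,000 is an arbitrary example, and the “cost” metric naively assumes hardware scales as base times digit count):

```python
# Back-of-the-envelope: digits needed in each base to count up to N,
# and the classic "radix economy" cost estimate of base * digits.
import math

N = 1_000_000  # arbitrary example value

for base in (2, 3, 4):
    digits = math.ceil(math.log(N + 1, base))
    print(f"base {base}: {digits} digits, cost ~ {base * digits}")

# base 2: 20 digits, cost ~ 40
# base 3: 13 digits, cost ~ 39
# base 4: 10 digits, cost ~ 40
```

By that naive measure base 3 does edge out base 2 (the continuous optimum is base e, and 3 is the nearest integer), but only by a couple of percent.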

But for all I know, there are engineering problems that make a three-state computer no more efficient, or perhaps even less efficient, than a two-state one, perhaps having to do with the hardware you’d need to design to deal with the distinction between half and full charges, I dunno, or something.

ETA: A relevant but not very informative Wikipedia article can be found here

This is just a hunch, but I don’t really see how a ternary computer would necessarily be faster than a computer that operates using two “parallel” bits - that is, a “quaternary” computer - although ternary may have advantages with respect to materials (compactness and power consumption).

It’s not exactly the OP’s question, but it is not uncommon to encode multiple bits on a single wire using different voltage levels…or two bits using 4 phase angles.
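For illustration, here’s a rough sketch of the voltage-levels version, in the style of PAM-4 signaling (two bits per symbol as four levels); the specific voltages and the Gray-coded mapping are just assumptions for the example:

```python
# Sketch of PAM-4-style signaling: two bits per symbol, four voltage
# levels. The level values are made up; the mapping is Gray-coded so
# adjacent levels differ in only one bit.
LEVELS = {(0, 0): 0.0, (0, 1): 1.0, (1, 1): 2.0, (1, 0): 3.0}

def encode(bits):
    """Pair up the bit stream and map each pair to a voltage level."""
    return [LEVELS[pair] for pair in zip(bits[::2], bits[1::2])]

print(encode([0, 1, 1, 0, 1, 1]))  # -> [1.0, 3.0, 2.0]
```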

The reason binary is used is that two is the smallest number of states that can encode any information at all.

There’s nothing magic about the number 2. You can build a digital computer using any base you want, but using base 2 makes a lot of things very simple. The Soviets had at least one functional base-3 machine that used -1, 0 and 1 as its bits (tits? :smiley: ) and it saw limited use. Designing hardware for base 2 is just a lot easier – electrical engineers understand the concept of an on/off switch very well, and that is conveniently how transistors work.

Analog computers have of course been around since antiquity (think slide rules or even differential analysers).

From an engineering view, this would require your circuitry to accurately distinguish 3 levels of charge vs. 2 in binary computers. Your circuitry would have to be more precise, and it would be more susceptible to degradation, so it would probably have to run slower.

Remember that Seymour Cray designed his supercomputers with matching circuitry, so that when one bit went high another bit went low. This made them more stable, thus letting him run them at faster speeds without errors. That principle would still apply today.

The biggest reason is that with binary, you only have to worry about one threshold, and thresholds can be bad. With binary, you might use the convention that a nominal 0 is 0.0 volts and a nominal 1 is 1.0 volt, and the receiver treats everything above 0.5V as a 1 and everything below 0.5V as a 0. That’s fine as long as your voltages stay close to their nominal values, but there’s always some error in everything. So you might have some “zeros” that are actually 0.1V, and some “ones” that are actually 0.9V. If you’re really unlucky, you might have a “zero” that’s 0.49V, which is thus at risk of accidentally turning into a “one”. So you try to stay away from voltages near the middle to prevent that happening.

Now, you could divide your voltage up into more digits, if you’d like, but that means that you don’t have as much room to fit your voltages into the “safe” areas away from the thresholds. So it becomes more likely that you’ll accidentally change some values.
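Here’s a rough simulation of that trade-off; the noise level is an arbitrary choice for illustration:

```python
# Same 0-1 V swing, same Gaussian noise, but 2 vs. 3 nominal levels.
import random

def error_rate(levels, sigma=0.15, trials=100_000):
    """Send random nominal levels through noise, decode by snapping to
    the nearest nominal level, and count how often the decode is wrong."""
    errors = 0
    for _ in range(trials):
        sent = random.choice(levels)
        received = sent + random.gauss(0, sigma)
        decoded = min(levels, key=lambda v: abs(v - received))
        errors += decoded != sent
    return errors / trials

print("binary :", error_rate([0.0, 1.0]))       # levels sit 0.5 V from the threshold
print("ternary:", error_rate([0.0, 0.5, 1.0]))  # only 0.25 V of margin
```

Same swing, same noise, but the ternary decoder misreads far more often because its thresholds are only half as far from the nominal values.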

EDIT: Pretty much the same thing as t-bonham just said.

There used to be a lot of research in Multiple-Valued Logic - there was even a group in the IEEE Computer Society working on it. I don’t know if there is still activity or not.

As mentioned, the big practical problem is differentiating values, which has gotten a lot worse since there is a major drive to cut voltages to save power and improve speed. In the good old days of TTL a 1 was 5 volts; now it is more like 1. Also, real waveforms are not very clean. In binary, if you overshoot 1 volt there is no problem, but with a ternary system you can easily spike into the wrong value. You’d probably wind up slowing down the clock enough to make the whole thing pointless.
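A quick back-of-the-envelope version of that point (the supply voltages are just illustrative):

```python
# Worst-case slack before a level is misread is half the spacing
# between adjacent nominal levels: Vdd / (2 * (levels - 1)).
for vdd in (5.0, 1.0):
    for levels in (2, 3):
        print(f"Vdd={vdd}V, {levels} levels: margin = {vdd / (2 * (levels - 1)):.2f}V")

# 5 V binary leaves 2.50 V of slack; 1 V ternary leaves only 0.25 V,
# so a modest spike or overshoot lands in the wrong bucket.
```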

I admit I’m biased, since I am the world’s leading (and only) authority on Base 1 arithmetic. If you know your information theory, you’ll realize that Base 1 computers are the ultimate green machines, absolutely minimizing power consumption.

The idea of trinary computers gets discussed quite a bit in various books for the RPG Mage: The Ascension (specifically, in The Digital Web and, I’m guessing, Tradition Book: Virtual Adepts). The three states are on, off, and negative on. The claim is made that the third state allows for values of maybe in addition to yes and no, and that this makes for a vastly superior machine.

The Digital Web also taught me about Kibo. I’ll stick with the Great Old Ones. But Kibo is pretty neat too.

The bits on a CD are either on or off. Let’s put that aside for now, and instead, let’s talk about the flow of data as it is transmitted across a wire, or a light beam, or whatever. I recall learning that these bits are NOT off and on, but rather they are – as suggested by the OP – low and high.

The reason for this is very simple: If “0” were truly “off”, how would one distinguish between “0” and “00”? The only way to do it would be with some very good timing mechanism, and so many resources would be spent on that, that it just wouldn’t be worth it. Instead, there’s no timing mechanism at all, and the system simply waits for the next beep. Whenever it arrives, all you need to do is see if it is low or high.

By comparison, people think that Morse Code is binary, but that is wrong. Morse is not just dots and dashes; it is also short pauses between the letters, and long pauses between the words. And it is precisely because of those pauses that the letters of the Morse alphabet can have a variable number of bits, while computers have a fixed standard number of bits per byte.

I don’t think that’s quite the reason. First, it’s certainly possible to have a variable-length bit encoding; Huffman coding is one example.
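For illustration, here’s a minimal Huffman sketch; unlike Morse, the resulting codes are prefix-free, so the decoder needs no pauses between symbols:

```python
# Minimal Huffman coding sketch: frequent symbols get short codes,
# rare symbols get long ones, and no code is a prefix of another.
import heapq
from collections import Counter

def huffman_codes(text):
    # Each heap entry: (frequency, tie-breaker, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # left branch gets a 0
        merged.update({s: "1" + c for s, c in c2.items()})  # right branch a 1
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

print(huffman_codes("abracadabra"))  # 'a' (most frequent) gets the shortest code
```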

But mainly, in a register machine (which is what our computers are) with N bits per register, where N is hard-wired in silicon, you’ll want the CPU to read and write N bits at a time, or in chunks that divide evenly into N. You have to lay down a fixed number of lines in your data bus anyway. But you also get the advantage of reading and writing multiple bits in parallel.

I’d imagine a computer that read and wrote arbitrary-sized streams of bits from arbitrary points in memory would be much slower than what we have today, and just wouldn’t be worth the cost, even if we gained maximum information density as a compensation.

That would be the Setun, probably.

It’s not on that Wikipedia page, but I recall reading somewhere that the Setun had a shortcoming: it actually used two binary bits to represent each tit — trit! — and so used many more logic gates than it theoretically needed.

One possible nuisance with ternary computers in general is that the fractional powers of 2 (1/2, 1/4, etc.) have non-terminating base-3 representations. In a ternary floating-point number system, those fractions could only be approximated.
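A quick demonstration, using the standard multiply-by-the-base digit extraction:

```python
# 1/2 in base 3 never terminates: each step yields digit 1 and leaves
# a remainder of 1/2 again, so the expansion is 0.111... forever.
from fractions import Fraction

x = Fraction(1, 2)
digits = []
for _ in range(10):
    x *= 3
    digits.append(int(x))  # the integer part is the next base-3 digit
    x -= int(x)            # keep only the fractional remainder
print("0." + "".join(map(str, digits)) + "... (base 3)")
# -> 0.1111111111... (base 3); indeed the sum of 3**-k for k >= 1 is 1/2
```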

[quote="Keeve, post:9, topic:560083"]
The reason for this is very simple: If “0” were truly “off”, how would one distinguish between “0” and “00”? The only way to do it would be with some very good timing mechanism, and so many resources would be spent on that, that it just wouldn’t be worth it. Instead, there’s no timing mechanism at all, and the system simply waits for the next beep. Whenever it arrives, all you need to do is see if it is low or high.
[/quote]
I think you’re confusing a self-clocking signal with some other ideas about data encoding.
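A self-clocking line code guarantees a transition in every bit period, so the receiver recovers the timing from the data itself instead of counting silence. Here’s a rough sketch of Manchester encoding (using the IEEE 802.3 convention; the other common convention simply swaps the two half-bits):

```python
# Manchester encoding: every bit becomes a mid-bit transition, so even
# a long run of zeros keeps the line toggling.
def manchester_encode(bits):
    """0 -> high-then-low, 1 -> low-then-high (two half-bit levels per bit)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(halves):
    """A rising mid-bit transition decodes as 1, a falling one as 0."""
    return [1 if b > a else 0 for a, b in zip(halves[::2], halves[1::2])]

signal = manchester_encode([0, 0, 0, 1])
print(signal)                     # [1, 0, 1, 0, 1, 0, 0, 1]; never goes flat
print(manchester_decode(signal))  # [0, 0, 0, 1]
```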

Fundamentally, the base you store your data (or code) in does not in any way affect the nature of computation that you can do. There are potential engineering gains that could arise from using different bases, but as long as we’re using electrical circuits, they probably won’t overcome the benefits of using binary.

Yeah, quite possible. Thanks.

IIRC, the actual range is 0.0-1.2 volts for zero, and 3.8-5.0 volts for one. Anything in between is treated as a spurious signal. I don’t know why there’s such a wide range in the middle, but I assume that there’s a good reason for it. Encoding more digits would require you to either make the intermediate ranges much smaller or to put in a lot more than five volts. The reason we don’t do the latter is that it becomes very difficult to keep the circuitry from melting.

You may want to spend some time reading about CPU design. Crystal oscillators, which do exactly that kind of timing, have been around since the 1920s. To the best of my knowledge, they’ve been used in quite literally every digital computer ever built.

The problem now lies in the logic. There just aren’t that many useful ternary logic operations. For memory, data compression techniques are already coming into play because the time to compress/decompress has been greatly reduced. But once you are at the chip level, what else do you do with the extra state? There are a lot more AND, OR, NOT type operations going on than arithmetic, and those operate very efficiently in binary. For many operations, the number of transistors would increase to perform operations like shifts or testing a single bit. And how would a ternary search work? Binary is the language of logic.

Star Trek also had trinary-based computers.

For shame, Der Trihs, for not mentioning Schlock Mercenary, too.

I don’t understand what you mean by “efficiently”, but no three-element set can be made into a Boolean algebra (every finite Boolean algebra has exactly 2^n elements). That means that figuring out how to design three-state digital circuits becomes a lot more interesting.
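For the curious, one standard generalization of AND/OR/NOT to three values is Kleene’s strong logic; here’s a small sketch on balanced trits (the -1/0/+1 encoding is just one convention):

```python
# Kleene's strong three-valued logic on balanced trits:
# -1 = false, 0 = unknown, +1 = true.
def t_and(a, b): return min(a, b)   # AND generalizes to minimum
def t_or(a, b):  return max(a, b)   # OR generalizes to maximum
def t_not(a):    return -a          # NOT generalizes to negation

for a in (-1, 0, 1):
    for b in (-1, 0, 1):
        assert t_not(t_and(a, b)) == t_or(t_not(a), t_not(b))  # De Morgan holds
        print(f"{a:+} AND {b:+} = {t_and(a, b):+}")

print(t_or(0, t_not(0)))  # 0: "a OR NOT a" can be unknown, so no Boolean algebra
```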

Almost exactly like binary search, except that you knock out two-thirds of the array at every iteration.
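Something like this rough sketch (the two-pivot split is the textbook version):

```python
# Ternary search on a sorted list: split at two pivots, keep one third.
def ternary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third     # two pivots, three partitions
        if arr[m1] == target:
            return m1
        if arr[m2] == target:
            return m2
        if target < arr[m1]:
            hi = m1 - 1                     # keep the left third
        elif target > arr[m2]:
            lo = m2 + 1                     # keep the right third
        else:
            lo, hi = m1 + 1, m2 - 1         # keep the middle third
    return -1

print(ternary_search(list(range(0, 100, 2)), 42))  # -> 21
```

Note that each round can cost two comparisons, though, so the total work is not actually less than binary search.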

The choice to use binary search or ternary search in a program seems to me to have very little to do with whether or not your hardware works fundamentally with bits or trits. I mean, I can see that there is some small connection, but not to the extent that it would become an overriding impediment if you really wanted to do binary search on a ternary machine or vice versa.

It can be done, but you need either a new technology or more silicon. A shift is done now through logical ‘rewiring’ of bits. In ternary logic a plain shift multiplies by 3, so a binary-style multiply by 2 has to change the state of all the trits rather than just rewire them. But maybe I’ve been thinking in boolean for way too long. I suppose if we were tri-symmetrical organisms, we might think in ternary logic already, and would have built hardware to match.
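A quick illustration of the difference:

```python
# In base 2, doubling is pure rewiring: append a 0. In base 3, doubling
# rewrites the digits; only multiplying by 3 is a plain shift.
def to_base(n, b):
    digits = ""
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits or "0"

n = 10
print(to_base(n, 2), "->", to_base(2 * n, 2))  # 1010 -> 10100 (shifted)
print(to_base(n, 3), "->", to_base(2 * n, 3))  # 101  -> 202   (rewritten, not shifted)
print(to_base(n, 3), "->", to_base(3 * n, 3))  # 101  -> 1010  (shifted)
```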

Sorry, you’ve got to do just as many operations in a ternary search as a binary one. The point is that you’re doing a lot of over/under decisions, and the third state becomes useless. Parallel processing works faster, but in multiples of 2.

Also, most gates are three-state devices, but the third state is ‘invisible’. It’s how lots of components can be connected to the same wire, with only one of them (or a subset) active. There is a kind of ternary logic there.