Bah! Everyone knows the standard extended Boolean is True, False, FileNotFound!
I don’t agree that Boolean logic is the fundamental barrier to trinary computers. We use Boolean logic and binary operations extensively in computers because we use binary signals everywhere in the hardware, not vice versa. I mean, what’s so inherently useful about shifting binary digits in a register (that is, why is it more useful than shifting trinary digits)?
Now, you could argue that a lot of signals are only ever going to be yes/no, so the capability for a third state would be somewhat wasted on them. But there are also a lot of signals (and by volume the vast majority of data) that are more than one bit, so a trinary signal would be more efficient.
The fundamental issue is that the hardware to distinguish three voltage states on a line is a lot more complex than hardware to distinguish two states; so much more complex that it’s far easier and cheaper to run two binary lines than a single trinary line.
I don’t think it’s been mentioned yet, but some memory technologies do indeed store more than one bit per memory cell. They are called multi-level cells (MLC), and are used in flash memory. Much of the flash produced today uses this technology. This technique is not readily adaptable to the rest of the memory in your computer, I believe.
They are actually called “trits”. Brian Hayes wrote an article in The American Scientist about them a couple years ago. I don’t recall he mentioned anything about speed advantage, though.
The 1950s Univac I (possibly the first commercially available computer, at one million 1950s dollars) was base 10. Each base-10 digit was, however, encoded in four binary bits. They encoded the digit d as d+3 (called “excess-three encoding”) so that the arithmetic would carry in the encoding exactly when it did in base 10. I’m not sure when they abandoned that.
One advantage of trits is that nothing special has to be done for negative numbers. The number is positive or negative according as its leading non-zero trit (if any) is +1 or -1.
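To make that balanced-ternary point concrete, here is a minimal sketch (mine, not from the thread) in Python: negation is just a trit-wise sign flip, and the sign of a number is the sign of its most significant non-zero trit.

```python
# Sketch (not from the thread): balanced ternary in Python, showing that
# negating a number is just flipping every trit, and that the sign of the
# number is the sign of its leading (most significant) non-zero trit.

def to_balanced_ternary(n):
    """Return trits (least significant first), each in {-1, 0, +1}."""
    trits = []
    while n != 0:
        r = n % 3          # 0, 1, or 2
        if r == 2:         # represent 2 as -1 plus a carry into the next trit
            r = -1
        trits.append(r)
        n = (n - r) // 3
    return trits or [0]

def from_balanced_ternary(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

n = 11
trits = to_balanced_ternary(n)   # 11 = +9 +3 -1, least significant first
neg = [-t for t in trits]        # negation: flip every trit
assert from_balanced_ternary(neg) == -n
assert trits[-1] == 1 and neg[-1] == -1   # leading trit gives the sign
```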
My guess is that in theory a machine that distinguished between multiple voltage levels could encode more information and therefore work faster than the normal bi-state models. However, I strongly suspect that in practise it would be extremely difficult to realise these benefits, which is why major chip manufacturers have not gone down this route.
I thought that is what I was saying. We do use Boolean logic because it is simple and useful, and ternary computers would have plenty of binary operations, but there might be just as many useful operations in any base.
As mentioned earlier, data isn’t the issue; it’s the logic in the processor. For data, 3 states is a minor improvement over 2. Data compression methods, both logical and physical, are already possible, and they’ll compress data at a higher rate than the log₂3 ≈ 1.58:1 that a trit offers.
And that is it exactly. Chips are made out of lots of simple components that only have to work with two states. Making each of those components work reliably with 3 or more states will require an advance in technology, or more components.
Not so fast my friend.
Assuming the structure we are searching is a simple sorted list, you still have to do 2 different compares at 2 different locations to eliminate 2/3 of the list. It can’t be accomplished with 1 compare of 1 value in the list just because the computer has 3 states instead of 2 per bit.
Note: I assume a simple sorted list because that is the most common value-add application of binary search, if we had some other structure to support fast search binary search would not be as valuable.
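A back-of-the-envelope sketch (my own illustration, not from the thread) of why the extra state doesn’t buy a faster search: discarding 2/3 of a sorted list costs two probes per step, so the worst-case probe count actually comes out worse than binary search’s.

```python
# Sketch (assumptions mine): worst-case probe counts for binary vs ternary
# search on a sorted list of n items. Each probe compares the target against
# one element; a ternary step needs 2 probes to discard 2/3 of the list.

import math

def binary_search_probes(n):
    # one probe per halving
    return math.ceil(math.log2(n + 1))

def ternary_search_probes(n):
    # two probes per step, each step keeps ~1/3 of the list
    return 2 * math.ceil(math.log(n + 1, 3))

for n in (100, 10_000, 1_000_000):
    print(n, binary_search_probes(n), ternary_search_probes(n))
```

Since 2·log₃(n) ≈ 1.26·log₂(n), the ternary version loses on comparisons even though each step eliminates more of the list.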
For reasons mentioned above (and in the sequel) I find the idea of using three states on a single wire (or on a single ferrite memory core) to be an unlikely idea in the quest for speed and simplicity.
Googling “setun ternary computer” and clicking on the 25th result, [PDF] SETUN’s Reverberations (or rather its Google “Quick view”, since the main link didn’t load), I see I was right.
Setun did not squeeze a trit onto a single wire or core, but just the opposite: they used two wires, or two cores, to store a trit. (This leaves one of four states unused, but the engineers could take advantage of that state’s non-use to add reliability.)
To actually use tri-valued signals on a single wire would be very inconvenient, and the goals of speed and simplicity would be harder to achieve. Digital signals (sometimes called “large-signal” in the Paleozoic era) are often driven up or down ASAP with transistors or resistors, but a somewhat daintier approach might be needed with ternary. (Engineers do speak of “tri-state” logic, but that’s different: the three states are 1, 0, and off.)
(On the topic of funny old computers, a CII Honeywell Bull computer used ternary parity checking! Instead of one check bit to give a bit population a fixed base-2 residue, two check bits were used to give the population a fixed base-3 residue! I knew an engineer who wanted to save a bit, by eliminating this “unnecessary” extra check bit, but was foiled by programmers who were packing an extra bit of information in the 33% of cases where two check-bit settings were legal!)
Can someone explain how Boolean-style logic would be possible with three-state bits? I can’t get my head around how a meaningful truth table can be constructed for conventional logical functions based on properly tristate inputs.
One possibility is to treat the three states as True, False, and Unknown. The negation of Unknown is Unknown, and Unknown combined with anything that doesn’t already decide the result is Unknown. (In Kleene’s three-valued logic, a known operand can still force the result: False AND Unknown is False, and True OR Unknown is True.)
ETA: More details here.
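A minimal sketch (my addition, using Python’s None to stand in for Unknown) of Kleene’s strong three-valued logic:

```python
# Sketch (not from the thread): Kleene's strong three-valued logic, with
# None standing in for Unknown. A known operand can still decide the
# result (False AND x = False; True OR x = True); otherwise Unknown
# propagates, and NOT Unknown is Unknown.

def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:
        return False            # one False decides the result
    if a is None or b is None:
        return None             # otherwise Unknown propagates
    return True

def k_or(a, b):
    # De Morgan's law still holds in Kleene logic
    return k_not(k_and(k_not(a), k_not(b)))

# Print the truth table for AND:
for a in (True, False, None):
    for b in (True, False, None):
        print(a, b, k_and(a, b))
```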
Boolean algebra is just the finitary algebraic theory of the two-element set; the operations of Boolean algebra are just the finitary functions from the two-element set to itself, and the identities of Boolean algebra are just the identities those satisfy. In other words, every possible “truth table” defines some Boolean operator, and the rules of Boolean algebra are just that two expressions built up out of such operators are guaranteed equal just in case their “truth tables” match up.
But there’s nothing magical about the two-element set. We can do the same thing with any set. We can talk about Trinary algebra, which is just the finitary algebraic theory of the three-element set. Its operations are just the finitary functions from the three-element set to itself; its identities are just the identities those satisfy. In other words, again, we can think of every ternary “truth table” as defining some operator in our ternary logic, with the rules of the logic saying that two expressions built up in whatever fashion are equal just in case their “truth tables” coincide. It’s not quite Boolean algebra, but it’s not so terribly different, either. Boolean algebra, from this point of view, is only special insofar as one considers 2 special.
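The “every truth table defines an operator” point can be made concrete by counting: an n-ary operator on a k-valued logic is a function from kⁿ input tuples to k outputs, so there are k^(kⁿ) of them. A trivial sketch:

```python
# Sketch: counting the operators described above. An n-ary operator on a
# k-valued logic is a function from k**n input tuples to k outputs, so
# there are k ** (k ** n) distinct operators.

def num_operators(k, n):
    return k ** (k ** n)

assert num_operators(2, 1) == 4        # unary Boolean: id, not, const-0, const-1
assert num_operators(2, 2) == 16       # binary Boolean: AND, OR, XOR, NAND, ...
assert num_operators(3, 1) == 27       # unary ternary operators
assert num_operators(3, 2) == 19683    # binary ternary operators
```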
In other words, as a poster mentioned above, it’s not that we use bits in computer design because Boolean algebra is so useful for computers. It’s that Boolean algebra is so useful for computers, because we use bits in computer design. If we used trits for computers, the useful algebra would be “trinary algebra”, in all the same ways.
That having been said, bits are also useful in life, because many pieces of data in life are such that they take one of two possible values. And because of this, bits/Booleans are a useful programming datatype, whether or not they’re what one’s hardware is based around (e.g., “Is this string empty or non-empty?”, “Does this file exist or not exist?”, “Are these integers equal or unequal?”). So no matter what, programmers are going to be using Booleans and Boolean algebra to some extent. But this is largely independent of the question of what the hardware is designed around. Ternary data arises in life as well, even if not quite as ubiquitously (e.g., “Is this integer greater than, equal to, or less than that integer?”, “Is this a letter, digit, or symbol?”, “Is this a red light, yellow light, or green light?”), but programmers are not stymied in their attempts to deal with it by using binary computers.
[Of course, there is a sense in which Boolean algebra and trinary algebra are both able to simulate each other (a bit being a trit which isn’t 2; a trit being a pair of bits which aren’t both 1). In the parlance of categorical logic, Boolean algebra and trinary algebra have different Lawvere theories as categories with finite products, but identical Lawvere theories as categories with finite limits (specifically, both correspond to the category of finite sets).]
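That mutual simulation can be sketched directly (the encoding choices here are mine): a trit stored as a pair of bits with the pair (1, 1) left unused, much like the two-cores-per-trit Setun scheme described earlier.

```python
# Sketch (encoding mine): a trit simulated by a pair of bits, with the
# fourth bit-pair state (1, 1) left unused, and a bit recovered as a
# trit restricted to {0, 1}.

def trit_to_bits(t):
    assert t in (0, 1, 2)
    return (t >> 1, t & 1)       # 0 -> (0,0), 1 -> (0,1), 2 -> (1,0)

def bits_to_trit(pair):
    hi, lo = pair
    assert (hi, lo) != (1, 1)    # the fourth state goes unused
    return (hi << 1) | lo

# Round trip works for all three trit values:
for t in (0, 1, 2):
    assert bits_to_trit(trit_to_bits(t)) == t

# And a bit is just a trit that never takes the value 2:
def bit_as_trit(b):
    assert b in (0, 1)
    return b
```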
Well, not quite. Unknowns are used during simulation, but they don’t really exist in the hardware. (Indeterminate states are not the same as an unknown.) (Internally, buses also have floating or Z values, mentioned in the link; that is a real value, but it only exists in certain structures.)
Gate level simulators and fault simulators handle unknowns. Higher level RTL simulators do not. Unknowns cause all sorts of problems. For instance, a common way of observing the result of a test is using a Multiple Input Signature Register (MISR), which compresses tens of billions of bits of data into, say, 64. This works off of polynomial division, and having an unknown value entering the MISR during simulation poisons the entire thing. There has been a lot of work done on eliminating unknowns.
Where do they come from? When you power up, memory elements are at an unknown state. They are actually at some known state, maybe even a consistent one, but you don’t know it while doing the simulation. One real example. One circuit we were supposed to create a test for had a ring counter, and some logic which would send an initialization pulse when the counter reached all 1s. This works fine in hardware, since you will eventually get to the state, though you don’t know when. It doesn’t work too well in simulation, since you start the counter off at all Xs and never get out of that state, and so never get an initialization pulse.
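Here is a toy model (my own construction, not the actual simulator or circuit in question; I’ve assumed a Johnson-style twisted ring counter, which cycles through the all-1s state) of that power-up problem, simulated with the three values 0, 1, and 'X': from a known state the all-1s detector fires within a few cycles, but from the all-X power-up state the simulation never gets there.

```python
# Toy model (assumptions mine): a 4-stage Johnson-style ring counter
# simulated with three values 0, 1, and 'X' (unknown). In hardware the
# counter always reaches all 1s eventually; in simulation, starting from
# all X, the all-1s detector can never be proven to fire.

def x_not(v):
    return 'X' if v == 'X' else 1 - v

def johnson_step(state):
    # shift right, feeding the inverted last bit back into the front
    return [x_not(state[-1])] + state[:-1]

def all_ones(state):
    if all(v == 1 for v in state):
        return 1
    if any(v == 0 for v in state):
        return 0
    return 'X'                   # some bits unknown: can't tell

# Known power-up state: the initialization pulse fires within a few cycles.
s = [0, 0, 0, 0]
fired = False
for _ in range(8):
    s = johnson_step(s)
    if all_ones(s) == 1:
        fired = True
assert fired

# Simulated power-up: every flop is X, and stays X forever.
s = ['X'] * 4
for _ in range(100):
    s = johnson_step(s)
assert s == ['X'] * 4 and all_ones(s) != 1
```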
BTW, CDC machines, which were one’s complement, had the all 1’s value (-0) as Unknown. Very useful in writing assembly language code for them at times.
As I mentioned, no one has designed a processor using five volts for quite a while now.
CPUs run faster internally than you can supply a clock. Therefore, internally they do not use crystal oscillators but Phase Locked Loops (PLLs) to generate a fast clock signal. Don’t expect me to explain them - they are basically analog and I’m allergic to analog.
Look up SerDes (serializer/deserializer). That is the I/O strategy used by most high end chips these days.
It used to be that to improve throughput you routed a bus with, say, 8 bits around your board. However, at really high speeds the problems of crosstalk and skew became so great that speed was limited. SerDes I/O uses a single line which is very fast. It is also self-clocking. There is a complex link-training protocol involved. With cheaper transistors, you can afford to put lots of decoding hardware on the chip.
Quoth Hari Seldon:
I’m having a hard time picturing this: 9, for instance, would be represented as 1100, and adding one to that would give you 1101, which doesn’t carry over to the next nibble.
Quoth Indistinguishable:
I’ve been learning Fortran 77 lately, and was very interested to learn that it has a trinary branching statement (despite running on binary computers). The statement
if (a) 10, 20, 30
means “If a is negative, go to line 10. If a is zero, go to line 20. If a is positive, go to line 30.” (where “a”, most often, is a difference of two variables). In fact, I think this was actually the only branching statement in the early versions of Fortran. If (as often happened) a programmer wanted only two branches, he’d have to explicitly point two of those branches to the same line, like (actual example)
if(d10-14.5) 777,777,778
But remember: one is represented as 0100. And adding 1100 to 0100 does indeed carry over.
Of course, to really carry out arithmetic on digits in this encoding, you have to add as though it was in binary, and then cancel out the excess 0011. But at least the carry bits get flagged appropriately by the binary addition in each digit-adder along the way, which was the advantage.
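That digit adder can be sketched as follows (a toy model under my own assumptions): add the excess-3 nibbles as plain binary, take the nibble carry as the decimal carry, then apply a ±3 correction to restore excess-3.

```python
# Sketch (assumptions mine) of the excess-3 digit adder described above:
# each decimal digit d is stored as the 4-bit value d + 3, the nibbles are
# added as plain binary, and the nibble carry pops out exactly when the
# decimal addition carries (a+b+6 >= 16 iff a+b >= 10); a +3/-3 correction
# then cancels the doubled excess.

def excess3_add_digit(x, y, carry_in=0):
    """x, y are excess-3 nibbles (digit + 3). Returns (sum_nibble, carry_out)."""
    raw = x + y + carry_in                    # plain binary addition
    carry_out = 1 if raw >= 16 else 0         # nibble overflow == decimal carry
    raw &= 0xF
    raw = raw + 3 if carry_out else raw - 3   # restore excess-3
    return raw, carry_out

# 9 + 1: encoded as 1100 and 0100 — the carry pops out, sum digit is 0.
s, c = excess3_add_digit(0b1100, 0b0100)
assert (s, c) == (0b0011, 1)
# 4 + 3: encoded as 0111 and 0110 — no carry, sum digit is 7 (stored as 1010).
s, c = excess3_add_digit(0b0111, 0b0110)
assert (s, c) == (0b1010, 0)
```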
Yes. The question has been clouded by the idea that throughput is increased by sending a trit down a wire instead of a bit, outside of a processing chip. But if that were the strategy, why stop at 3 voltages? There are all sorts of ways to increase bandwidth between large scale components, but the problems occur down on the silicon. Someone has to devise practical small scale components that take advantage of 3-state logic, or the data will have to be converted to 2-state form going in and out.
Right, on the lowest hardware level yes.
Of course, in high-level languages, there would still be Boolean variables and logic operators. Computers still need to make TRUE/FALSE decisions, and Boolean variables are the most useful for that. It’s not like any high-level language only uses one bit to store a Boolean variable, right? The only difference in a high-level language on a trinary computer would be the bitwise operators, if the language has them. (I mean, we could design a language without Boolean variables and with only built-in trinary variables, but only if we were trying to make things difficult.)
This similarity between low-level hardware’s use of high/low and Boolean variables and operators in high-level languages is, I think, the cause of some people’s impression that binary logic is somehow essential to computers.
Right. I wanted to go back later and edit that post so that it said more specifically “so useful for computer design” in both instances instead, but the edit window ran out on me. But I think I made similar points in the rest of the post anyway.