As I think about this I come up with more and more reasons why it isn’t being done. One is speed. Say we have the values 0, 1, and 2, corresponding to three regions of the voltage range.
A 0 -> 1 or 1 -> 2 transition would be relatively fast; a 0 -> 2 transition would have to traverse a larger voltage range, and so would be slower. Timing analysis would have to assume this worst case, so the clocks would have to be slowed down to accommodate it. We would also have two indeterminate voltage regions instead of one. So any speed advantage you’d get from fewer gates would be lost to a slower clock. You would also take a power hit, partly because of the bigger voltage swing. It is a problem now when everything switches at once; everything going from 0 -> 2 at once would be a real problem.
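As a toy illustration of the worst-case-transition point (a made-up constant-slew model with hypothetical voltage levels; real timing analysis is far more involved than this):

```python
# Toy slew-rate model (illustrative only): assume the driver ramps the
# wire at a fixed rate, so transition time is proportional to the
# voltage swing it must cross. Levels and slew rate are made up.
SLEW_V_PER_NS = 1.0                      # hypothetical slew rate, volts per ns
LEVEL_VOLTS = {0: 0.0, 1: 0.6, 2: 1.2}   # hypothetical ternary voltage levels

def transition_ns(a, b):
    """Time for the wire to swing from level a to level b."""
    return abs(LEVEL_VOLTS[b] - LEVEL_VOLTS[a]) / SLEW_V_PER_NS

# The clock period must cover the *worst-case* transition, not the typical one:
worst = max(transition_ns(a, b) for a in LEVEL_VOLTS for b in LEVEL_VOLTS)
print(transition_ns(0, 1))   # adjacent-level swing: 0.6 ns
print(worst)                 # the 0 -> 2 swing, 1.2 ns, sets the clock
```

So even though most transitions are fast, the clock is paced by the rare full-swing case.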
2 element Boolean algebras are just one class of them - there are many others. I think there are 4 element Boolean algebras.
The Wiki entry on Boolean Algebra makes it sound like there are only 2-element ones; the Wolfram article looks a bit more general, if a bit more complex.
It’s a little bit buried, but the Wikipedia article does mention that there are Boolean algebras with 2^n elements for every n > 0. Furthermore, those are the only finite ones; there’s no Boolean algebra with three elements.
I am aware that there are Boolean algebras with more than 2 elements. But, still, the theory of Boolean algebras is specifically the one whose operations are precisely (in correspondence with, up to equivalence) the finitary functions on the 2-element set, and whose identities/equivalences are precisely the ones satisfied by those. Naturally, there are other structures which happen to model this same algebraic theory; for any algebraic structure X, any power of X models the same theory, for instance (thus giving us Boolean algebras with cardinality 2^n for any n). But all the same, the theory of Boolean algebras is fundamentally connected to the 2-element set, and can be defined as the (maximal, in some sense) finitary algebraic theory of the 2-element set, even though this theory will of course have more than one model, as any algebraic theory does.
(Incidentally, there are Boolean algebras of cardinalities not of the form 2^n, but these must be infinite.)
You are right, but the casual reader could be excused for thinking there were only 2. I learned about 2^n Boolean algebras in college, and never have used them in the 40 years since.
(Fundamentally, what allows this to happen is that Boolean algebra is defined only by reference to the finitary operations on the 2-element set. If one defines the stronger analogous notion using infinitary operations on the 2-element set, one gets the theory of complete atomic Boolean algebras; equivalently, the theory whose models are sets, and such that a homomorphism from A to B is a function from B to A, the contravariant equivalence known as the Lindenbaum-Tarski theorem; this duality generalizes to intuitionistic logic when 2 is replaced by the appropriate set of truth values, as shown by Robert Pare and famously used to construct the finite colimits in an elementary topos from the rest of its structure in a suitably minimal definition.)
Anyway, the finite Boolean algebras are all ones you are already familiar with: the structures whose elements are strings of n bits (for some fixed n), with all operations computed bitwise. E.g., the structure of bitwise operations on 32-bit words is the (essentially unique) Boolean algebra of cardinality 2^32. Though Voyager claims not to have used 2^n Boolean algebras in the 40 years since college, one suspects he has in fact used them in this guise. Of course, all it amounts to is using the 2-element Boolean algebra n times in parallel, so, you know, it’s a distinction without much difference.
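Here’s that “bitwise operations on 32-bit words” algebra made concrete, with a few of its laws spot-checked (just a sketch; the function names are mine):

```python
# The Boolean algebra of 32-bit words: elements are ints 0..2**32 - 1,
# and the operations are the ordinary bitwise operators, i.e. the
# 2-element algebra applied to all 32 bit positions in parallel.
MASK = 0xFFFFFFFF        # the "top" element: all 32 bits set; 0 is "bottom"

def b_not(x):    return ~x & MASK   # complement, masked back to 32 bits
def b_and(x, y): return x & y       # meet
def b_or(x, y):  return x | y       # join

a, b = 0xDEADBEEF, 0x0000FFFF

# Spot-check some Boolean-algebra laws on these particular elements:
assert b_or(a, b_not(a)) == MASK                         # a OR NOT a = top
assert b_and(a, b_not(a)) == 0                           # a AND NOT a = bottom
assert b_not(b_and(a, b)) == b_or(b_not(a), b_not(b))    # De Morgan
print(hex(b_and(a, b)))   # 0xbeef: the low 16 bits of a
```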
An example of an infinite Boolean algebra of cardinality not of the form 2^n is the Boolean algebra whose elements are countably infinite strings of bits which turn constant beyond some point, with operations again computed bitwise. This has countably infinite cardinality, which is, of course, not of the form 2^n. Other examples are along similar lines; e.g., the algebra of computably decidable subsets of the natural numbers, with operations computed bitwise on their characteristic functions.
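One way to compute in that countably infinite algebra is to represent each eventually-constant bitstring as a finite prefix plus the bit its tail repeats forever (the representation and names here are my own; just a sketch):

```python
# An eventually-constant bitstring is a finite prefix plus a tail bit
# repeated forever; e.g. prefix (1, 0, 1) with tail 0 is 1010000... .
def normalize(prefix, tail):
    """Canonical form: absorb trailing prefix bits equal to the tail."""
    p = list(prefix)
    while p and p[-1] == tail:
        p.pop()
    return (tuple(p), tail)

def bitwise(op, x, y):
    """Apply a 2-element Boolean operation pointwise; the result is
    again eventually constant, so the algebra is closed under op."""
    (px, tx), (py, ty) = x, y
    n = max(len(px), len(py))
    getx = lambda i: px[i] if i < len(px) else tx
    gety = lambda i: py[i] if i < len(py) else ty
    return normalize([op(getx(i), gety(i)) for i in range(n)], op(tx, ty))

AND = lambda a, b: a & b
x = normalize((1, 0, 1, 1), 1)   # 1011111...
y = normalize((0, 1, 1), 0)      # 0110000...
print(bitwise(AND, x, y))        # ((0, 0, 1), 0), i.e. 0010000...
```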
Er, I lost the punchline above by the way I phrased the Lindenbaum-Tarski theorem. The point is, any complete atomic Boolean algebra is indeed some power of the 2-element Boolean algebra; that is, it consists of strings of bits of some fixed (possibly infinite) size, operated on bitwise. The missing infinitary identities ensuring this above and beyond the laws of general Boolean algebra are the highly generalized laws of excluded middle “For any variables p1, …, pk, the disjunction, over all bit-tuples (b1, …, bk), of (p1 XOR b1) & (p2 XOR b2) & … & (pk XOR bk), is equal to TRUE”, for infinite k. The ability to construct Boolean algebras failing this condition is in some sense the key to being able to use forcing to “add new bitstrings to the world”, as in the proof of the relative consistency of the negation of the continuum hypothesis with most natural formulations of set theory.
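In standard notation (my own transcription, with ⊕ for XOR and the join running over every assignment of bits to the variables, the index set possibly infinite), that law reads:

```latex
\bigvee_{(b_1, \dots, b_k) \in \{0,1\}^k}
  \bigl[ (p_1 \oplus b_1) \wedge (p_2 \oplus b_2) \wedge \cdots \wedge (p_k \oplus b_k) \bigr]
  \;=\; \top
```

For k = 1 this is just (p ⊕ 0) ∨ (p ⊕ 1) = p ∨ ¬p = ⊤, the ordinary law of excluded middle.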
But, anyway. That’s all getting rather a bit away from the simple applications in CS.
Ah, much funnier!
It’s all in the timing.
Jan Lukasiewicz did a lot of pioneering work on non-bivalent logics. See here and here, for instance.
Actually, no. That was the Ferranti Mk 1.
Right. You’re just showing off your elementary knowledge of Boolean algebras.
Yeah, you’re right, sorry about that. I started out just wanting to mention that you could have a Boolean algebra whose cardinality wasn’t 2^n, but I got carried away into essentially just showing off, as you note.
I’m a gate level kind of guy, so I think of bitwise operations being done one bit at a time, since that is the way they show up in the netlist.
No it isn’t; it’s binary, just not in the typical way. There are only two states: sound or no sound. Let me show you a chart:
dot 1
dash 11
letterspace 0
wordspace 00
You could encode a telegraph message entirely in binary without any conversion, although the dash and wordspace might be a bit longer than necessary.
How would you distinguish between two dots and a dash?
Presumably, what it really is is something like:
dot: 10
dash: 110
space: 0
Which is prefix free. Then you use one “space” to indicate a letter space and two "space"s to indicate a word space (never needing to indicate two letter spaces in a row).
[Disclaimer: I know nothing about Morse code]
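A quick sketch of a greedy decoder for this scheme (my own toy code, using the codeword table above), which also shows that the two-dots-versus-dash ambiguity goes away:

```python
# Prefix-free Morse-ish code: dot = "10", dash = "110", and a lone "0"
# marks one space unit. No codeword is a prefix of another, so a
# greedy left-to-right match decodes the stream with no extra framing.
CODES = {"10": ".", "110": "-", "0": " "}   # " " = one space unit

def decode(bits):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in CODES:          # greedy match is safe: code is prefix-free
            out.append(CODES[buf])
            buf = ""
    assert buf == "", "stream ended mid-codeword"
    return "".join(out)

assert decode("1010") == ".."     # two dots...
assert decode("110") == "-"       # ...no longer confusable with a dash
# dot, dash (the letter A), one space unit, then another dot:
print(decode("10" + "110" + "0" + "10"))   # ".- ."
```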
Yes, I made a mistake. Thanks for catching it, Indistinguishable.
That means letterspace == 00 and wordspace == 000 in the stream, since each dot or dash codeword already ends in a 0. But it still works. It was something I discovered while reading another thread that tried to figure out what base Morse code was in. When I figured out it was base 4, I realized it could be simplified to base 2.
Since Indistinguishable’s code is prefix-free, the codes for spaces would be “0” and “00”. (As he says above.)
However, you don’t even need “letterspaces” anymore — if that word means “inter-letter spaces”. (And if it doesn’t, ignore this paragraph.) The prefix-free property of the code means that the letter boundaries are implicit. You get that feature for free.
So, I’d use “0” to mean “inter-word space”, and “00” to mean “inter-sentence space”, assuming Morse conventionally distinguishes those two things.
Are you saying that this encoding shows that Morse Code is binary?
It doesn’t show that; you can encode anything in binary, but an encoding doesn’t make the thing itself any more or less “really” binary.
In other words, if you think you’ve got an idea of what things are binary and what things aren’t, and you’ve shown that Morse Code goes in the “binary” pile, I think you’re wrong.
But I may be wrong about what I think you think.
It seems to me you’d still need letter spaces, since a letter generally consists of a whole string of dots and dashes, rather than just a single dot or dash. You want to know where one string of dots and dashes ends and the next begins.