Are there alternatives to binary data storage, and is there any appeal to the alternatives?

A representation whose digits each take one of S symbols carries log(S) information per digit. If the cost per digit is proportional to S, then the efficiency is log(S)/S: S=2 and S=4 are tied for 2nd place among integers, S=3 is almost 6% more efficient than S=2, and the (physically impossible?) S = e ≈ 2.71828 is slightly more efficient still.
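
A quick sanity check of those numbers (a throwaway C sketch of mine, not from the thread; it just evaluates log(S)/S):

    #include <stdio.h>
    #include <math.h>

    /* Radix economy: a digit drawn from S symbols carries log(S)
       information; if cost per digit is proportional to S, the
       efficiency is log(S)/S. */
    int main(void) {
        double bases[] = { 2.0, 3.0, 4.0, exp(1.0) };
        for (int i = 0; i < 4; i++) {
            double S = bases[i];
            printf("S = %.5f  ->  log(S)/S = %.5f\n", S, log(S) / S);
        }
        return 0;
    }

Compiled with -lm, this prints 0.34657 for both S=2 and S=4, 0.36620 for S=3 (about 5.7% better), and 1/e ≈ 0.36788 for S=e, the maximum.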

BUT, that assumes cost is proportional to S. This is clearly true in some simple cases (for example, a station that transmits a single constant-power sine wave at one of S distinguishable frequencies will have S proportional to its FCC-allotted bandwidth), but is it true in many important examples?

A ternary “flip-flop-flap” circuit requires significantly more than 1.5× the hardware of a binary flip-flop, I think.

Many GNU comparison functions have ternary return codes: -1, 0, and +1. Zero is a match, and -1 and +1 indicate the direction of a mismatch.

I’m pretty sure that’s done at the software level rather than the hardware level. It’s not actually a hardware-level compare opcode.

In a lot of places the contract is that any negative number indicates “less than”, while any positive number indicates “greater than”, which allows a comparison between integers to be implemented efficiently as “return x - y”.
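
A minimal C sketch of that contract (the function names here are mine, for illustration); the classic caveat is that “return x - y” can overflow for int operands:

    /* qsort-style contract: negative means "less than", zero means
       "equal", positive means "greater than"; any magnitude will do. */

    /* Tempting but buggy: x - y overflows for, e.g., x = INT_MIN, y = 1. */
    int cmp_naive(int x, int y) { return x - y; }

    /* Overflow-safe version honoring the same ternary contract. */
    int cmp_safe(int x, int y)  { return (x > y) - (x < y); }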

But that’s all done on a binary substrate.

For an interesting SF take on the idea of humans being base-4 computers, read James Hogan’s “Silver Shoes for a Princess”.

A few years back, there was a guy who claimed to be able to encode absurdly high volumes of data in the form of coloured geometric shapes printed on paper (I think it was hundreds of GB per page). Fairly sure it was a scam or at least never came to fruition.

I’ve seen non-binary signaling when my hardware team was doing Ethernet testing; gigabit 1000BASE-T runs five-level PAM-5 symbols on each wire pair. It looks kinda ugly on a scope.
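
For scale, an N-level line symbol carries log2(N) bits, so PAM-5’s five levels give about 2.32 bits per symbol. A throwaway check (my arithmetic, not from the thread):

    #include <stdio.h>
    #include <math.h>

    /* Bits per symbol for an N-level (PAM-N) line code is log2(N). */
    int main(void) {
        for (int n = 2; n <= 5; n++)
            printf("PAM-%d: %.2f bits/symbol\n", n, log2((double)n));
        return 0;
    }

As I understand it, 1000BASE-T spends only 2 of those bits per pair on data and uses the headroom above 2 bits/symbol for error-correcting structure.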