I recently came across this clock, but I do not understand how to read it. Can anyone with a computer-science background break it down for me? It is called an “LED Binary Clock” and an image that is supposed to help explain everything is available here.
OK, you look at the columns of lights and figure out one digit for each column, and then those 6 digits convey the time. For each column, you add together a 1 if the bottom is lit, plus a 2 if the next one up is lit, plus a 4 if the one next to the top is lit, plus an 8 if the top is lit. Try that - make sense?
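If it helps to see that rule written out, here it is as a couple of lines of Python (a toy sketch; the variable names and list layout are just mine):

[code]
# The rule above for one column: LEDs listed bottom to top, worth 1, 2, 4, 8.
lit = [1, 1, 0, 0]       # bottom light and the next one up are on
weights = (1, 2, 4, 8)
print(sum(w for w, on in zip(weights, lit) if on))  # prints 3
[/code]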
An ex of mine had this clock, and despite repeated explanations I could only read it with a lot of deliberation and concentration. And even then I was never really accurate. It always made me feel stupid.
Napier: Thanks, I had to re-read your explanation a few times but I think I’ve got the hang of it now.
Now to decide if it’s worth buying.
The reason you can’t figure it out is that the display is a crummy hybrid between binary and decimal notation.
The picture shows three double rows of dots. The three groups correspond to hours, minutes, and seconds.
The right side of each paired row counts 1s, 2s, 4s, and 8s. These are the binary numbers 0001, 0010, 0100, and 1000. Add the lit dots in each row to get the number shown in the example photo.
The crummy decimal hybridization comes in with the second pair. These are the decimal numbers 10, 20, and 40. These don’t correspond to any simple binary notation (00001010, 00010100, 00101000, if memory serves).
The only people I know who use binary notation are programmers who work in assembly language. This is a very tiny subset of computer programmers. Someone who actually uses binary would sneer at this whacked implementation.
Walt
It is BCD, binary-coded decimal. It is a useful representation in electronics.
Though the clock does require some math to read, which is slow for most people.
Hogswallop. Anybody who calls themselves a programmer had better be able to understand simple binary arithmetic, even if they’ve never cracked open an asm book in their lives.
I agree that the clock is crummy, though. Would be better if it had five bits for the hour (0-23) and six each for the minutes and seconds.
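For illustration, here’s a quick Python sketch of what that pure-binary layout would look like (the row labels and formatting are mine):

[code]
# 5 bits cover hours 0-23; 6 bits cover 0-59 for minutes and seconds.
import time

t = time.localtime()
print("H", format(t.tm_hour, "05b"))  # e.g. 10 -> 01010
print("M", format(t.tm_min,  "06b"))  # e.g. 48 -> 110000
print("S", format(t.tm_sec,  "06b"))  # e.g. 36 -> 100100
[/code]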
The columns read from left to right, with two columns each for hours, minutes, and seconds. In each pair of columns, the left is for tens and the right is for ones. For example, to represent the number 36, a 3 goes in the tens column and a 6 in the ones column.
So that’s how it’s laid out. To convert from binary to decimal, you need to know that the LEDs are read from bottom to top as 1, 2, 4, and 8. Some columns don’t go all the way to 8 because they don’t need to; the tens column for hours only needs to display 0 to 2, for example. Multiple lights in one column are added to get the correct number: 3 is represented by the bottom light (1) plus the next light up (2), and 7 would be shown by the bottom light (1), the next light up (2), and the next one up as well (4).
In the example given, 10:48:36 is shown by one light in the tens-of-hours column for 10, no lights in the hours column for 0, a light on the 4 row of the tens-of-minutes column for 40, and a light on the 8 row of the minutes column for 8. Lights in the 2 and 1 rows of the tens-of-seconds column add to 3, for 30, and finally lights in the 4 and 2 rows of the seconds column add to 6.
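Here’s the same decoding spelled out as a toy Python script, with the example’s columns written bottom to top (the layout and names are just mine, for illustration):

[code]
# Six columns, each one BCD digit; LEDs listed bottom to top (1, 2, 4, 8).
def digit(leds_bottom_to_top):
    return sum(w for w, lit in zip((1, 2, 4, 8), leds_bottom_to_top) if lit)

columns = [
    [1, 0],        # tens of hours   -> 1
    [0, 0, 0, 0],  # hours           -> 0
    [0, 0, 1],     # tens of minutes -> 4
    [0, 0, 0, 1],  # minutes         -> 8
    [1, 1, 0],     # tens of seconds -> 1 + 2 = 3
    [0, 1, 1, 0],  # seconds         -> 2 + 4 = 6
]
h10, h1, m10, m1, s10, s1 = (digit(c) for c in columns)
print(f"{h10}{h1}:{m10}{m1}:{s10}{s1}")  # prints 10:48:36
[/code]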
Edit: too sloooooooow.
I think you are mistaken about BCD. It’s not “crummy decimal hybridization”. The tens column of each group simply doesn’t have the unused LEDs. Tens of hours never exceed ‘2’ (0010 in BCD), tens of minutes and seconds never exceed ‘5’ (0101 in BCD), so there’s no point in having the unused LEDs.
BCD used to be much more relevant than it currently is. Some mainframes and early minis had hardware instructions for doing arithmetic in these representations directly, either in a packed form or with each digit represented as one byte containing an EBCDIC character, which had the binary representation in its lower 4 bits (I can tell you that Sigmas had a group of registers which functioned as a “decimal accumulator” to operate on numbers in the latter representation). Certain applications still carry data this way, though the arithmetic will now usually be done in software. If fixed-point calculations with no round-off and easy scaling by powers of 10 are important, it has some advantages. And lower-level discrete electronics still uses it, as noted. The circuit geeks were typically the guys who built those clocks, not programmers.
I have one of those clocks in my office. I can switch it to “pure” binary, in which case the bottom three rows become hours, minutes and seconds. (The top row is unused.) I’ve always used it in BCD, though.
It took me a little while to get it, since it’s binary-decimal rather than pure binary (I tried to continue from one column into the next and kept getting a number too large), but here’s a breakdown of how to read it.
Starting top down for each digit:
Start with a value of 0. For each light that is lit (“1”), multiply the previous value by two and add one. For each light that is off (“0”), simply multiply by two.
So for the example on the page
0 1 0
0 1 0 0 1
0 0 0 0 1 1
1 0 0 0 1 0
We get (from the left)
0, 0 * 2, (0 * 2) + 1 = 1
0, (0 * 2), [you get the idea] = 0 {10 hours}
0, (0 * 2) + 1 = 1, (1 * 2) = 2, (2 * 2) = 4
0, (0 * 2) + 1 = 1, (1 * 2) = 2, (2 * 2) = 4, (4 * 2) = 8 {48 minutes}
0, (0 * 2), (0 * 2) + 1 = 1, (1 * 2) + 1 = 3
0, (0 * 2), (0 * 2) + 1 = 1, (1 * 2) + 1 = 3, (3 * 2) = 6 {36 seconds}
Of course, good luck doing the seconds before they change.
I find most other methods of converting from base 2 to base 10 clunky and more problematic. The most common way I’ve heard is converting each digit based on its power of two, with the rightmost (or bottom) digit being 2[sup]0[/sup], but I find the doubling and adding one much easier, even if you know your powers of two fairly high like I do.
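In code, the doubling trick is just a short loop (a toy sketch; the names are mine):

[code]
# Horner's rule in base 2: scan the bits top to bottom, doubling as you go.
def doubled(bits_top_to_bottom):
    value = 0
    for bit in bits_top_to_bottom:
        value = value * 2 + bit
    return value

print(doubled([0, 1, 1, 0]))  # the seconds column of the example: prints 6
[/code]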
I’ve often thought about how one would build a mechanical binary clock. It’s easy enough to make a series of two-state flippers that can rotate and knock the next one down the line. (Think of them like a line of one-toothed gears.) These could be used to indicate the bits. But then you’d have to figure out a way to clear each bit field after 60 seconds instead of the 64 it would take to reset six bits normally.
Hmm, this is an interesting problem. Could you set up a hidden “housekeeping bit” or two to trigger the change? You’d probably be able to do the same for hours (the position of said bit depending on whether you want a 12- or 24-hour clock).
I’ve seen kinetic sculptures like this. The binary part would be easy; the video shows a gate that alternately sends balls down one of two tracks. It would be tough to make one that kept time to the second, though. To reset the minutes, you could have an accumulator at the bottom, and when it had collected enough balls, the weight would trigger a switch that emptied the minute displays and directed a ball to roll down the hours track.
Even better was a clock I saw in Berlin that kept time with colored water flowing through glass pipes. Imagine the S-bend in a typical drain, but stretched to be about twenty feet tall. Every two minutes, a measured amount of water would flow into the top. The height of that column was the minutes display. At 60 minutes, the water column would reach the top of the second bend and start flowing over and out; that created a siphon that drained the column all the way to the bottom and triggered another measured amount into a column that tracked the hours. It was brilliant. I’ve also seen a binary half-adder that worked the same way.
It’s simple BCD, not a crummy hybrid. It took me one minute to read it (although I had to scroll down to make sure I had interpreted it correctly). Had I been more accustomed to binary, I could have read it faster.
There is nothing wrong with mixed-base systems, and there is at least one we all use daily that is not only mixed base but has one base that is variable. Let me describe it. The first (least significant) four digits are two decimal-coded sexagesimal numbers. The next two are in decimal-coded base 24. The next one is decimal-coded in a base that varies between 28 and 31, depending on a moderately complicated function of the most significant digits, which are plain decimal. Not only do we learn this system (well, most of us do), but we use it every day.
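Spelled out in Python, just for fun (the place labels are mine; the standard calendar module supplies the variable base):

[code]
# The everyday mixed-base system described above, one "digit" per place.
import calendar
from datetime import datetime

now = datetime.now()
days = calendar.monthrange(now.year, now.month)[1]  # 28-31, varies
for name, value, base in [
    ("seconds", now.second, 60),
    ("minutes", now.minute, 60),
    ("hours",   now.hour,   24),
    ("day",     now.day,    days),
    ("month",   now.month,  12),
]:
    print(f"{name}: {value} (base {base})")
print(f"year: {now.year} (plain decimal)")
[/code]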
I love my binary clock. I can calculate the time pretty quickly (though there’s an analog one on the wall right behind it), and it’s rather meditative to watch. Plus I like to wait for it to form interesting patterns (rectangles, arrows, etc.).
Binary-coded decimal is far from esoteric: if you do any commercial programming at all you’re likely to have heard of it at least, and if you write in COBOL then you certainly know about and regularly use COMPUTATIONAL-3 representation, better known as COMP-3 or (since COBOL II at least) PACKED-DECIMAL. It’s not quite as efficient as true binary but it converts to external decimal much more easily, and is very often hardware-supported as mentioned above.
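For the curious, here’s roughly how that packing works, sketched in Python (a simplification; real COBOL runtimes handle scaling and additional sign codes):

[code]
# COMP-3 style packing: two decimal digits per byte, sign in the last
# nibble (0xC for positive, 0xD for negative).
def pack_decimal(n):
    sign = 0xD if n < 0 else 0xC
    digits = [int(d) for d in str(abs(n))]
    if len(digits) % 2 == 0:
        digits.insert(0, 0)  # pad so the sign nibble completes the last byte
    nibbles = digits + [sign]
    return bytes(hi << 4 | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

print(pack_decimal(1234).hex())  # prints 01234c
[/code]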
I’m pretty sure that all x86 processors have instructions that act on BCD quantities. However, the instructions are almost certainly emulated in microcode and are extremely likely to be slower than a software BCD implementation.
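In software, that decimal adjustment looks something like this toy Python sketch of the fix-up DAA performs after a plain binary add (the function name is mine):

[code]
# Add two packed-BCD bytes, correcting each nibble that overflows past 9.
def bcd_add_byte(a, b):
    total = a + b
    if (a & 0x0F) + (b & 0x0F) > 9:  # decimal carry out of the low digit
        total += 0x06
    if total > 0x99:                 # decimal carry out of the high digit
        total += 0x60
    return total & 0xFF, total > 0xFF  # (result byte, carry out)

result, carry = bcd_add_byte(0x38, 0x47)
print(hex(result), carry)  # 38 + 47 = 85 -> 0x85 False
[/code]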
I think the clock in the OP is a mix of bases 10, 2, 60, and 24. I also think there IS something wrong with mixed-base systems - they are complicated. They are not the numbers, but a representation of them. Any representation that is complicated enough to be hard to interpret has that wrong with it, though it may be perfectly valid (which this is) or may arise for some worthwhile technical or mathematical reason (which, e.g., base 2 does but, I think, this doesn’t).
This clock is a curiosity, and cute in a way, and I think if I owned one I’d plug it in and keep it set, at least for a while until I got tired of carrying it through the DST change twice a year. But as ambassador for base 2 it wouldn’t be much of a success.