32 Bits iPhone App



32 Bits is a binary clock showing 32-bit Unix time. Watch the bits change and see the elapsed time since January 1, 1970. Tap any bit to show how often it changes and when it will change next.

Other binary clocks may be easier to read, but only 32 Bits shows the maximum information with the minimum bits. It covers a 136-year period with a precision of 1 second, all in just 32 bits.

"Relaxen und watschen der Blinkenlights."

Available on the iTunes App Store.

Why?

So, why make a binary clock that you can't read? That is an interesting story.

I was intrigued when I saw the first binary clocks. Shiny, blinking lights, and a chance to use binary math. What's not to like?

The problem is that most binary clocks are not truly binary; they use BCD (Binary Coded Decimal). Representing the number of seconds from 0 to 59 in binary requires 6 bits (59 = 32 + 16 + 8 + 2 + 1 = 2^5 + 2^4 + 2^3 + 2^1 + 2^0 = 111011). But most binary clocks use BCD, breaking the number into separate decimal digits and then encoding each one. The tens digit then takes 3 bits (5 = 4 + 1 = 2^2 + 2^0 = 101) and the ones digit takes 4 bits (9 = 8 + 1 = 2^3 + 2^0 = 1001). BCD is easier to read, because you only have to know the binary codes for 0 through 9.

The problem with BCD is that it doesn't follow the full binary sequence. You are peacefully counting along with it: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, then, blam, with no warning, back to 0000. Going from 9 back to 0 makes sense if you are counting in decimal, but it is disconcerting in binary.
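Here is a quick Swift sketch of the two encodings (made up for illustration, not the app's actual code; the pad helper is hypothetical). Watch the BCD ones digit snap from 1001 back to 0000 while the pure binary count just keeps incrementing:

    // Left-pad a value's binary form to a fixed number of bits.
    func pad(_ value: Int, to bits: Int) -> String {
        let s = String(value, radix: 2)
        return String(repeating: "0", count: bits - s.count) + s
    }

    // Pure binary: the whole 0-59 value in one 6-bit field.
    // BCD: split into decimal digits, 3 bits for the tens, 4 for the ones.
    for s in 7...12 {
        print("\(s): binary \(pad(s, to: 6))  BCD \(pad(s / 10, to: 3)) \(pad(s % 10, to: 4))")
    }

At 9 seconds the BCD ones field reads 1001; one tick later it is 0000, even though the pure binary count just went from 001001 to 001010.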

This one goes to 1101

The other obvious solution is to encode the hours, minutes, and seconds each as a binary number. The minutes and seconds both use 6 bits, and the hours use 5 bits (either 5 bits for 0-23, or 4 bits for 1-12 plus 1 bit for AM/PM). Call this sexagesimal, because it puts the base-60 minutes and seconds into binary.

The sexagesimal clock is better than the BCD clock, because sexagesimal changes smoothly from 1001 to 1010. But it has the same problem as BCD: when the seconds reach 111011, blam, back to 000000. The seconds only do this once a minute, and the minutes once an hour, but it still happens.
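In the same sketchy Swift (again, just an illustration with hypothetical helpers), the sexagesimal layout and its rollover look like this:

    // Left-pad a value's binary form to a fixed number of bits.
    func pad(_ value: Int, to bits: Int) -> String {
        let s = String(value, radix: 2)
        return String(repeating: "0", count: bits - s.count) + s
    }

    // Hours, minutes, and seconds as three plain binary fields (5 + 6 + 6 bits).
    func sexagesimal(_ h: Int, _ m: Int, _ s: Int) -> String {
        return pad(h, to: 5) + " " + pad(m, to: 6) + " " + pad(s, to: 6)
    }

    print(sexagesimal(9, 41, 59))  // 01001 101001 111011
    print(sexagesimal(9, 42, 0))   // 01001 101010 000000 -- blam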

On the other hand, it is easier to see that the BCD 101 1001 is 59 than that the sexagesimal 111011 is, but if you want easy, just use a decimal digital clock.

This one goes to 11111111111111111111111111111111

Well, if having a correct binary clock means that it's harder to read, then we might as well go whole hog and just count seconds in one big binary number. That's how Unix computers keep time, by counting the seconds in binary. If the pesky humans need their times formatted to match the arbitrary orbit and rotation of a random planet, then the computer will just figure out the years, months, days, hours, minutes, and seconds when it needs to.

We could just count the seconds from midnight, counting from 0 up to 86399 with 17 bits, but then we have another discontinuity each midnight, going from 10101000101111111 back to 00000000000000000. No, we need to pick one time on a particular date and count seconds from then. This is called the epoch. In Unix time, the epoch is midnight UTC, January 1, 1970. If we use 32 bits as the seconds counter, then we can represent a time span of 2^32 seconds (4294967296 seconds, or just over 136 years).
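Reading that clock takes only a few lines of Swift (a minimal sketch, not the app's actual source; it assumes an unsigned 32-bit counter, which holds until the year 2106):

    import Foundation

    // Seconds since midnight UTC, January 1, 1970, as an unsigned 32-bit counter.
    let seconds = UInt32(Date().timeIntervalSince1970)

    // Show it as a full 32-bit binary string.
    let bits = String(seconds, radix: 2)
    print(String(repeating: "0", count: 32 - bits.count) + bits)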

Finally, we have a binary clock without discontinuities; one that counts smoothly from 0 all the way up to 11111111111111111111111111111111. Is it going to tell you if you are late for a meeting? No, but it sure looks pretty.