Ready for the Y2K38 Epoch Bug?

Just when Y2K is a distant memory, there’s a new problem on the horizon and I just tripped over it. The Unix Epoch Bug will hit in 2038 when the clock “rolls over” on pretty much every *nix system on the planet.

Computers being the faithfully logical things that they are, if you want something to be “forever” you have to give an actual date, even if it is arbitrarily far off in the future. Most Unix systems have adopted “today” plus 9,999 days as forever. That’s a bit over 27 years from now, and in the fast-changing world of computers, it is a pretty good approximation of forever.

But it’s not actually forever, and the system accounts I created that should never expire are being immediately expired, as if they’d been created in 1902. Ugh… Can’t really blame Bell Labs for this - when they developed Unix in 1970, they probably didn’t expect people would actually still be using it 68 years later.
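
If you want to see the rollover for yourself, here’s a rough little C sketch (it assumes a machine whose native time_t is 64 bits, so it can still print the wrapped date):

/* A rough sketch of the rollover: a signed 32-bit seconds counter runs out
 * one second after 03:14:07 UTC on January 19, 2038 and wraps into 1901.
 * Assumes the native time_t is 64 bits so gmtime() can show both dates. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* Largest value a signed 32-bit counter can hold:
     * 2^31 - 1 seconds after January 1, 1970. */
    time_t last = (time_t)INT32_MAX;
    printf("last 32-bit second: %s", asctime(gmtime(&last)));

    /* One tick later the counter wraps to -2^31, which gmtime()
     * reads as a date back in December 1901. */
    time_t wrapped = (time_t)INT32_MIN;
    printf("one second later:   %s", asctime(gmtime(&wrapped)));
    return 0;
}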

Now would be a good time to upgrade to 64-bit… The terminal date in 64-bit is over 292 billion years from now. :eek:
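
Where that figure comes from, roughly (back-of-the-envelope, using an average Gregorian year):

/* Back-of-the-envelope check of the 64-bit "forever" date. */
#include <stdio.h>

int main(void)
{
    double max_seconds   = 9223372036854775807.0;  /* 2^63 - 1 */
    double secs_per_year = 365.2425 * 24 * 3600;   /* ~31.6 million */
    printf("%.1f billion years\n", max_seconds / secs_per_year / 1e9);
    /* prints roughly 292.3 billion years */
    return 0;
}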

That’s very shortsighted. The Y292B bug is going to suck.

Systems started addressing it before 2008, when people recognized that 30-year mortgage calculations would stop working. That was one of the drivers for 64-bit systems (or at least a 64-bit time_t) at the time.
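
Rough numbers on why the mortgages were the canary (hypothetical mid-2008 origination date, calendar details ignored):

/* A 30-year maturity date computed from mid-2008 no longer fits in a
 * signed 32-bit seconds counter. The origination timestamp is made up. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t mid_2008   = 1215000000;              /* roughly July 2008, in Unix seconds */
    int64_t thirty_yrs = 30LL * 365 * 24 * 3600;  /* ~946 million seconds */
    int64_t maturity   = mid_2008 + thirty_yrs;   /* ~2,161 million */
    printf("maturity = %lld, INT32_MAX = %lld\n",
           (long long)maturity, (long long)INT32_MAX);
    printf("fits in 32 bits? %s\n", maturity <= INT32_MAX ? "yes" : "no");
    return 0;
}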

It really doesn’t matter since the world is going to end in 2012.

Nonsense – the world ended on July 5th, 1998.

I thought we already dealt with this.

In Summary:

Will there be absolutely zero problems related to this in the intervening three decades? No.

Will there be absolutely zero consequences from this? Probably not.

Will this be an excuse to run in circles, scream and shout, and Prepare For Doomsday? Most certainly.

Will this be a catastrophe? No.

In Geekery:

Most of the world that has to deal with this effectively already has, given how quickly 64-bit servers were taken up, starting with the MIPS R4000 and the Alpha in the early 1990s and accelerating when AMD introduced the 64-bit extension to the x86 architecture in 2003. This is because servers frequently need more than 4 gigabytes of RAM, which is more than 32-bit pointers can address. Also, all the fast server-class CPUs are 64-bit at this point, AFAIK.

It’s possible to run 32-bit x86 binaries natively on 64-bit x86 hardware. However, they inherit the 4 gigabyte memory limit all 32-bit code has. This makes them much less attractive than native 64-bit code, especially if your server came with 8 or 16 gigs of physical RAM and your OS can use virtual memory to multiply that a few times.
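
If you want to check what a given build gives you, compile something like this twice, once normally and once with gcc -m32 (assuming the 32-bit libraries are installed); on a typical Linux/x86 box of that era the 64-bit build reports 8-byte pointers and time_t, the 32-bit build 4-byte ones:

/* Quick sanity check of pointer and time_t width for a given build. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
    return 0;
}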

Finally, servers are a lot more likely to run open-source software. Linux and the open-source BSDs have killed most of the classic closed-source Unixes and, therefore, run the Internet with relatively little serious competition from any closed-source software vulnerable to this problem. So any software anyone still cares about will be fixed in the intervening 28 years, and, since server software gets updated frequently due to security concerns, the date fixes have a pretty damn good chance of propagating everywhere anyone cares about.

Thanks for pointing this out, Derleth. At work I’ve inherited a bunch of 32-bit apps, and the strategy is to run them on 64-bit servers alongside other native apps. It hadn’t occurred to me that the 32-bit apps would not be able to take advantage of memory > 4GB. They’re pretty small native “C” apps though, so they probably don’t need anything close to 4GB, but it’s good to know.