Y2K = leap year is a computer problem?

Okay, this is not about the Y2K problem as such. It’s been discussed, and I think I know just about everything about it.

What’s not been discussed (at least I couldn’t find anything) is why the year 2000 being a leap year is supposed to be a problem, which is something I keep reading about lately.

IIRC, years divisible by 4 are leap years, except that years divisible by 100 are NOT, unless they’re also divisible by 400, in which case they ARE. Now, if I had programmed any of those legacy systems, I would have done one of two things:
a) Just implement the simple 4-year rule, out of convenience or ignorance. That would have been right (although accidentally) until the year 2100.
b) Go whole hog and provide the 100- and 400-year exceptions. That would have been safe until the year 4000 or so.

Obviously, the problem could exist only if someone implemented the 100-year exception but not the 400-year one. So I guess my real question is: Who the heck would ever do that?!
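
For what it’s worth, here’s a minimal sketch in C of the three variants (the function names are mine, not from any particular legacy system). Only the last one, with the 100-year exception but no 400-year exception, gets 2000 wrong:

[code]
#include <stdio.h>

/* Full Gregorian rule: every 4th year, except centuries,
   except centuries divisible by 400. */
int is_leap_gregorian(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* Variant (a): the plain 4-year rule -- accidentally right for 2000,
   first wrong in 2100. */
int is_leap_every_4(int year)
{
    return year % 4 == 0;
}

/* The only variant that breaks on 2000: 100-year exception applied,
   400-year exception forgotten. */
int is_leap_broken(int year)
{
    return year % 4 == 0 && year % 100 != 0;
}

int main(void)
{
    int years[] = { 1996, 1900, 2000, 2100 };
    for (int i = 0; i < 4; i++)
        printf("%d: gregorian=%d every-4=%d broken=%d\n", years[i],
               is_leap_gregorian(years[i]),
               is_leap_every_4(years[i]),
               is_leap_broken(years[i]));
    return 0;
}
[/code]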

Dunno who would bother to program a computer to skip the leap year every 100 years, but…

For those interested, here’s the Mailbag item on why 2000 is a leap year:
http://www.straightdope.com/mailbag/mleapyr.html

Working on Y2K and other date problems, I found two such bugs dealing with Leap Day 2000, both in a program called ObjectView.
[list=a][li]The GUI/Database interface program ObjectView had a simple function called getdate() which returned a string in the format MM/DD/YY or MM/DD/YYY, the latter being for dates in the 2000’s (E.g., on Oct. 29, 2000, it would return 10/29/100). But if the PC date was 2/29/2000, the function returned a zero-length string. (Obviously, they only employed part of the Gregorian calendar rules.) This played havoc with a lot of our project’s functionality.[/li]
Users weren’t supposed to enter dates later than today for most date fields, so to validate a date we compared it to the string from getdate(). But since every string compares greater than “”, every date entered on 2/29/2000 looked like it was in the future and was declared invalid.
[li]For inserting dates into the database, we’d call a function called convertdate() with parameters specifying the date in MM/DD/YY(Y) and a return string format of DD-MON-YYYY. If the year was under 100, it was assumed to be in this century*; greater than or equal to 100 was in the next century.[/li]
“This” and “next” were relative terms, though, pivoting on the PC clock. So with the clock in this century (the 1900’s), the dates came out fine: pass convertdate() “07/04/108” and it returned “04-Jul-2008”. But with the PC clock set past 1/1/2000, the same call returned “04-Jul-2108”! (And “10/29/99” came back as “29-Oct-2099”, because 99 was now in “this” century [the 2000’s].) So with the clock in the next century, passing it “02/29/100” (which we couldn’t even get until I wrote a work-around for getdate()) made it return “29-Feb-2100”. Since that is not a valid date, Oracle rejected it when it was submitted for insert.[/list=a]
* When I say century, I am referring to the nn-hundreds type of century; e.g., the 1900’s, 2000’s.
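
Since I can’t post ObjectView’s actual source, here’s a hypothetical C sketch of the kind of century-pivot logic that would reproduce the convertdate() behavior described above. The function name convertdate_sketch, the month table, and the overall layout are mine; only the observed inputs and outputs come from the real thing:

[code]
#include <stdio.h>
#include <time.h>

/* Hypothetical reconstruction, NOT ObjectView's source: the 2- or
   3-digit year field is simply added to the century of the PC clock,
   so yy < 100 lands in the clock's century and yy >= 100 in the next. */
static const char *mon[] = { "Jan","Feb","Mar","Apr","May","Jun",
                             "Jul","Aug","Sep","Oct","Nov","Dec" };

void convertdate_sketch(const char *mmddyy, char *out, size_t outlen)
{
    int mm, dd, yy;
    sscanf(mmddyy, "%d/%d/%d", &mm, &dd, &yy);

    time_t now = time(NULL);
    int clock_year = localtime(&now)->tm_year + 1900;
    int century    = (clock_year / 100) * 100;      /* 1900 or 2000 */

    snprintf(out, outlen, "%02d-%s-%04d", dd, mon[mm - 1], century + yy);
}

int main(void)
{
    char buf[16];
    convertdate_sketch("07/04/108", buf, sizeof buf);
    printf("%s\n", buf);   /* "04-Jul-2008" with a 19xx clock,
                              "04-Jul-2108" once the clock passes 2000 */
    return 0;
}
[/code]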

P.S. - The year 2000 is going to be a slow, quiet year, because everyone is saying that every event is the last of its kind in the 20th century. Since the 20th doesn’t end until 12/31/2000, we’ll have 12 months to recover from the Y2K riots. :slight_smile:

I too have wondered this exact question. But we don’t really think anyone’s gonna fess up, do we?

In Advanced UNIX Programming, one of those seminal programming books that every CS major used to own, Marc Rochkind published code for a time program that took the 100-year rule into account, but not the 400-year rule. Given how influential that book was, I bet the code got copied into a lot of running programs. I’ve used it myself, but I fixed it during the transcription. I’m pretty sure Rochkind just didn’t know about the 400-year rule; I hope his other disciples did.
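
To make the failure mode concrete, here’s a sketch of my own (not Rochkind’s code; the names is_leap_incomplete and days_to_date are made up) showing how a 100-but-not-400 leap test silently shifts every date after 29 Feb 2000 by a day in a typical epoch-to-date walk:

[code]
#include <stdio.h>

/* Leap test with only the 100-year exception -- the missing piece
   is "|| year % 400 == 0". */
int is_leap_incomplete(int year)
{
    return year % 4 == 0 && year % 100 != 0;
}

/* Minimal days-since-epoch to (year, day-of-year) conversion of the
   sort a "time" program needs. */
void days_to_date(long days_since_1970, int *y, int *doy)
{
    int year = 1970;
    while (days_since_1970 >= (is_leap_incomplete(year) ? 366 : 365)) {
        days_since_1970 -= is_leap_incomplete(year) ? 366 : 365;
        year++;
    }
    *y = year;
    *doy = (int)days_since_1970 + 1;   /* 1-based day of year */
}

int main(void)
{
    int y, doy;
    /* 11323 days after 1 Jan 1970 is 1 Jan 2001. */
    days_to_date(11323, &y, &doy);
    /* Prints "2001, day 2" because year 2000 is counted as 365 days;
       with the full Gregorian rule it would print "2001, day 1". */
    printf("%d, day %d\n", y, doy);
    return 0;
}
[/code]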

Wow. I had no idea the leap year problem was that complicated. Thanks for a very enlightening article, Dex.

How does our practice of adding leap seconds to our atomic clocks relate to these calendar issues? Is that a whole 'nother column?

Hmm … I’d never heard of that 4000 year rule. My programs may break down in the year 4000, but you know what? I really don’t care.

I think leap seconds are added to correct for irregularities in the Earth’s rotation, not its orbit. I’m sure someone will have more information about that.

Thanks, Greg, for that Rochkind hint. This is the kind of stuff I was looking for. I don’t know the book, but I assume it was published by a professional publishing house? Amazing what you can sneak by the proofreaders…

I know of software that simply assumed that every year was a leap year if the final digit of the year was divisible by four. Worked fine until 1990. People screw up; it happens.
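
In C, that test would look something like the sketch below (I don’t know what language the original was in). It happens to agree with the real rule for 1980–1989 and then wrongly flags 1990:

[code]
#include <stdio.h>

/* The "final digit divisible by four" test described above (a sketch). */
int is_leap_final_digit(int year)
{
    return (year % 10) % 4 == 0;   /* true for final digits 0, 4, 8 */
}

int main(void)
{
    /* Flags 1980, 1984, 1988 correctly, then wrongly flags 1990,
       whose final digit is 0. */
    for (int y = 1980; y <= 1990; y++)
        printf("%d: %d\n", y, is_leap_final_digit(y));
    return 0;
}
[/code]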

As to leap seconds, very few computers have clocks accurate enough for leap seconds to matter. Those that do are normally synced to an external time source and are equipped to run their clocks a teensy bit slow or fast so as to bring them gently into conformance.
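
On Unix-like systems, the “gently into conformance” part can be done with the adjtime() call; this is just an illustrative sketch (it needs appropriate privileges to take effect, and real time-sync daemons are considerably more careful):

[code]
#include <stdio.h>
#include <sys/time.h>

/* Ask the kernel to absorb a one-second correction gradually by
   running the clock slightly slow, rather than stepping it. */
int main(void)
{
    struct timeval delta = { -1, 0 };   /* slew the clock back one second */
    if (adjtime(&delta, NULL) != 0) {
        perror("adjtime");
        return 1;
    }
    puts("kernel will slew the clock back by one second");
    return 0;
}
[/code]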


John W. Kennedy
“Compact is becoming contract; man only earns and pays.”
– Charles Williams