We won’t be around to see the panic if I’ve found a loophole, but did the Y2K bug adjustment account for future century years that SHOULD be leap years by conventional wisdom, but WON’T be leap years by the book? Is this yet another little time bomb waiting to wreak havoc in the streets of our self-driving car (or flying car) future? :eek:
def leapyr(n):
    if n % 400 == 0:
        return True   # century years divisible by 400 (1600, 2000, 2400) are leap years
    if n % 100 == 0:
        return False  # other century years (1700, 1800, 1900, 2100) are not
    if n % 4 == 0:
        return True   # remaining years divisible by 4 are leap years
    else:
        return False
Probably.
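For what it’s worth, a quick sanity check of that function against the century years in question:

for year in (1900, 1996, 2000, 2100, 2400):
    print(year, leapyr(year))
# prints: 1900 False, 1996 True, 2000 True, 2100 False, 2400 True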
OK, but how about the subtle shift in time between Epochs?
Actually, the real date that all of civilization comes crashing down is January 19th, 2038 … a fairly good chunk of us should still be alive.
From Wikipedia: “The Year 2038 problem is an issue for computing and data storage situations in which time values are stored or calculated as a signed 32-bit integer, and this number is interpreted as the number of seconds since 00:00:00 UTC on 1 January 1970 (“the epoch”).[1] Such implementations cannot encode times after 03:14:07 UTC on 19 January 2038, a problem similar to but not entirely analogous to the “Y2K problem” (also known as the “Millennium Bug”), in which 2-digit values representing the number of years since 1900 could not encode the year 2000 or later. Most 32-bit Unix-like systems store and manipulate time in this “Unix time” format, so the year 2038 problem is sometimes referred to as the “Unix Millennium Bug” by association.”
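For anyone who wants to see that limit fall out of the arithmetic, a couple of lines of Python will do it (2**31 - 1 being the largest value a signed 32-bit integer can hold):

from datetime import datetime, timezone

limit = 2**31 - 1  # largest signed 32-bit value, in seconds past the epoch
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00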
Leap seconds are routinely added by the USNO without serious side effects.
You are making a fundamental error. You presume that there was one single adjustment, and you are thus quite reasonably wondering what was included in that repair.
That’s not how it worked. Every single program and device has its own way of dealing with dates, and so each one needed its own fix. For example, the Microsoft Excel program on my desktop needs to be able to sort a list of dates, and it has to know that 12/15/99 is older than 01/08/00. The mainframe that produces my bank statement each month also needs to sort such dates, but it will be done in a different way by a different program, and will therefore need a different Y2K fix.
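To make the sorting example concrete, here’s a rough sketch of the bug and of the “windowing” style of fix many shops used; the pivot year of 50 is my own illustrative choice, not anything from a real product:

dates = ["12/15/99", "01/08/00"]

def naive_key(d):
    # compares two-digit years directly, so 00 looks older than 99
    m, day, yy = d.split("/")
    return (int(yy), int(m), int(day))

def windowed_key(d, pivot=50):
    # a Y2K-style "windowing" fix: two-digit years below the pivot
    # are taken as 20xx, the rest as 19xx
    m, day, yy = d.split("/")
    yy = int(yy)
    year = 2000 + yy if yy < pivot else 1900 + yy
    return (year, int(m), int(day))

print(sorted(dates, key=naive_key))     # ['01/08/00', '12/15/99'] -- wrong order
print(sorted(dates, key=windowed_key))  # ['12/15/99', '01/08/00'] -- right order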
Side comment for anyone who doesn’t know what the OP means by “unLeap Year”: 1600, 2000, and 2400 are evenly divisible by 4, so you would think that they are leap years, but they are not. Computers need to know this, for at least two situations: (1) They should not accept “02/29/2000” as a valid date if someone types it in. (2) They need to ignore that date when counting the days between two dates. Both of these situations are different from the problem of sorting dates properly, and therefore it is quite possible that someone’s Y2K fix handled some of these but not others. I hope the casual reader is starting to get a sense of what a mess this was, from the programmer’s perspective.
No, UncleRojelio’s code sample is correct; 1600 and 2000 were both leap years, and 2400 will be if we’re on the same calendar system, because the number of centuries is still a multiple of 4. However, 1700, 1800, and 1900 were not, and 2100 won’t be.
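Python’s standard library agrees, for what it’s worth:

import calendar

for y in (1600, 1700, 1800, 1900, 2000, 2100, 2400):
    print(y, calendar.isleap(y))
# 1600 True, 1700 False, 1800 False, 1900 False, 2000 True, 2100 False, 2400 True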
“02/29/2000” is not only a valid date, but apparently a noteworthy date in this account of the Second Chechen War.
:smack: :smack: :smack: I knew that. But I was typing too fast to think about it. :smack: :smack: :smack:
Thanks.
A lot of computer programs use a simplified rule of “all years divisible by 4 are leap years”, omitting the subsequent two clauses, “but years divisible by 100 aren’t” and “but years divisible by 400 are”.
It just so happens that the errors caused by the two omitted clauses cancel out in 2000, the only centennial year so far where computerized algorithms are relevant.
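You can watch the cancellation happen: between 1901 and 2099 the only year divisible by 100 is 2000, which is also divisible by 400, so the simplified rule and the full rule agree until 2100:

import calendar

def simple(y):
    # the simplified rule: every fourth year is a leap year
    return y % 4 == 0

for y in (1999, 2000, 2004, 2100):
    print(y, simple(y), calendar.isleap(y))
# agreement everywhere until 2100, where simple() says True and the
# correct answer is False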
I admit I knowingly wrote code that simply divided by 4 because I did not expect my work to be in use in 2100. If your TV doesn’t record the February 29th, 2100 episode of “CSI: Luna City” you can blame me.
Bloatware. :rolleyes: Why not just
#define NOTLEAP(Y) ((Y) % ((Y) % 100 ? 4 : 400)) /* nonzero when Y is not a leap year */
I think that thinking an intensive global focus on an endemic date-computing error would exclude people who know the subtleties of the leap year calculation is a bit of a leap. Just sayin’.
(Our twins were almost born that Feb 29, but things settled down and they waited a few more days to invade planet Earth. I still think that would have been so damned cool…)
Do they count the actual seconds, including leap seconds when they’re added? Or do they assume each day is 60 × 60 × 24 = 86,400 seconds?
#define LEAP(Y) (!NOTLEAP(Y))
When added to what? Not sure what you’re asking here. But there have only been 26 leap seconds added since 1972, and there will probably be only a few more by 2038, so they’re not going to significantly affect the actual moment when the Unix time overflows 32 bits.
–Mark
You can even save a couple more characters and drive people even more nuts trying to figure out your code and what the hell it means with y%(y%25?4:16).
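It works because a multiple of 25 that’s divisible by 4 must be divisible by 100, and one that’s divisible by 16 must be divisible by 400, so the two golfed forms are equivalent. A brute-force check in Python (my own translation of the C, using the same convention that nonzero means “not a leap year”):

import calendar

def notleap_100(y):
    return y % (4 if y % 100 else 400)   # y%(y%100?4:400)

def notleap_25(y):
    return y % (4 if y % 25 else 16)     # y%(y%25?4:16)

for y in range(1, 4001):
    assert (not notleap_100(y)) == calendar.isleap(y)
    assert (not notleap_25(y)) == calendar.isleap(y)
print("both tricks match calendar.isleap for years 1-4000")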
dBase had a Y2K+1 bug: everything was OK until the point where there had been 101 years in the century (some routines were hard-coded to count from 1900, others from the last year ending in 00).
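I never saw the dBase internals, so take this as a guess at the general shape of that kind of bug, not actual dBase logic: two routines expanding the same two-digit year from different starting points agree right up until the calendar rolls into the new century:

def from_1900(yy):
    # convention 1: hard-coded to count from 1900
    return 1900 + yy

def from_last_00(yy, current_year):
    # convention 2: count from the most recent year ending in 00
    return (current_year // 100) * 100 + yy

for today in (1999, 2001):
    print(today, from_1900(85), from_last_00(85, today))
# in 1999 both expand "85" to 1985; by 2001 the second says 2085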
I hope they also fixed the Y10K bug, or all those systems are going to spit out the wrong results in 10,000 AD. They hard-coded that the year could only have 4 digits? YOU MANIACS! YOU BLEW IT UP! AH, DAMN YOU! GOD DAMN YOU ALL TO HELL!
Professor: “…and so our sun can be expected to burn out in about fifty million years.”
Dozing Student: “What? How many years?”
Prof: “About fifty million.”
Dozer: “Oh, thank god. I thought you said *fifteen* million.”
Yes … good point … a quickie search turned up that the timestamp just repeats the final second of the day a leap second is imposed. Then a table of every leap second is maintained and consulted whenever time calculations are required. This is a problem, since that table has to be updated every time there’s a leap second, and that isn’t always practical. This means your coffee-maker will detonate a few seconds before the end of the world. Maybe this is a feature: gives everyone a chance to kiss their sweet ass goodbye …
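As a sketch of what consulting that table looks like (the dates below are a partial list of real leap-second insertions, but the function itself is only an illustration; real systems pull the full table from IERS bulletins):

from datetime import datetime, timezone

# partial table of leap-second insertion dates (one second added at the
# end of each listed UTC day)
LEAP_SECONDS = [
    datetime(2008, 12, 31, tzinfo=timezone.utc),
    datetime(2012, 6, 30, tzinfo=timezone.utc),
    datetime(2015, 6, 30, tzinfo=timezone.utc),
    datetime(2016, 12, 31, tzinfo=timezone.utc),
]

def elapsed_seconds(t1, t2):
    # naive Unix-style difference, plus the leap seconds inserted in between
    naive = (t2 - t1).total_seconds()
    inserted = sum(1 for d in LEAP_SECONDS if t1 <= d < t2)
    return naive + inserted

a = datetime(2008, 1, 1, tzinfo=timezone.utc)
b = datetime(2017, 1, 1, tzinfo=timezone.utc)
print(elapsed_seconds(a, b) - (b - a).total_seconds())  # 4.0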