Have any Y2010 bugs been noted yet?

Meaning, instances of programmers or designers of display hardware assuming that the current year would always be of the form 200x?

After the Y2K issues, I hope nobody was dumb enough to write software that would be screwed up by a new decade.

The software we use at my job was internal and had never been fixed in advance. But since it was internal, it was a real quick, easy fix.

Some ATMs in Australia think that it’s the year 2016.

Some cellphones are dating texts as being from 2016, too.

The suspicion is that the bugs result from misinterpreting binary-coded decimal (BCD) numbers as hexadecimal numbers, which works for values below 10 but causes a stored 10 to be read as 16.
Edit: Oops, it seems that the Eftpos terminals are debit/credit card processors for cash registers.
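
Roughly, the suspected failure looks like this; a minimal C sketch, assuming packed-BCD year storage (what the terminal firmware actually does is anyone's guess):

```c
#include <stdio.h>

int main(void) {
    /* Two-digit year 10 (for 2010) stored as packed BCD:
       high nibble = tens digit, low nibble = ones digit. */
    unsigned char year_bcd = 0x10;

    /* Buggy read: treating the BCD byte as a plain binary number.
       For 2000-2009 the byte is 0x00-0x09, so both readings agree
       and the bug stays invisible until the decade rolls over. */
    int wrong = year_bcd;                                 /* 0x10 == 16 */

    /* Correct read: decode the two BCD nibbles. */
    int right = (year_bcd >> 4) * 10 + (year_bcd & 0x0F); /* 10 */

    printf("buggy:   20%02d\n", wrong);  /* prints 2016 */
    printf("correct: 20%02d\n", right);  /* prints 2010 */
    return 0;
}
```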

There were no 1990 bugs (outside a few rare sites with truly incompetent programmers), so a 2010 bug would be extremely unlikely.

You might want to consider what the Y2K bug actually meant.

When computers first got going, all storage space was at a premium. It was pretty much the norm for a very long time to store dates as a two-digit year, two-digit month, and two-digit day* in whatever format made the most sense in the country where the code was used. When calculating the length of some period, subtracting 63 from 70 gave exactly the same answer as subtracting 1963 from 1970, and the systems rolled along quite nicely into the early 1990s, while hundreds of thousands of storage bytes were saved. (Recall that the first “storage records” were often 80-character Hollerith cards and that even by 1980, a very large disk track held only 19,069 bytes, with only 572,070 bytes on a large cylinder costing over $10,000.)
Now, across those forty to fifty years, both hardware and software improved astronomically, so that the old limitations were no longer actual limits. On the other hand, it costs money to upgrade systems, so as long as they worked, a lot of folks did not want to invest in changes. (And it was not merely a matter of changing one system at a time: if the payroll system was changed to make the date of hire a four-digit year, then the interchanges of data with the benefits system had to be changed to pass that information back and forth or to convert the year from two to four digits or back. If the date on a check-writing system was changed, then the general ledger had to handle it along with the check reconciliation process and at least one of the systems (payroll, accounts payable, traffic, or whatever) that relied on the checks being written.)

When the year 2000 rolled up, all the systems that calculated dates using two-digit years were in danger, because subtracting 98 from 00 gave a very different answer from subtracting 1998 from 2000, especially when the result was used in a “less than” or “greater than” test.
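
To make the danger concrete, here is a toy C sketch of that comparison going wrong (hypothetical variable names; no real payroll system looked like this):

```c
#include <stdio.h>

int main(void) {
    /* Years stored the old way: two digits each. */
    int year_hired = 98;   /* 1998 */
    int year_now   = 0;    /* 2000 */

    int tenure_2digit = year_now - year_hired;  /* 0 - 98 = -98 */
    int tenure_4digit = 2000 - 1998;            /* 2 */

    printf("two-digit arithmetic gives %d years\n", tenure_2digit);
    printf("four-digit arithmetic gives %d years\n", tenure_4digit);

    /* Any "greater than" eligibility test now fails spectacularly: */
    if (tenure_2digit >= 2)
        printf("eligible for the benefit\n");
    else
        printf("DENIED: apparently hired in the future\n");
    return 0;
}
```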

Now, by 2000, most systems that did forecasting had already undergone the expense of being changed, because they would otherwise have begun failing years earlier. (Banks calculating 30-year mortgages probably fixed those systems before 1970, while insurance companies calculating their extended payouts had probably never been in jeopardy.) However, enough systems were still operating on the cheap that the computer industry, as a whole, faced a very sudden set of expenses to play catch-up. (And it was not usually the programmer who created the last-minute aspect of the problems: I was coding for four-digit years in 1980, the year I entered the field, but kept encountering management decisions to postpone the changes needed to connect with other systems. That is not to say that there were not some really stupid programmers out there, of course.)

Since the problem was a century problem, it would require an incredibly inept computer system to have a decade problem, (although I did have periodic contact with a manufacturing forecasting system written in the 1950s that used a 1-digit year into the 1990s, requiring a coding change two years before each decade and a reset the year after each decade for the entire life of the system. :rolleyes::D).

  * There were other ways to shorten the date: the Julian system used a three-digit day-of-year instead of the four-digit month and day, and the Falk calendar used a two-digit week and a one-digit day. On the other hand, to the extent that they continued to employ a two-digit year, they really did not improve the situation.
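
For what it's worth, that "Julian" YYDDD form is easy to show with the standard C library, which computes the day-of-year for you (I'm leaving the Falk format out, never having seen its exact spec):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Build 31 Dec 1999 and let mktime() fill in the day-of-year. */
    struct tm tm = {0};
    tm.tm_year = 1999 - 1900;
    tm.tm_mon  = 11;          /* December (months count from 0) */
    tm.tm_mday = 31;
    tm.tm_hour = 12;          /* midday, to sidestep DST edge cases */
    mktime(&tm);              /* normalizes and sets tm_yday */

    /* YYDDD: two-digit year plus three-digit day-of-year, one byte
       shorter than YYMMDD, but the year still wraps at the century. */
    printf("31 Dec 1999 -> %02d%03d\n", tm.tm_year % 100, tm.tm_yday + 1);
    /* prints 99365; the next day is 00001, with the same Y2K exposure */
    return 0;
}
```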

Rysto’s post brings up an interesting possibility. Most of the issues in the Y2K scenario arose from programmers coding their own dates into data systems and shortening the fields to save space. With the advent of Unix, and then microprocessors, a lot of computers rely on a date calculated from the number of seconds since some arbitrary earlier date specified in binary format. When the date overflows the (currently) 32-bit field, any Unix(-related) system will begin to crash. However, that date occurs in 2038, and I would guess that, with storage continuing to expand and hardware/operating systems moving to 64-bit fields, it will be addressed with more foresight and less alarm than the Y2K bug.
(Of course, given human nature, it probably will not.)
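
Both halves of that claim are easy to check on any Unix-like box; a quick sketch in standard C:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* The Unix clock: seconds elapsed since 1970-01-01 00:00:00 UTC. */
    time_t now = time(NULL);
    printf("seconds since the epoch: %lld\n", (long long)now);

    /* How wide is the counter here? 32 bits overflows in January 2038;
       64 bits pushes the problem out by billions of years. */
    printf("time_t on this system: %zu bits\n", sizeof(time_t) * 8);
    return 0;
}
```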

I will not be surprised to discover that some coder took some short cut in the Bank of Queensland’s software that triggered their error, but since only one bank and some cell phones have reported the trouble, it still looks pretty isolated to me.

I work with an embedded controller that has a realtime clock that will rollover in 2050. This will break all our code, but I figure that if any of these are still in use by 2050 I won’t care - they’ll be obsolete enough that they should be replaced, and I’ll probably be dead, anyway.

I sure hope you were joking. That’s EXACTLY the attitude that caused the Y2K problems.

Programmers in the 1950s and 1960s were sure that the systems would be replaced by 2000, but they weren’t, and were still in use!

Yeah, well tough.

It’s not mission-critical, and if it fails in 40 years, they can stick a crowbar in their wallet and buy a more modern system.

Well, hmmmm.

I still have several packages of “cost-cutter” toilet paper (one-ply: finger pops through) and 6 bags of cheap kitty litter I bought from the first “scare” (I guess me and my kitty thought there would be food), if anyone wants it.

Q

I ran into a 1990 bug.

I was working with a Wang VS system. The program in question stored its data in directories of the form XXDATA, where XX was the two-digit year. Note that there was no Y2K issue with this convention, as the directory names were not used for calculation or sorting. (Not saying it was a great convention, just that it worked for its purposes.)

However, in 1988, I noticed that, on the Wang VS operating system of that time, directories starting with the digit 9 were reserved for use by the OS. Thus, the directory 9XDATA would be invalid. I had to rewrite the code to use a different directory naming convention - not a difficult task.

A true 1990 bug.
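
The scheme is easy to reconstruct in a few lines of C (purely illustrative; the original Wang VS code would have looked nothing like this):

```c
#include <stdio.h>

int main(void) {
    /* Directory-per-year naming: XXDATA, where XX is the two-digit year. */
    for (int year = 1988; year <= 1991; year++) {
        char dir[16];
        snprintf(dir, sizeof dir, "%02dDATA", year % 100);
        /* On that era's Wang VS OS, names beginning with '9' were
           reserved for the operating system, as recounted above. */
        printf("%d -> %s%s\n", year, dir,
               dir[0] == '9' ? "   <- reserved name, invalid!" : "");
    }
    return 0;
}
```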

Of course, we just trade it for the Java Y292 million bug. …

It would have been the Y292 billion bug, except they wantonly went and made the date a count of milliseconds instead of seconds …

(Since Java went and changed the scale anyway, maybe they should have moved the epoch back to the Julian Day epoch date in 4713 BC, and been able to represent all reasonable historic dates as positive numbers … and kept lots of programmers busy providing hundreds of historic pre-Gregorian-calendar implementations for various medieval kingdoms and so on.)
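
The arithmetic behind those two numbers, as a back-of-the-envelope check in C (using the average Gregorian year; the exact figure depends on your choice of year length):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const double SECS_PER_YEAR = 31556952.0;  /* average Gregorian year */
    const double max64 = (double)INT64_MAX;   /* 9,223,372,036,854,775,807 */

    /* Java's clock is a signed 64-bit count of *milliseconds*. */
    double years_ms = max64 / 1000.0 / SECS_PER_YEAR;

    /* The same counter in whole seconds would last 1000 times longer. */
    double years_s = max64 / SECS_PER_YEAR;

    printf("64-bit milliseconds overflow after ~%.0f million years\n",
           years_ms / 1e6);   /* ~292 million */
    printf("64-bit seconds would last        ~%.0f billion years\n",
           years_s / 1e9);    /* ~292 billion */
    return 0;
}
```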

I have encountered two dates that included “1910”.

Most Unix programmers talk about a Y2K38 bug. Unix tracks the number of seconds since January 1, 1970. On January 19, 2038, the number of seconds will be too big to fit into a 32-bit register, causing planes to fall from the sky, the banking system to collapse, and the wrong date to be shown on your cellphone.

There is no easy fix for this. Although most computers will be 64-bit, the time_t routines have to maintain 32-bit capability. We’re all doomed!

We programmers are hoping that the meteor will crash into the Earth in 2029, so that we don’t have to handle this problem.

See Year 2038 problem - Wikipedia for more information.
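
The moment of the wrap is easy to compute; a small C sketch, assuming the classic signed 32-bit counter:

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    /* The last second a signed 32-bit time_t can represent. */
    int32_t last = INT32_MAX;   /* 2,147,483,647 seconds */
    time_t t = (time_t)last;    /* widen before handing to the time routines */
    printf("last 32-bit second: %s", asctime(gmtime(&t)));
    /* -> Tue Jan 19 03:14:07 2038 (UTC) */

    /* One tick later the counter wraps to INT32_MIN, which the date
       routines read as 20:45:52 on 13 Dec 1901 (on systems that
       accept pre-1970 dates at all). */
    t = (time_t)INT32_MIN;
    printf("after the wrap:     %s", asctime(gmtime(&t)));
    return 0;
}
```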

I was working with the internals of Intuit QuickBooks in the late 90s (Version 5, IIRC) and dates were stored as two-digit offsets from (again IIRC) 1928.

Why 1928? Who knows. Supposedly it was meant to allow users the chance to import old records, and still leave enough time for new records.

I don’t know if that internal data structure was ever changed. It was clearly sufficient to weather Y2K, but not to outlast what Intuit was then seeing as the lifetime of the product itself. The attitude was: we will have plenty of time to fix it should it ever become a problem.

I have no idea if Quicken worked the same way; they were separate code bases at the time.

But the broader point is that any given product might use a two-digit offset from some arbitrary 20th-century year even today, and since that date is arbitrary, it could have just gone haywire.

And yes, changing the definition of where the offset is from could have been a poor man’s Y2K fix in many cases (although, as I mentioned, it was pre-existing in QuickBooks).
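
A hypothetical reconstruction of such an offset scheme in C (the 1928 base is from my memory, as I said, and the names here are mine):

```c
#include <stdio.h>

/* Hypothetical reconstruction of a "two-digit offset" date scheme,
   using the 1928 base year described above. */
#define BASE_YEAR 1928

int encode_year(int year)   { return year - BASE_YEAR; }
int decode_year(int offset) { return BASE_YEAR + offset; }

int main(void) {
    printf("1999 -> offset %2d\n", encode_year(1999)); /* 71 */
    printf("2000 -> offset %2d\n", encode_year(2000)); /* 72: Y2K is a non-event */
    printf("offset 99 -> year %d\n", decode_year(99)); /* 2027: the real wall */
    printf("2028 -> offset %d\n", encode_year(2028));  /* 100: no longer two digits */
    return 0;
}
```

The point being that the rollover lands on whatever arbitrary year the base implies, not on a round number anybody is watching for.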

In addition to the previously mentioned BCD/hex bugs where the year jumped to 2016 in some places, there was also a bug in the SpamAssassin rules. Starting in 2010, it mistakenly scored all email as if the header date was far in the future. The scoring system is used to determine whether or not something’s spam, so all emails were getting a bunch of penalty points applied, greatly increasing the chances of false positives.
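
The rule in question (FH_DATE_PAST_20XX) reportedly matched Date headers against a pattern along the lines of 20[1-9][0-9], meant to catch far-future years; the exact pattern here is my assumption. A POSIX-regex sketch in C:

```c
#include <stdio.h>
#include <regex.h>

int main(void) {
    /* Approximation of the buggy rule: flag any Date header whose year
       matches 2010-2099 as "grossly in the future". Fine through 2009;
       in 2010 it starts penalizing every legitimate message. */
    regex_t re;
    regcomp(&re, "20[1-9][0-9]", REG_EXTENDED | REG_NOSUB);

    const char *headers[] = {
        "Date: Thu, 31 Dec 2009 23:59:59 +0000",
        "Date: Fri, 01 Jan 2010 00:00:01 +0000",
    };
    for (int i = 0; i < 2; i++)
        printf("%s -> %s\n", headers[i],
               regexec(&re, headers[i], 0, NULL, 0) == 0
                   ? "penalized as far-future" : "passes");

    regfree(&re);
    return 0;
}
```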

I can’t speak for other systems, but on 64-bit FreeBSD, time_t is 64 bits wide. My impression was that this is true for all 64-bit Unices.

1910 is my mother’s birth year. Just what are you insinuating? Don’t be bad mouthin my mother; I ain’t goin for it.

>The suspicion is that the bugs result from misinterpreting binary-coded decimal (BCD) numbers as hexadecimal numbers, which works for values below 10 but causes a stored 10 to be read as 16.

This is most likely what is happening. It’s a limited case, unlike Y2K, where the problem was more common because you could save some memory by using a two-digit date instead of a four-digit date.

We fixed the 2038 bug when we fixed the Y2K bug back in the ’90s in Digital Unix. I think most versions of Unix fixed it back then. The problem (as mentioned in the Wiki page) is embedded systems that haven’t been updated in years.