Y1.8K.

I hope I am not the only person on these boards who remembers Y2K. It wasn’t that long ago, was it?

Anyway, in case you forgot: as a shortcut, in the 20th century they started representing years as two digits instead of four in computer files. It worked fine for a while, until the dawn of the 21st century, for obvious reasons.

My question is, why didn’t this problem appear long before this, for people born in the 19th century?

Call it Y1.8K.

Take my grandfather. He was born in 1896. And he died in 1994. According to Y1.8K, he was -2 when he died? No?

Does anyone else know what I am talking about? Why wasn’t this a problem? Just recently, the last person born in the 19th century died. So it must have been rather commonplace, I would think.

Thank you to all who reply:)

The issue was not so much the encoding of birthdates, which would clearly need to handle a greater-than-one-century range, but rather elapsed-time calculations.
So, if your SDMB membership was bought in ’99 and is supposed to expire one year later, and the current date is ’00’, then an “is it expired yet?” test like (current year - purchase year) >= 1 works out to (00 - 99) = -99 and fails when it should pass, whereas with the century carried it would be (100 - 99) = 1 and pass correctly.
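
Just to make that concrete, here’s a toy C fragment (invented numbers, nobody’s real code) of the kind of check that goes sideways:

    #include <stdio.h>

    int main(void) {
        /* Two-digit years, as stored to save space: membership bought in
           1999, checked on some day in 2000. */
        int purchase_yy = 99;
        int current_yy  = 0;                  /* "00" */
        if (current_yy - purchase_yy >= 1)    /* -99 >= 1 is false */
            printf("membership expired\n");
        else
            printf("membership still valid\n");   /* what actually prints */
        return 0;
    }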

Birth dates would most likely be encoded with the full year.

It’s because it’s a problem with computers representing dates. Humans will understand from context what dates might mean. If, in 1904, you are looking at a written record for a person with year of birth represented as “02”, you are likely to know whether the person is a 2-year-old toddler or a 102-year-old great-grandparent. Computers can’t make guesses that way: if a computer system is running a pension scheme, and a pensioner appears to have an age of 2, or worse, a negative age, they might say in effect, “This person is too young, stop his pension.”
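
To put numbers on that, here is a made-up C fragment of the kind of rule a pension system might apply (the age cutoff is invented for illustration):

    #include <stdio.h>

    int main(void) {
        /* Two-digit years: pensioner born in 1896, record processed in 2000. */
        int birth_yy   = 96;
        int current_yy = 0;
        int age = current_yy - birth_yy;      /* -96: nonsense, but the machine won't blink */
        if (age < 60)
            printf("Too young for a pension (age %d) - stop payments\n", age);
        else
            printf("Pension approved (age %d)\n", age);
        return 0;
    }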

Can’t find a cite, but I vaguely remember a report of a woman getting a letter asking why she hadn’t enrolled for kindergarten. It turned out she was 105-years-old.

It did show up occasionally. I remember back in the 90s, when talk of Y2K was becoming widespread, a story of a 104-year-old woman who got a letter reminding her parents to sign her up for kindergarten. But by the time that computers were ubiquitous, people born before 1900 were uncommon, and so it didn’t happen much.

And shouldn’t you be talking about Y1.9K, not Y1.8K?

EDIT: I guess that old lady was a ninja, too. Pretty impressive, at her age.

With respect to the Swedish social security numbers, I know that the quick fix was to change a - sign into a +. Once your age exceeded 100 years, your SSN format would (or, rather, should!) change from xxyyzz-wwww to xxyyzz+wwww. Not all instances were automatically fixed, though – things like what whitetho mentions did happen sometimes!
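
For the curious, the separator rule is roughly this (a C sketch with invented digits, not the actual population-register logic):

    #include <stdio.h>

    int main(void) {
        /* Swedish convention: two-digit birth year, then '-' if the person is
           under 100 and '+' once they pass 100.  The digits below are made up. */
        int birth_year   = 1896;
        int current_year = 1999;
        char sep = (current_year - birth_year >= 100) ? '+' : '-';
        printf("%02d%s%c%s\n", birth_year % 100, "0412", sep, "1234");
        /* prints 960412+1234 here; it would have been 960412-1234
           before the person turned 100 */
        return 0;
    }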

It was never the problem many feared it would be. That’s not to say it could not have been much worse.

Birth records were manageable, as has been noted, at least by sentient beings. I have never heard of a computer that was self-aware.

The fears were real, certainly:

Russian defensive systems? Say there are a whole lot of missiles somewhere, all fitted with powerful warheads, ready for activation following command & control protocols. But somewhere along the line a bunch of not-so-bright people added some computer code, maybe for the purpose of enforcing a maintenance regimen, okay? Things like that typically also have “fail safe” mechanisms. So here are these missiles, and their automatic C&C system detects a failure condition (like maybe the missiles haven’t been maintained in an awfully long time). The problem is that “failure” can be handled in many ways. For example, the things simply power themselves down, or make themselves impossible to launch; those would have been proper responses. But suppose these missiles had been emplaced by a different government regime, such as the Soviet Union… What if the Soviet response to a failure condition had been to interpret it as proof of a past attack by NATO… And what if the coded response to a failure was to immediately launch all missiles toward predetermined targets? That was a very real fear among a lot of our DoD “smart people”.

Our at least fairly well-developed “limited ballistic missile defense”? See above, but change the actors around appropriately.

I was a systems analyst at the time, and there was a very real fear that an awful lot of things like computer code revision control systems would fail with underflow/overflow/division-by-zero faults if a date field stopped progressing in the expected, monotonically increasing direction (“99” suddenly becoming “00” instead of “100”). In such a case chaos might ensue, at least for a while.
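
That failure mode is easy to reproduce; a toy C fragment (imaginary bookkeeping, not any real tool):

    #include <stdio.h>

    int main(void) {
        /* Imaginary revision-control bookkeeping with two-digit years:
           last baseline taken in 1999, report run in 2000. */
        int baseline_yy = 99;
        int current_yy  = 0;                      /* "00" */
        int elapsed = current_yy - baseline_yy;   /* -99, the "can't happen" case */
        int revisions = 250;
        /* Anything that assumed elapsed > 0 now divides by a negative number,
           or by zero if both fields happen to read "00". */
        printf("revisions per year: %d\n", revisions / elapsed);
        return 0;
    }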

In June, 1998, my government bosses determined that, if the ~500 computers I managed at the time were not all either patched to address these kinds of problems or, if they could not be patched, taken out of service before October 1, 1998, the large battle lab where I worked would be taken completely off line, the power cut, and the doors locked. It was a rough few months!

Y2K could have been much worse, but many, many programmers worked long hours for months & years before that to fix anticipated problems. (And then afterwards, got reviled for ‘exaggerating the potential problem’.)

Most of the problems that weren’t caught & fixed turned out to be pretty minor, and people had heard enough about it that they just dealt with them. (People forewarned are surprisingly flexible & innovative.)

I worked on Y2K myself, for a major grocery wholesaler and the state hospital system. All the major problems (the systems will mess up and open the cell doors and allow the psychopathic killers loose; the messed-up dates will issue release papers for inmates with over-100-year sentences (and guards will be stupid enough to obey those), etc.) were fixed. Of the programs I worked on, only one had a minor problem. The next workday, I got a phone call from one institution reporting that the heading on a report of prescription medication dispensed the previous day said “Medication Dispensed on Jan 1, 19100”. They stated that the content of the report was correct, but they were laughing at the heading, and asked how much overtime they would get paid for working from Dec 31, 1999 through Jan 1, 19100. But none of the couple dozen other institutions had noticed the error, or thought it even worth reporting. I fixed it in under an hour, and sent a note to all of them offering to re-run the corrected report for them – none thought it worth bothering to re-run.
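
I have no idea what that particular program looked like, but the classic way to get “19100” is the “years since 1900” convention baked into a lot of date routines. A made-up C illustration:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
        /* tm_year holds years since 1900, so in 2000 it held 100.
           The classic mistake was gluing a literal "19" in front of it: */
        printf("Medication Dispensed on Jan 1, 19%d\n", t->tm_year);
        /* The one-line fix: */
        printf("Medication Dispensed on Jan 1, %d\n", t->tm_year + 1900);
        return 0;
    }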

Most of the other programmers reported similar things – what was missed were minor problems that people were capable of working around.

So a lot of us programmer types missed out on century-end parties, or had to keep a close eye on our pagers (remember those?) that night.

PS. We had started dealing with this problem years before. I remember a financial program that did 20-year amortizations that started failing in Jan, 1980, and had to be fixed.

Yes, there were real problems to be fixed, and yes, computer engineers across the globe did a good job of addressing those problems, but it’s also true that it was seriously over-hyped, and that most of the scenarios the media worried about were never remotely in the realm of possibility. To see the media reporting on it, every single device with an embedded microprocessor was suddenly going to go full-on Skynet kill-all-humans at the stroke of midnight. In reality, most of those embedded systems didn’t even know the date at all, or if they knew it, never used it, or if they used it, used it only for completely unimportant purposes, and even a lot of things that did use dates, used them in a completely different format that wasn’t subject to the Y2K bug at all (but might be subject to the Y2038 bug).

When people born in 1894 were filling in their birthdate, the form was manually read by a person with common sense, so there was no confusion. Now that both manual reading and common sense have pretty much gone out the window, it appears to observers in this century that there was a problem that needed fixing.

I studied programming way back in the seventies. Even then, we were taught not to use two digits for a year. We were taught to use Julian Dates to represent times, dates, and durations: the number of days since an arbitrary date in pre-history. This makes any sort of Y2K issue irrelevant. I never had a single worry about Y2K because I assumed all programmers were taught this.
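
For reference, the standard arithmetic for a Julian Day Number looks like this (the textbook Gregorian formula, not any particular shop’s code):

    #include <stdio.h>

    /* Days since 1 January 4713 BC (the Julian Day Number), via the standard
       Gregorian-calendar formula.  Subtracting two JDNs gives a duration in
       days, with no century boundary anywhere in sight. */
    long jdn(int year, int month, int day) {
        long a = (14 - month) / 12;
        long y = year + 4800 - a;
        long m = month + 12 * a - 3;
        return day + (153 * m + 2) / 5 + 365 * y + y / 4 - y / 100 + y / 400 - 32045;
    }

    int main(void) {
        printf("days from 1999-12-31 to 2000-01-01: %ld\n",
               jdn(2000, 1, 1) - jdn(1999, 12, 31));   /* 1, no drama */
        return 0;
    }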

There were lots of programs that used 2-digit dates. Programmers as a rule take every shortcut available. Code written in the 70s and 80s was never expected to still be in use in 2000. We fixed a whole bunch of bad code before the deadline; that’s why nothing bad happened. Nearly all important software packages and operating systems made sure they were ready. Yes, the danger was overblown, but there was plenty of bad code out there.

In other words, humans are smarter than computers. Good thing, cause there were no computers in '94.

I had a friend that worked for a large PBX manufacturer. I was talking to him in the late 1990’s and he mentioned that he was in the middle of fixing their Y2K issues. I asked him how that was possible - the software had been written recently - how could they have let that happen? He said that it was deliberate. They had done a cost analysis and determined that the amount of data that they would save would reduce the cost of their storage by enough to more than pay for the re-write and testing later. He mentioned that their system stored and time-stamped every event in the system - every keypress, every off-hook, every flash, everything got logged. So, since hard drive costs were coming down so fast, it made sense to save a few bytes for every event, and upgrade the storage and software later.

Yes, the ‘shortcut’ of 2-digit years was not done by programmers, but by system designers. When writing programs, it’s just as easy to use 4-digit dates as 2 – still just as easy to do ages & date calculations. Still just one statement.

The point of 2-digit years was to reduce data storage space. Dates make up a huge percentage of data. Every purchase, every transport ticket, every bank deposit, every payroll check, nearly every transaction has at least one date to be stored. And storage on mainframe disks (or tape) was way more expensive than current storage.

Even more limited was main memory storage. You had to hold transaction data in main memory while it was being processed, so any reduction in size was worthwhile. (When I started programming, our mainframe had 6 megabytes of memory. Now most watches have more. And that 6 MB kept 100 programmers busy, and did all the processing for a county that was bigger than 10 US states and several European countries.) Memory was really precious back then, and great efforts were made to reduce memory use.

Good news, though: in 20 years we might be having these same conversations about the “Year 2038 Problem.”

In the 70s, when I was just starting my career and writing COBOL and RPG II on punch cards, I mentioned to an older programmer that the 2-digit years would be a problem when we got to 2000. His response was that the current programs would no longer be in use and computers would all be like HAL in 2001: A Space Odyssey.

I wasn’t convinced.

This is probably going to be a bigger issue with embedded systems than with desktops or mainframes.
Unix/Linux has been Y2038-compliant for many years (at least on 64-bit systems); only systems that can’t be easily updated are still vulnerable.
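
The bug itself is just a 32-bit counter running out of room. A quick C sketch of where the boundary falls (on a modern 64-bit time_t this prints the right answer and keeps going):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* A 32-bit signed time_t counts seconds since 1970-01-01 UTC and tops
           out at 2^31 - 1 = 2147483647, i.e. 2038-01-19 03:14:07 UTC. */
        time_t last_ok = (time_t)INT32_MAX;
        printf("last 32-bit second:  %s", ctime(&last_ok));

        /* On a system where time_t is still 32 bits, the next second wraps to
           -2147483648 and dates jump back to 1901; with 64-bit time_t it just
           keeps counting. */
        time_t next = last_ok + 1;
        printf("one second later:    %s", ctime(&next));
        printf("sizeof(time_t) here: %zu bytes\n", sizeof(time_t));
        return 0;
    }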

Are you sure you want them to be like HAL?

As mentioned above, the Year 2038 problem has been addressed for most major systems already. It got a lot of attention during Y2K and plans were put in place to fix it back then.