Y2K

What the Y2K “problem” really created was a huge run-up of the stock market for dot-coms. It was a perfect storm:

  1. The internet came into its own, reaching critical mass.
  2. Windows 95 enabled browsers to work, which enabled the internet to come into its own.
  3. The Y2K “problem” made it easier for more people to justify spending money (software, hardware, internet routers, development, … whatever). Many investments were justified on the issue of Y2K and were rarely questioned.
  4. Lots of buying made for growing companies.
  5. Growing companies made for growth in the stock market and more fortunes.
  6. More fortunes led to more spending.
  7. Which led to …

Internet + Windows 95 + Y2k = Perfect Storm

The problem with this is, as I and several others have already attested: a lot of programs would have had terrible Y2K failures, many of which would have kicked in well before the actual date. We know this because we personally spent many, many hours fixing them.

Y2k was a real problem for some (some would have lost far, far more than their Y2k fixit bills cost), but was hyped WAY beyond what it ever should have been, and to the point where those who knew better were unable to convince the decision makers. I worked from 1998 to 2006 in embedded electronics, so I’m in a position to know, but don’t take my word for it – figure it out yourself.

Consider this: virtually any system that MIGHT have had Y2k problems could have been tested beforehand by the simple expedient of changing the computer’s on-board clock to some date after 2000. It takes 10 seconds plus a reboot, and can be done on representative systems rather than all of them. If a system DID exhibit a problem, THEN you could start throwing money at it.
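The clock-forward test described above can be sketched in miniature. The two-digit-year function here is hypothetical, standing in for whatever date logic the system under test actually runs:

```python
from datetime import date

def days_since_last_payment(last_payment_yy: int, today: date) -> int:
    """Two-digit-year arithmetic typical of legacy code (hypothetical)."""
    return (today.year % 100 - last_payment_yy) * 365

# "Set the clock forward" on a representative test system and compare:
print(days_since_last_payment(99, date(1999, 12, 31)))  # 0 -- looks fine
print(days_since_last_payment(99, date(2000, 1, 1)))    # -36135 -- bug exposed
```

Any program that passes the rolled-forward clock unchanged goes straight off the worry list; only the ones that misbehave earn a fix.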

Furthermore, the only programs that could POSSIBLY have a problem with Y2k are the ones that actually cared what the date was. This means that, say, 99%+ of all programs were immediately out of the woods. Any company spending any time or money on programs of this nature for Y2k compliance would have been better off buying $600 bolts like the Pentagon. It would both have cost less and given them more (the bolt).

Out of what was left, only programs that actually checked that the date was within certain bounds, or that actually exhibited bad behavior, really needed to be changed. If your library book was stamped as due back in 1901 instead of 2001 you might chuckle, but you wouldn’t really care, and neither would the library staff. This drops at least another 50% of the field. These programs may well have needed to be changed someday, but the problem was entirely cosmetic, and as any programmer knows, there are many more serious problems to worry about than cosmetic ones. Money and time here could have been better spent fixing serious bugs, moving on to the cosmetic ones when time and money were available, rather than taking out a loan expressly to pay for a Y2k compliance team.

Out of the remaining programs that could be affected, and seriously (by either stopping because of the out-of-bounds check or getting seriously wrong results in, say, an interest accounting system), some 99%+ of THESE were money-oriented. Failure could cause serious loss of income or major problems in allowing work to continue. These problems would need to be fixed, and before Y2k (or you lose tons of money), but were not life-threatening.

Only an extremely small fraction of programs control dangerous machinery and are date-controlled. Seriously, why would your car’s ignition system care what the date was? Why would an airplane’s flight systems care? What possible use would that data have? And don’t say “on-board clock” because that would be controlled by a completely separate clock chip – they wouldn’t even be on the same circuit-board.

Any life-and-death control system would have had much more serious scrutiny, and would never have had the Y2k bug in the first place. They certainly wouldn’t have run on Windows! They generally use a proprietary RTOS… one where they know what each and every line of code is doing and why it is doing it.

The Y2k hucksters that I recall hearing on TV were saying things like “bridges will fall” (seriously!) and that all computers everywhere, including your watch, your microwave, and your house’s heating systems would stop completely if not actually blow up. I’ll admit that that kind of announcement was beyond the norm, but it IS what sticks out most in my memory.

Most people touting Y2k changes said at least that all financial institutions would fail, which was clearly false, even though the bug would absolutely have caused a lot of confusion and cost quite a bit of money to resolve. At most, if NO spending on Y2k compliance had been done, all computer-aided business would have ground to a halt… for the space of a few weeks. A major catastrophe, to be sure, but not the end of the world. People have survived without computers for much longer stretches in the past (i.e., the entire stretch of time pre-ENIAC).

The Y2k problem was one of the biggest scams of the age. It caused more monetary damage (in the form of the fixit bills that were unnecessary) than probably any virus or maybe even all of them together (I’d need to research that). There WERE Y2k problems and they DID need to be fixed. Not all of the money spent on Y2k compliance was wasted, not by any stretch (indeed, the amount spent may well have been less than the cost would have been if nothing had been done). But the majority of companies didn’t even need to think about it, let alone spend the thousands that they did.

To say that it’s better to err on the side of safety is a good layman’s response. If you don’t actually know the technology and have to make the decision in a vacuum, it’s probably even the best response. But the people who were in a position to understand should have been making the decisions, and those people should have known better. Hindsight has nothing to do with it.

The only reason that the Y2k fixit industry even existed is because some unscrupulous people hyped the problem too far, and executives (and shareholders) who were scared by the hype told their people that they had to fix the problems and wouldn’t take “there is no problem” as an answer.

The Y2k problem was an extremely big and costly testament to the inefficiency of top-down management and bureaucracy.

It was also a testament to the power and misuse of the media, who took “there’s a potential problem” and jumped on it as something that would help them get ratings.

Unlike a virus or Ponzi scheme, there was no one person who could stand and claim credit or make money from this scam. Instead, there were many people who stood to take a small piece of the pie. Perhaps that’s why it was able to cause so much damage.

Never mind storage costs – a lot of mainframe code had been created in the 1960s when working files sometimes, and raw input almost always, were in punched cards. Punched cards had, depending on the type, absolute limits of 80, 90, or 96 characters of data (some tear-off-and-return cards even less) – and many had already been badly squeezed in the US by the addition of ZIP codes. (The two-character state abbreviations that we use today were authorized by the Post Office as a partial tradeoff for that.) Before the year-2000 problem, in fact, I remember the year-1970 problem.

Actually, the first hit of the Y2K problem that I’m aware of was on August 16, 1972, when IBM’s OS/360 started discarding files that were being labeled “Retain for 9999 days.” (Fixing it involved double special casing: a retention period of exactly 9999 days was changed to be interpreted as retaining until December 31, 1999, and retaining until December 31, 1999 was changed to mean retaining forever. This meant that some files would be marked “retain forever” when they shouldn’t be, but that was accepted as a reasonable trade-off.)
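The double special case reads more clearly as code. This is a sketch of the logic as described above, not IBM’s actual OS/360 implementation:

```python
from datetime import date, timedelta

FOREVER = date.max  # sentinel meaning "never expire"

def effective_expiration(created: date, retain_days: int) -> date:
    # Special case 1: a retention period of exactly 9999 days is
    # reinterpreted as "retain until December 31, 1999".
    if retain_days == 9999:
        expires = date(1999, 12, 31)
    else:
        expires = created + timedelta(days=retain_days)
    # Special case 2: an expiration of December 31, 1999 is reinterpreted
    # as "retain forever" -- so a 9999-day retention chains through both
    # cases and ends up kept forever, the accepted trade-off noted above.
    if expires == date(1999, 12, 31):
        return FOREVER
    return expires
```

Note how the two cases compose: a file labeled “retain 9999 days” becomes “retain until 1999-12-31”, which in turn becomes “retain forever”, exactly the over-retention the fix accepted.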

What a lot of people seem to be missing is that this isn’t an either-or scenario.

Clearly, if no money had been spent by anyone in the time from, say, 1995 to 2000 on fixing Y2k bugs, major catastrophe would have resulted. Not life-and-death catastrophe, but economic catastrophe.

Also, clearly, some companies did NOT need to spend on Y2k compliance, and yet they did. Many programs never looked at the date, and yet executives and users clamored for “Y2K compliance” stickers regardless.

Money spent in the first case was well-spent, and that in the second was not. My own assertion is that while the former case was well covered (thus, the date passed uneventfully), the latter was overspent considerably, to the tune of billions of dollars that could have been spent on making the programs themselves better.

[Cleans up desk.] Please don’t write stuff like that while I’m drinking Coke. Thanks!

At my last job, I fixed a Y2K bug in a web application in early '99. The original programmer had left the company not long before … I verified it was a needed fix–the program would have failed starting around October (it was also looking at near-future dates). The program was originally designed in … 1998. :smack:

In the column, Cecil holds Italy up as a counter-example to say that it’s “not credible” to figure that the more technologically developed countries had more need to worry about Y2K. To me, this is itself a not credible response.

The critical piece of data would not be how many computers you had, but where your software was written. More precisely, where the software was written that 1) was still in service in 1999 and 2) had significant portions written decades before. So we would have to know whether Italy had a significant software industry generating large-scale systems in the '70s and '80s. Very different from “having plenty of computers”.

I fixed a Y2K bug that, while it was only a little one, and partly caused by software that was near obsolete anyway, was business-critical to the company I worked for (their stock and sales system would have completely stopped working).

It was an incompatibility between dBase and Microsoft’s interpretation of the xBase standard. The second byte in a dBase file header contains a number representing the number of years since the last turn of the century. Microsoft’s dBase driver interprets it as the number of years since 1900, so it carries on counting 100, 101, 102, etc., whereas dBase wraps back to 0, 1, 2, etc.

dBase III (understandably) chokes to death on a file written 101 years since the turn of the last century. I had to write a low-level patch to repair the files any time an MS driver had touched them.
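A minimal sketch of such a patch, assuming the byte offset and semantics described above (an illustration, not a full xBase reader):

```python
def patch_dbase_year(header: bytearray) -> bytearray:
    """Normalize byte 1 of a dBase-style header (year of last update).

    A Microsoft-style writer stores years since 1900 (100, 101, ...),
    while dBase III expects the value to wrap back to 0 at the century.
    Offsets and semantics follow the description above; treat this as a
    hypothetical sketch.
    """
    if header[1] >= 100:
        header[1] -= 100  # 101 (i.e., 2001) becomes 1, which dBase accepts
    return header

hdr = bytearray([0x03, 101, 6, 15])  # file type, year=2001, month, day
patch_dbase_year(hdr)
print(hdr[1])  # 1
```

Running something like this over every file an MS driver had touched restores a value dBase III can digest without choking.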

So for me, at least, Y2K wasn’t a scam. It wasn’t the end of the world, but there was real work to do because of it.

The only scam involved in Y2K was the mass hysteria. There was a lot of work that absolutely needed to be done, and a lot of work was done just to be sure, but the public sense of impending doom just didn’t give enough credit to the people who design and maintain computer systems. Many, if not most, Y2K problems were fixed long before the media even suggested they might be a problem.

A college buddy’s dad did Y2K work (along with other systems upgrades) on a local nuclear power plant in 1992, and yet the local paper didn’t mention this fact at all when it ran a story in 1999 about the dangers of failure at the plant because of all those computer systems installed in the 1960s, when no one had thought about Y2K. The facts were just not all that important to the reporter.

(As a sideline, what scared me about the friend’s dad’s story was the fact that the other systems upgrades included replacing paper tape inputs with magnetic media. I can imagine that a nuclear power plant doesn’t want to be a first adopter, but come on, guys!)

As a fellow FORTRAN programmer, I’m going to suggest that you need to talk to a COBOL programmer. Seriously, dude. Those were the fellows that built most of the business-oriented programs back then, and most of their dates were stored in the form YYMMDD or YYYYMMDD.

And I should point out that, as a C/Visual Basic programmer back then, I generated more than my fair share of Y2K bugs. My standing joke back then was that on 1/1/2000 the media folks were going to announce that the Y2K bug turned out to be minimal “except for [holding me by the scruff of the neck and raising my face to the camera] everything this fellow ever wrote”.

On the systems I worked on (mostly COBOL and EZTrieve (not sure if that’s the right spelling)), dates were stored in Julian format, but were converted to YY(YY)MMDD for actual processing.
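That YYDDD “Julian” layout can be sketched like so; the 1900 pivot baked into the decoder is exactly where the Y2K trouble hides. The format details here follow the two-digit-year convention described in this thread, not any particular system:

```python
from datetime import date, timedelta

def from_julian_2digit(yyddd: int) -> date:
    """Decode a YYDDD 'Julian' date the legacy way:
    2-digit year plus day-of-year, with 1900 hard-coded (hypothetical)."""
    yy, ddd = divmod(yyddd, 1000)
    return date(1900 + yy, 1, 1) + timedelta(days=ddd - 1)

print(from_julian_2digit(99365))  # 1999-12-31 -- fine
print(from_julian_2digit(1))      # day 1 of year "00" -> 1900-01-01, not 2000
```

Every conversion routine like this one had to grow a windowing rule (or a four-digit year) before the rollover.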

I remember watching my dad write a check when I was a kid. This was, perhaps, 1966 or so. I noted that the check had the date area pre-printed like this: _______, 19. Dad wrote in the date, appending “66” after the century.

I asked him what would happen when the year 2000 came around and all these checks wouldn’t fit any more. He said that there would be plenty of time to fix the checks, as it wouldn’t happen for another 34 years.

Come 11:59 p.m. on December 31, 1999, there I am, at 42, in the computer room, making sure the transition happens without incident, thinking, “34 years is plenty of time. Right!”


As a person who contributed to the problem during the ’80s and ’90s and fixed the problems before and after Y2k, I can tell you that it was a very real issue. I worked on ERP software and the impact was economic.

For the companies that needed software to be fixed after Y2K (at least the ones I either worked on or was aware of), the business did not stop, but it certainly cost them significant amounts of money, typically to manually work around the problems.

Much of the money was well-spent.

I programmed my way through Y2K. All you had to do was set the clock to 12-31-1999 and let your applications run (on a test system).

All the financial and ordering software at my firm had to be fixed, not to mention the required FERC-mandated changes.

In August 1994, I walked into a credit union to deposit a paycheck. After looking at the check, the clerk refused to process it because, she said, it was older than six months. You see, dates, in French, start with the day, then the month, then the year. The check, written in English, was dated “08/01/94” which, to the French-speaking clerk, meant it had been issued back in January. So I went over to an English-speaking part of the city (I live in Montréal) and deposited the check in an ATM.

That incident got me worried for a while about what would happen, a few years down the road, when my paycheck would be dated “03/01/02”: that bank teller could see that date in three different ways.

But when the Y2K frenzy came along I felt reassured. After all, if companies and governments were spending billions fixing the year 2000 problem, surely they would fix this issue in the process by using four digits for all dates, and spelling out the months whenever necessary. Or just mandating ISO 8601 all over the place.

The first sign that it wasn’t being taken care of was when I received a MasterCard that expired in February 2001: the date was embossed as “02/01”!

It amazes me to see that companies, governments and ordinary people are still spewing out dates like 04/05/08 or 4/5/8, even on binding contracts, even on Y2K-proof computers, and even in pseudo-bilingual countries like Canada. Surely there’s a cost to all this confusion.

At the hotel I worked at in 1999, my system did hang while running the audit that night. I called software support and they did a fix.

It isn’t really personal computers that were the issue. It’s big business. In the business world, one of the most popular programming languages to use is COBOL, one of the first programming languages ever invented back in 1959. Anyone who has looked at COBOL and more modern languages knows … COBOL is some hideously archaic stuff. It’s only in use because there are so many really old programs using it deep in the bowels of so many big businesses.

All these really old programs were what really needed to be updated for Y2K. Any industry that automated things decades ago? Ten bucks says most of it was done in COBOL, and most of the code written in said decades is still being used. Often, these systems are so old that large parts are complete mysteries, because none of the original people who worked on them are there anymore.

Where I work, record retention times need something like a vice president’s approval to be marked ‘retain forever’ … because there are so many records from the ’60s that are marked ‘retain forever’, and no one knows what they’re for, or who owns them, or what they do, or whether they even do anything at all anymore (the system they used to be part of may have been shut down 20 years ago). Nobody says “yes, I authorize deleting this junk,” because if you don’t know who’s using it and you just delete it, it could break anything anywhere. The alternative is to spend enormous sums of money getting every currently used system in the whole company to search its code and say “no, we don’t need this” before you can delete anything.

So these really old systems just lie there for decades. Because they just work, even if no one remembers exactly how anymore. And many of these systems are the ones that had Y2K issues, and they are responsible for things like billing at the utility companies.

Now, a quick detail: If a computer is not expecting to get a negative number when it does math, and it gets one anyway, then it will think it’s a very large positive number. Hypothetically, suppose a Y2K bug was missed in … let’s say … Com Ed’s billing system. On Jan. 1, 2000, it checks everyone’s outstanding balance. It wants to count the number of days between your last payment and today. If it thinks the year is now 1900, then it’s been -100 years since your last payment; the machine might interpret this as the account being in arrears for (very large number) of days, and initiate the process of shutting off the power. Potentially, for every account holder.
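The negative-number wraparound is easy to demonstrate; a 32-bit unsigned width is assumed here purely for illustration, and the billing numbers are hypothetical:

```python
def as_uint32(n: int) -> int:
    """Reinterpret a (possibly negative) result as a 32-bit unsigned
    value, the way a machine doing unsigned arithmetic would see it."""
    return n % 2**32

# Hypothetical post-rollover billing check: year "1900" minus 1999,
# in years, times 365, read back as an unsigned day count.
days_in_arrears = as_uint32((0 - 99) * 365)
print(days_in_arrears)  # 4294931161 -- roughly 11.7 million years overdue
```

An account “in arrears” for billions of days sails past any overdue threshold, so the shutoff process fires for everyone at once.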

Pretty much every large company has dozens, possibly hundreds, of such systems deep inside them, running all the wonderfully automated systems in place today. Every single one of these systems had to be checked, and (anecdotally I am told) most of them needed to be updated.

As a developer who helped fix some stuff and knew what the situation was, I don’t think the money spent was wasted in general, though some of it could have been. However, when 2000 rolled around, the hysteria, fed by the media, was unnecessary. The problem had been fixed by then, but the media didn’t seem to want to acknowledge that, probably in order to have a story to cover and/or because they didn’t understand the technical aspects. So: necessary to fix, and unnecessary hysteria, at the same time.