You seem remarkably and uncharacteristically reluctant to say that the Y2K money was wasted. You appear to follow the line of thinking that “well, it was better to be safe than sorry”.
This approach, however, ignores the “opportunity cost” (as economists would put it) of all this. That is, what else goes unfunded and unexamined if we are throwing all sorts of time and money at Y2K?
Maybe we should have instead spent the money on economic development, fighting malaria or illiteracy, or funding human rights organizations.
And, yes, this applies similarly to other Chicken Little scenarios, including this year’s $800 billion stimulus bill or the fight against global warming. Did these address real or potential problems? Yes. Were/are they the best use of our time and money? Let’s run some cost-benefit analyses before we rush out and spend, spend, spend. In particular, the recent Waxman-Markey bill that passed the House is an enormous cost that will have very little effect on global temperatures. What else is going unfunded as a result??
As a programmer who worked for over a year on Y2K fixes, I’d say they weren’t wasted.
Granted, most problems were already covered. (E.g., Oracle already allowed a system variable that mapped 2-digit years to a specific date range. Ours was set so that 2-digit dates were assumed to fall between 1/1/1920 and 12/31/2019. Date comparisons would then work so that 10/15/05 would be “greater than” {later than} 12/25/96.) But there were the occasional little things, mostly done by previous programmers who didn’t know about Oracle’s system variables, that would act funky after 1/1/2000.
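For anyone curious, that windowing idea is simple to sketch. Here’s a toy Python version of the concept (my own illustration, not Oracle’s actual mechanism), using a 1920–2019 window like the one we had:

```python
from datetime import date

# Hypothetical sketch of a 1920-2019 pivot window for two-digit years:
# years 20-99 become 19xx, years 00-19 become 20xx.
def expand_two_digit_year(yy):
    return 1900 + yy if yy >= 20 else 2000 + yy

def parse_mdy(text):
    m, d, yy = (int(part) for part in text.split("/"))  # "MM/DD/YY"
    return date(expand_two_digit_year(yy), m, d)

# With the window applied, 10/15/05 really does sort later than 12/25/96.
assert parse_mdy("10/15/05") > parse_mdy("12/25/96")
```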
I actually found a Y2100 problem: that year won’t be a leap year, but part of the code simply assumed that all years divisible by 4 were leap years. Oracle knew better, though, and would crash the program if 2/29/2100 was used as a date anywhere. Now, I sincerely doubt that this program will still be in use 90 years from now. But if it is, my little notation in the repair log with “AWB” next to it will be my little piece of immortality.
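The rule the old code missed is the full Gregorian one, which is short enough to write out (a minimal illustration, not the code from that repair log):

```python
# Full Gregorian rule: leap if divisible by 4, except century years,
# which must also be divisible by 400.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap(1996)        # ordinary leap year
assert is_leap(2000)        # century divisible by 400: leap
assert not is_leap(2100)    # century not divisible by 400: not leap
```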
Another egregious example of the Act Now, Think Later paradigm is the Iraq War. We have spent and are spending billions (or is it trillions?) of dollars stopping Iraq’s non-existent Weapons of Mass Destruction program.
Y2K audits of PCs typically found few problems in machines dating from 1997 on. Conceivably those few might have included some critical applications, except for the second factor, which I offer in all seriousness: Windows is so notoriously unreliable that no one would ever build a life-or-death system around it.
Or, to play devil’s advocate: the scare was probably horribly exaggerated, but that doesn’t mean the money spent and the work done was necessarily a waste. Some of the systems that were examined had been in service for decades and probably were sorely in need of some upgrading. And let’s not forget the economy of the late 90s. I may not be an economist, but I don’t think it’s a terrible stretch to imagine that all the jobs created in response to this perceived threat were a contributing factor to the economic state at that time, especially in the technology sector.
So, while I do agree that a cost-benefit analysis is important when trying to justify a “better safe than sorry” approach to the Y2K scare, I think it’s a lot more difficult to do so here because it’s so difficult to isolate the actual threat and benefits.
The Y2K problem required a lot of stuff to be fixed. It wasn’t a big deal. You upgraded software or installed a patch or revised some code or maybe even replaced some hardware/software.
But if all the IT guys in the world had had to start at the bottom and explain the Y2K problem to their boss, who would have to explain it to his boss, who would have to explain it to his boss, etc., nothing would have been fixed.
The great service that the Y2K problem provided was to get management from the top down to tell people to fund and fix the problem.
I don’t think anyone can ever answer the question of what would really have happened if no one had done a thing. The fact that nothing major happened doesn’t mean it wasn’t a problem. Thanks to all the publicity, people fixed the problem.
I don’t think it’s something that people who aren’t nerds would know about, so I wouldn’t be surprised if it just didn’t come up in his research or he just figured it wasn’t relevant enough. To be honest, I actually think that one has more potential to be a real issue than the Y2K one, but it’s still almost 30 years away; surely we’ll be off 32-bit architecture by then… right? Oh wait, Y2K was caused by systems older than that… hmm.
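(The issue being alluded to is the 2038 rollover of signed 32-bit Unix timestamps. A quick back-of-the-envelope check in Python, just to show where that date comes from:)

```python
from datetime import datetime, timedelta, timezone

# Unix time counts seconds since 1970-01-01 00:00:00 UTC; a signed 32-bit
# counter tops out at 2**31 - 1 seconds after that.
unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(unix_epoch + timedelta(seconds=2**31 - 1))
# 2038-01-19 03:14:07+00:00 -- one second later the counter wraps
```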
I can’t speak to PC stuff, but I know there were definite problems in some systems. I worked for months on Y2K fixing teams for two different software houses with ERP products. This was on IBM AS/400 boxes (now iSeries), coding in RPG400. I had to go in and fix hundreds of programs. Not big changes, often just literally inserting the standard code fix we had come up with; it was quite dull, actually.
If we hadn’t, however, there would have been real consequences for those using the product. Some programs would have crashed as they were passed data outside the boundaries they were programmed for. Mostly, however, the problem would have been simpler but equally frustrating: users just wouldn’t have been able to input their data. They’d put in a date as part of a database enquiry, a purchase order, maybe a search query, etc. The, let’s say, 10-year-old program would look at the input, check it against valid values, go “Nope, outside the boundaries,” and flash an invalid-value message at the frustrated user.
And of course this would have happened well before Jan 1st, 2000, any time someone entered a value with a future date. An advance order, an expiry date, etc. would have hit the same problem, which is why the companies I was working for at the time had to start fixing it in 1997, years ahead of 2000.
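To make that failure mode concrete, here’s a toy version of that kind of edit check (a Python sketch of the idea, not the actual RPG400 code): with the century hard-wired to 19xx, any post-1999 expiry or delivery date gets rejected the moment a user keys it in, years before 2000 itself.

```python
from datetime import date

# Toy edit check that hard-wires the century to 19xx (the Y2K bug) and
# insists an order/expiry date can't be in the past.
def validate_future_date(mmddyy, today):
    m, d, yy = int(mmddyy[0:2]), int(mmddyy[2:4]), int(mmddyy[4:6])
    entered = date(1900 + yy, m, d)      # "00" becomes 1900, not 2000
    return entered if entered >= today else "INVALID VALUE"

# In mid-1997, an advance order due 01/15/00 is already rejected:
print(validate_future_date("011500", date(1997, 6, 1)))   # INVALID VALUE
```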
Two things I remember from the lead-up to Y2K:
The first Y2K error I remember actually occurred in early 1999. A non-Y2K-compliant inventory management system started marking everything with an expiration date past 2000 for scrap. They lost a few hundred thousand dollars before discovering the error. Spending even $50k to fix that in advance would have been worth it.
In 1999 I was a nuclear electronics tech on a US Navy submarine. We had just upgraded to digital systems when the big Y2K panic started to hit, and we were told to evaluate our system for Y2K readiness. Some people were worried that the system would register an incorrect reactor power level/temperature/pressure/etc. at the moment of turnover, and that we had to find some way to forestall this disaster. Everyone seemed intent on imagining scenarios that could result from this. Only one thing: even though it was based on old Intel processors and used real-time clocks for calculations, we never set those clocks. Every time we powered up the system, it probably thought it was midnight on January 1, 1983. The same was probably true for a lot of embedded systems. They don’t care what the date or time actually is, only how much time has passed.
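That last point, caring only about elapsed time, is worth a quick illustration (a generic Python sketch of the idea, obviously nothing to do with the actual shipboard code):

```python
import time

# A control-loop style measurement that only needs the elapsed time between
# two samples. A monotonic counter gives the same answer whether the
# real-time clock thinks it's 1983 or 2000.
def rate_of_change(read_sensor):
    t0, v0 = time.monotonic(), read_sensor()
    time.sleep(1.0)
    t1, v1 = time.monotonic(), read_sensor()
    return (v1 - v0) / (t1 - t0)   # units per second of elapsed time

print(rate_of_change(lambda: 42.0))   # 0.0 for a constant reading
```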
Bottom line, as Cecil said, there were issues and things that needed to be fixed, but we (as a society) way overreacted. If software developers had been thinking ahead in 1997, and everyone else had rationally evaluated their risks and what was necessary earlier, it probably would have cost a tenth of what it did.
Jonathan
This entire Y2K hysteria was unnecessary from the get-go.
I am not a programmer; I am a mechanical engineer. I had ONE programming class in college, a FORTRAN class in 1977. And in this class, we discussed the Y2K issue. It wasn’t called that, of course, but it was discussed as something one needed to understand when writing a computer program.
If I, a part-time, as-needed dilettante programming neophyte, knew about this issue 23 years in advance, how stupid would a real programmer have had to be to make non-compliant software?
Add to that the fact that real dates are rarely used in programming; any serious programmer would use Julian dates, which are mere integers and do not suffer from Y2K issues in the first place.
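For what it’s worth, here is what a day-count representation looks like in practice (a Python sketch of the general idea; Python’s date ordinal differs from an astronomical Julian Day number only by a fixed offset):

```python
from datetime import date

# Dates as plain integer day counts: comparisons and arithmetic work
# across the century boundary with no two-digit-year ambiguity.
dec_31_1999 = date(1999, 12, 31).toordinal()
jan_01_2000 = date(2000, 1, 1).toordinal()

assert jan_01_2000 - dec_31_1999 == 1   # exactly one day apart
assert jan_01_2000 > dec_31_1999        # and in the right order
```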
Yes, IMHO, all that money was wasted. The programming industry, however, was not motivated to admit that, when so much money was being thrown at the problem. It behooved them to take a “better safe than sorry” attitude, accept the money, and pretend to fix all the problems that never existed at all.
If it was some new dot-com startup’s recent software, then surely it had been written to be Y2K compliant from the beginning? In which case, why couldn’t you just tell your bosses that you were fine and pass the message to the stockholders? I would have thought they’d be pleased.
My brief foray into computer programming was entirely in the Y2K area: analyzing, renovating and fixing.
I have no idea how widespread problems actually were, but they were certainly rampant in the systems I was working on. Some of these systems were in active failure in November of 1999.
Certainly people overreacted. The media characterization of the problem was both overstated and completely wrong-headed. But there was a real problem.
As a card-carrying member of the Y2K club, I feel the need to respond. I have been a programmer for 25 years, so I was involved at both ends of the problem: I created code and file structures that truncated dates, then fixed that same code as the turn of the century approached. And before the chorus of “how stupid could you have been to do that” starts up, let me remind people that even as late as the 1980s disk space was incredibly expensive. I remember paying around $63 per megabyte. At that rate, one terabyte of storage would have cost something over $66 million. Today a one-terabyte drive is on sale at Best Buy for $119.99. So of course we did everything possible to save space.
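(The arithmetic behind that figure, for anyone checking:)

```python
# 1 TB = 1,048,576 MB; at $63/MB that's roughly the figure quoted above.
print(f"${63 * 1024 * 1024:,}")   # $66,060,288
```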
Perhaps some of the problem was exaggerated, but certainly not all of it. The Y2K problem was absolutely real, and it should not be thought of as an over-hyped myth just because nothing bad happened. My experience is limited to software, not embedded devices, so I can’t speak to all the claims about cars not working, medical devices failing, etc. If some of that was exaggerated, I don’t know. In my company I can tell you that our general ledger, point-of-sale, and several other critical systems would have stopped working properly. All of our default date-handling logic would have been confused, and any date comparison (of which there are thousands per day in even the smallest company) would have failed. After all, how do you tell a computer that two-digit year “00” is greater than “99”?
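That rhetorical question really is the whole bug in miniature. Here is a small Python sketch of how raw two-digit comparisons go wrong (my illustration, not our actual ledger code); the fix was the kind of pivot window described upthread.

```python
# With raw two-digit years, the year 2000 sorts before 1999.
txn_1999 = "99/12/31"
txn_2000 = "00/01/01"
assert txn_2000 < txn_1999     # string comparison: 2000 looks "earlier"

# Even a simple age calculation goes negative:
age = int("00") - int("65")    # born in 1965, "today" in 2000
print(age)                     # -65 instead of 35
```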
If we had done nothing, would civilization have ended? No, not really. Would lots and lots of things have gone wrong? Yes, definitely. To have ignored the problem, waited to see what would break, and then fixed things as they cropped up would have been both irresponsible and impossible to do quickly. Imagine a retailer unable to ring up sales because their point-of-sale system is broken for months while the programming staff looks into it.
The real reason “nothing happened” is that everyone in the technology world knew what was coming. There were definite solutions (nothing mysterious about what needed to be done), and uncounted thousands of programmers around the world worked at it for a couple of years ahead of January 1, 2000. By the time the clock ticked over, most systems had been patched, tested, and in production for months.