Was the Y2K crisis overstated?

Luckily John Titor traveled backwards in time to help us resolve the issue. Of course he did such a good job that the problem is now considered overblown and remembered as a nothing burger.

I’d say it was a nothing burger exactly because it was overblown. So it being overblown was an outstanding success, really the best case scenario.

It did cause some massive overreaction, though.

At the time I was working with a product that incorporated a simple time clock. The product’s display showed the time, day of month, month, and day of week: 10:30 AM Fri May 12, for example.

This product did not know what year it was. There was no data input for the year. The only reason it knew the day of week was that you selected it when you set the date and time.

I honestly thought the above explanation would be good enough for my customers, who were generally intelligent with good conceptual reasoning skills. But I was wrong, and no amount of explanation would convince them that their systems were not going to crash at midnight on NYE.

Finally, the manufacturer did extensive testing to confirm that their time clock, which didn’t know what year it was, was not going to crash when the year rolled over, and issued a multi-page white paper that was a waste of time and money, but it cooled the hysteria.

I used to work for a large IT consultancy, and one of our US cousins devised some tools to scour through huge COBOL source code repositories to find and track dates, and to amend the code to cope with Y2K. We set up several centers to securely hold customers’ code and “renovate” it to be Y2K compliant. The centers were called “Application Renovation Centers”. A highly contrived acronym… the guy who devised the tools was called Noah.

Most of the renovations worked by adding code to check whether the year part of a date (YY) was before or after a specified pivot year, and assuming it was 19YY if after, and 20YY if before. Most of these will have failed by now, but I got out of that game before it happened!

…and IIRC, ALL of our clients were in Banking/Insurance.
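A minimal sketch of that fixed-pivot “windowing” fix, in Python for illustration (the pivot of 50 is an assumed value; real projects picked per-application pivots):

```python
def expand_yy(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year: 19YY if YY is at or after the pivot, else 20YY."""
    return 1900 + yy if yy >= pivot else 2000 + yy

assert expand_yy(72) == 1972   # at or after the pivot: assumed to be 19YY
assert expand_yy(24) == 2024   # before the pivot: assumed to be 20YY
# The window is fixed, so any genuine 20YY date with YY >= pivot (say, a
# mortgage maturing in 2055) comes back as 19YY, which is why these
# renovations eventually fail.
```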

I worked on a DEC system which used the number of days since 1970 in an integer field. Its “Y2K” was 2002!
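The poster doesn’t say how wide the field was or how the 2002 limit arose, but the failure mode generalizes: a days-since-epoch counter in a fixed-width integer has its own built-in rollover day. A sketch with assumed field widths:

```python
from datetime import date, timedelta

def rollover_date(epoch: date, max_days: int) -> date:
    """Last date representable by a day counter capped at max_days."""
    return epoch + timedelta(days=max_days)

print(rollover_date(date(1970, 1, 1), 2**15 - 1))  # 2059-09-18, signed 16-bit field
print(rollover_date(date(1970, 1, 1), 9999))       # 1997-05-18, 4-digit field
```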

If only Honda could fix the calendar in my CRV …

I don’t like it when people try to somehow pin this on Admiral Hopper…

The problem is that these Nobel Prize winners were not sacked and the code fixed properly.

It sounds generally wrong to have a cutoff that doesn’t float relative to the current date.

But there could be exceptions, such as the purchase date for a product that must not be sold after a fixed date.

This stuff was not super-complicated, but if you didn’t know what you were doing, you could indeed create a new cliff. A lot of it was normal periodic software maintenance.
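For contrast with the fixed pivot above, a sliding window recomputes the cutoff from the current date, so it has no built-in expiry; the purchase-date case above is exactly the exception where a fixed cutoff is the right call. A minimal sketch, with the 50-year look-ahead as an assumed parameter:

```python
from datetime import date

def expand_yy_sliding(yy: int, lookahead: int = 50) -> int:
    """Interpret YY as the year no more than `lookahead` years past today."""
    this_year = date.today().year
    candidate = (this_year // 100) * 100 + yy
    if candidate > this_year + lookahead:   # too far in the future: previous century
        candidate -= 100
    return candidate
```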

I remember reading an article which mentioned an anecdote related by a doctor. The article was about how much waste and grift was associated with Y2K. He was in a discussion with a patient, and the patient suddenly started laughing, which the doctor thought was odd, since what they were discussing wasn’t funny. The doctor asked the patient what he was laughing at, and he pointed to a sticker on the side of the X-ray light box on the wall, saying that it was Y2K compliant. The doctor said “it’s a box with a light bulb inside of it, of course it is.”

But was it just the box with a light in it that was being referred to, or the whole x-ray machine?

Just the box.

I think both. It was overstated, but it could have been really bad, just not a “Disaster of Biblical Proportions”. :scream: However, people got on it, and it was fixed.

I ran the Y2K program for the cable television MSO/entertainment conglomerate I worked for. We spent about $160M overall on remediation, half of which went to accelerating replacement of systems we would have replaced anyway. A lot of it was for consultants (KPMG, primarily, mostly coming off of banking-related remediation projects) and disaster recovery/contingency plans. (One of the consultants smoked “$100 cigarettes”: he took 15-minute breaks at $400/hr.)

I agree with the comments that it was both overblown in terms of real risk (I spent a lot of time with the regulators, who feared what would happen if there was no TV to get news from) and a non-event due to all the work that was put in. A lot of that was because it was seen as a major liability concern - much of my work was non-technical, dealing with the attorneys and risk management teams.

Our initial step was an audit of “anything with a plug” (like the x-ray machine), and all the software we used. There is a lot of that stuff in cable headends, movie theaters, arenas, etc. We definitely wasted money on things like a comprehensive test of Microsoft Word (I contended that was not necessary as they had done it themselves, and the impact of changing a few dates in a document was low, but the ball was rolling).

We had a huge team onsite on the big night, and had rented Y2K-compliant satellite phones for the executives in case of telecom system collapse. We also rented laptops (not common in that era) in the event we needed to “bug out” to a mobile command center we’d established.

Someone took a video in the main command center of the countdown. The laptops all crapped out at midnight, but there were no real problems reported other than a Chyron machine in a public access studio in one of our franchises.

I like to think that was because of all the hard work we put in - we absolutely found and fixed problems, but it was definitely overblown.

Some folks don’t remember that there were a few other “hazardous” dates after 1/1/00. I remember sitting in the office alone on the night of Feb 28, 2000, as there was concern that 2000 being a leap year would mess things up somehow. For years afterward, our Purchase Orders had language with my email in it, for vendors who needed to tell us if their products were compliant.
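The Feb 29, 2000 worry had a real basis: 2000 is a leap year only because of the 400-year exception, so code that knew the century rule but not the exception would wrongly treat 2000 as a common year. The full Gregorian rule, for reference:

```python
def is_leap(year: int) -> bool:
    """Every 4th year, except century years, except every 400th year."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap(2000)       # divisible by 400: a leap year after all
assert not is_leap(1900)   # an ordinary century year: not a leap year
assert not is_leap(2100)   # the next century year that won't be one
```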

TL;DR: overstated in the extent to which it would be a problem, but luckily we never found out, because we took it seriously.

As an aside, that assignment led to my career in Program Management, which has worked out pretty well.

Were you absolutely positive that you could find, fix, and test Y2K fixes in only a few days? Enough not to worry about your company becoming the focus of press reports as the company that ignored the problem, so that now old people aren’t getting their money? Hmm.

I spent years convincing chip designers to put extra hardware in so that most defects created during manufacturing would get caught, and I had to do finance models proving how much we’d save. Cost avoidance is tough to show. We often only got leverage and high level directives after a disaster.

I was working in IT in systems development and support at the time. We had to fix a few things, we were dependent on some fixes from external suppliers, and there was one weird legacy issue I had to fix myself. That one was a Y2K+1 issue: a piece of software would accept that it was 100 years since the turn of the last century, but went horribly wrong if it was 101.

There are lots of things that would have gone badly wrong if not patched. I don’t know about things like planes falling out of the sky, but a lot of supply-chain software needs dates to work correctly (calculating forward orders based on past demand, for example). I’ve seen reasonably smart systems do really dumb things when they hit a single item of unexpected data, so the capacity is certainly there for them to do lots of dumb things when everything is wrong, because yesterday seems to be 100 years in the future.
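A toy illustration of that “yesterday is 100 years in the future” failure, assuming the crude 30/360-style day arithmetic a lot of legacy code used (function name made up):

```python
def to_days(yy: int, mm: int, dd: int) -> int:
    """Crude 30/360-style day count keyed on a two-digit year."""
    return yy * 360 + mm * 30 + dd

# One real day elapses from 99-12-31 to 00-01-01, but the naive math says:
print(to_days(0, 1, 1) - to_days(99, 12, 31))   # -36000: a century backwards
```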

It’s tempting to imagine that if inventory control and logistics systems went down, people would somehow manage to keep going with pen and paper until it’s fixed, but there are a lot of co-dependent pieces to it. Unpatched, the Y2K bug would certainly have disrupted food supplies, maybe banking and payments too - and not being paid/not being able to pay has a LOT of potential knock-on effect.

I remember in the earlyish 90’s some people talking about how you should take out a huge loan at the end of 1999, because then the bank’s computers would think that they would owe you 100 years of interest once the odometer rolled over.

I never thought it would work; if nothing else, the humans involved would understand that something was not right. But if it hadn’t been fixed, it probably would have caused a number of headaches on both sides of the teller window.

Surely if you were the one borrowing, you would owe the bank 100 years of interest…

Edit: oh, I see, if time has gone back 100 years. Yeah… I don’t imagine it works like that.
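For what it’s worth, the legend’s arithmetic does cut the way the edit concludes: computed from two-digit years, the loan term goes negative across the rollover, so naively calculated interest runs backwards. A toy sketch (simple interest only, function made up for illustration):

```python
def naive_interest(principal: float, rate: float, start_yy: int, end_yy: int) -> float:
    """Simple interest over a term computed from two-digit years."""
    years = end_yy - start_yy        # 0 - 99 = -99 across the rollover
    return principal * rate * years

print(naive_interest(100_000, 0.08, 99, 0))   # -792000.0: the bank "owes" you
```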

Hard disagree here.

I was managing the IT department at a major regional bank at the time. The organization spent a lot of time and money to first identify potential issues, then fix those shortcomings, and then rigorously test each fix. It was a major pain in the ass, but the result was that we had absolutely no problems when the clock struck midnight.

Now, compare that to what might have happened if our customers hadn’t been able to withdraw funds or access an ATM or pay their mortgage on the first of the month. Yeah, we might have been able to fix it in a few days, but the bank would have lost a tremendous amount of its reputation, not to mention probably losing customers and deposits.

Money well spent, IMO.

Y2K was a little (but not a lot) before my time career-wise, but my understanding is that the real problem was haunted graveyards: chunks of code that companies relied on but were too afraid to make changes to.

Were those programs using the “every year starts with ‘19’” shortcut? The answer wasn’t “yes they are, and that’s bad”; it was “the person who can answer that question retired ten years ago, and no one currently here can tell us assuredly that the answer is ‘no’, nor are they likely to be able to update it quickly without breaking something else we rely on”.
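The shortcut in question, in miniature (Python for illustration; the real offenders were mostly COBOL):

```python
def format_date(yy: int, mm: int, dd: int) -> str:
    """Two-digit storage plus a hard-coded century: fine until it isn't."""
    return f"19{yy:02d}-{mm:02d}-{dd:02d}"

print(format_date(99, 12, 31))   # 1999-12-31
print(format_date(0, 1, 1))      # 1900-01-01, not 2000-01-01
```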

Y2K-like situations are counterpoints to the “if it ain’t broke, don’t fix it” philosophy of software management.

Concur. Waiting until everything is on fire, then saying “Hey, how about some fire prevention?” is not a good strategy (although it is a popular management approach when there are people at the top with no basic grasp of risk management).