Irishman, I understand your post, but this statement by CP is still factually incorrect:
You’d have a better proof case if you could cite an organization which spent nothing and crapped out because of it, as opposed to simply fixing anomalies the way every other buggy software gets addressed.
Perhaps a re-reading of Cecil’s take would be helpful, including the estimated dollars spent, for those who think it was not all that big a deal. For those of us in leadership roles making management decisions about how to spend money, it certainly felt like a big deal…
The fiefdom of which I was in charge was a large healthcare enterprise. And we (based on my recommendation) took a very low-level approach. I can assure you the media hysteria-mongers, consultants, and other healthcare enterprises put enormous pressure on us to spend more and take it more seriously.
Of course there were scattered pieces of code across the world that needed fixing. Some got fixed prior; some got fixed after. I repeat: that’s NOT Y2K. Y2K, as in the OP title, is referring to the notion of taking that scattered little issue and turning it into a Great Cause. And organizations which bought into Y2K hoopla fared no better than those who did not, on average.
I am sure you can find a cite where someone’s software crapped out because they did not address Y2K issues. But to compare, on average, organizations which spent Y2K millions against those which did not, you have to look at the opportunity cost of diverting budgets into the Y2K hoopla. That’s my point. We had (for instance) healthcare enterprises spending millions on Y2K amelioration. Those millions were dollars not spent on critical issues elsewhere. And the organizations which did not spend an equivalent amount of money did not fare any more poorly.
The same line of reasoning can be extended through all industries, with essentially no proof cases that, on average, there was a difference. As I mentioned, Cecil uses Italy for a proof case at a national level.
I’m not demeaning the coder who went in and fixed buggy code. I’m not saying nothing needed fixing ever. I understand the defensiveness of those who consumed a big part of their life promoting Y2K as a Great Cause. But as a Great Cause–a Great Danger–a potential TEOTWAWKI–it was a bust. And a huge percentage of the money, hysteria and ameliorative effort was squandered. And squandered using resources better spent elsewhere.
You stated that there was no difference between those that fixed in advance and those that did not. There clearly was; I gave you one concrete, simple-to-understand example. That organization had to spend money both to fix the software and to clean up the mess the failure created, while those who fixed it in advance paid only for the fix and had no mess to clean up.
You get that, right? They spent extra money to clean up the mess created.
But that is exactly Y2K, as I said before. It’s the root cause of the entire thing. Without it there was no Y2K. That’s where the money was spent: fixing software and hardware with date problems.
I was the project manager for the Y2K conversion utility at the ERP company I worked for. Anyone running our software (5,000 customers in the $50 million to $2 billion size range; we were a tier 1 vendor) needed either to upgrade to the Y2K-compatible version or to use the conversion utility to help fix the source code that needed to be fixed.
How did we know what needed to be fixed? We set the date on our servers to various dates surrounding Y2K and tested everything.
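Roughly, the test plan looked like this (a simplified sketch in Python rather than our actual tooling; the dates are the boundary cases most Y2K test plans covered):

```python
from datetime import date

# The boundary dates most Y2K test plans exercised.
TEST_DATES = [
    date(1999, 9, 9),     # 9/9/99, sometimes used as an "end of data" sentinel
    date(1999, 12, 31),   # last day before the rollover
    date(2000, 1, 1),     # the rollover itself
    date(2000, 2, 29),    # 2000 IS a leap year (divisible by 400)
    date(2000, 3, 1),     # first day after the leap day
    date(2001, 1, 1),     # first ordinary year-end afterwards
]

for d in TEST_DATES:
    # In practice you would set the test server's clock to d (platform-specific)
    # and re-run the regression suite; here we just print the plan.
    print(f"Set server clock to {d.isoformat()} and re-run the full test suite")
```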
I can assure you the impact to these organizations would have been very, very significant.
If you respond to anything in this post, please respond to this portion:
Let’s just take the MRP example I asked about in a previous post, which you didn’t respond to. Are you aware of the impact on the organization if you did not fix something like that well in advance? (See the sketch after this list for one concrete failure mode.)
What parts should the organization produce?
What raw materials should be ordered?
In what quantities?
What customer orders can be accepted?
How should existing inventory be allocated/prioritized?
Which lots have expired?
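Here is the sketch referenced above: a minimal, hypothetical illustration (Python, made-up lot dates) of just the last question in that list, lot expiration, when the expiry date is stored as a two-digit-year YYMMDD string:

```python
# Hypothetical sketch of a lot-expiration check with the expiry date stored as
# a two-digit-year YYMMDD string and compared directly, the common legacy layout.

def lot_expired(expiry_yymmdd: str, today_yymmdd: str) -> bool:
    # Straight string comparison, exactly how much legacy code did it.
    return expiry_yymmdd < today_yymmdd

# A lot expiring March 15, 2000, checked on December 20, 1999:
print(lot_expired("000315", "991220"))   # True -- WRONG: the lot looks expired
                                         # because "000315" sorts before "991220"
# The same lot checked on January 5, 2000:
print(lot_expired("000315", "000105"))   # False -- correct again, since both
                                         # dates are now on the "00" side
```

Multiply that by every date comparison in the planning run (need dates, due dates, shelf life) and you start to see the scale of the impact.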
I am sure you did a fine job and worthy work. Most organizations (countries even, as Cecil points out) noticed little difference in impact whether they had huge advance efforts trying to identify and fix just those sorts of issues, or just muddled through as stuff came up.
But if you just need reassurance your part in the Y2K effort was necessary and appropriate and cost efficient, then let me just congratulate you and let it go. I feel like I’ve made all the points I want to make on this.
Haha, I think it’s beginning to dawn on Chief Pedant that, among the many true things he’s said, he’s also said a bunch of stupid and untenable things. But instead of publicly setting the record right, he’d like to just peacefully wish us goodbye.
Alas, farewell, dear pedant. Hope you won’t say the same things again in the future.
:smack: Or not.
So you think people should have muddled along, fixing business-crippling bugs as they cropped up in the early months of 2000? What do you do, again? Oh, you work in healthcare? What a wonderfully well-functioning industry! We should take all our managerial advice from you!
While I don’t agree with the argument that Y2K spending was pretty much a draw (there were way too many consultants who charged WAY too much money to simply apply manufacturer-provided BIOS updates), I have to say that one line in the Y2K article *really* caught my eye.
Whoever wrote this article is either not at all in the IT field or is trying to challenge Microsoft to a legal battle.
I am not a Mac lover, a Linux freak, or a Microsoft lackey. However, anyone in the IT industry would know without a moment of doubt that Microsoft’s family of server products is the most widely used platform for hosting mission-critical operations.
I work in a data center for a globally dominant firm; just in our little corner here in Utah we own more than 200 licenses of Windows Server 2003 (mostly Enterprise or Datacenter) and are in the process of testing Server 2008. Our downtime is less than 0.5%, and almost all of our servers are virtual.
Yes, Linux has its security advantages, but compatibility is a major concern.
Statements like the one in the article are simply amateurish and patently untrue.
If you want to whine about Microsoft products, please open an opinion column.
Stating the same thing over and over without any data does not make it correct.
I can provide you (and Cecil, for that matter) with specific examples, and we can openly calculate the cost to “muddle through” versus the cost to fix. It’s easy to do; they will be real-life examples, and they will apply to entire classes of organizations.
I’m not the type to play games; let me know if you (or Cecil) want to dive into the details of the types of organizations I worked with, and we can see where the analysis leads.
P.S. Y2K was just one of many projects for me; I have no emotional attachment, but I do have a problem with blatantly incorrect statements.
By the way (even though I don’t care for this argument), for those of you in here claiming to have “fixed” many software problems with datecode inaccuracies, most of you are lying. GASP
Software queries the current date and time from the OS. Most operating systems were redesigned *not* to use the CMOS time and assuredly used 4-digit representations for the year.
Software designers (at least in Windows) actually have to create *extra* code to provide a two-digit representation for the year. This has been the case since at least Windows 95. However, this in turn does not make the software “forget” what century we are in; the datecode is still the same, and it’s just the user who sees a year without the century before it. The OS provided that software with the date in the proper format. Had the software not been able to interpret the 4-digit year, we’d have been having Y2K problems long before Y2K.
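A minimal illustration of that point, using Python's time module as a stand-in for the underlying OS date call:

```python
# The operating system hands back the full four-digit year; a two-digit year
# only appears if the program explicitly asks for that formatting.
import time

print(time.localtime().tm_year)   # e.g. 2009 -- century included
print(time.strftime("%Y"))        # "2009"    -- four-digit year
print(time.strftime("%y"))        # "09"      -- two digits only when you ask for it
```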
As far as hardware goes, any data center or IT department should have been able to identify and apply patches to mission-critical hardware without additional assistance; any IT department that couldn’t do that is worthless.
In the ’80s, I mentioned to a senior programmer (I was very junior at that time) that all of our COBOL programs would have a problem come 2000. His response? “Those programs won’t be running then. Computers will all be like HAL in 2001: A Space Odyssey.” You’d be surprised how many people held similar opinions back then. As somebody who’s been working as a programmer from then till beyond 2000, I can tell you for a fact that many, many programs were written to use 2-digit years, up into the ’90s, and many of those programs ended up being used a lot longer than anyone thought they would be. So a lot of them did require remedial work.
The stuff about planes crashing and nuclear meltdowns was nonsense, but a lot of the remedial work done on business software was necessary. I’m not saying there would necessarily have been some sort of business or financial meltdown without it but in January 2000 there may have been a lot of people working overtime to fix a bunch of annoyances and possibly a few business impacting issues. I know that I had to fix a few small things that we missed that popped up in January and even February, and I suspect that there were a few businesses that had to fix some larger issues that January.
You’re talking about Windows. There were a LOT of legacy programs written in things like COBOL and RPG that were still running on non-Windows OS’s.
It’s blatantly false to say that there were a “lot” of legacy programs out there built in COBOL. In the 90’s any company that was competitive at all moved to OS/2, Windows, or Linux.
I’d be hard pressed to find even one company that was using anything older than an AS400.
My mother was still working on COBOL systems when she retired from Lincoln Financial last December. Which was, incidentally, the same company that I was consulting at in 1999 (at the time, they were Jefferson Pilot), working on programs that, as I said, were in active failure in November of 1999.
Okay, the word “lot” is subjective, but there were more than a few out there.
I don’t understand why people have this need to prove that it was a complete non-problem, even to the point of calling people liars. It certainly wasn’t a world ending problem like a few alarmists were predicting, but there was software that needed to be fixed or replaced. I spent a lot of time going over old software in '99. Some of it we fixed, and some of it we abandoned or replaced.
As of 1997 there were approx. 200 billion lines of COBOL running business software.
Servers in the data center are still primarily mainframes, AS400, AIX, HP-UX, and Solaris. Linux is obviously making great inroads.
Windows dominates as a departmental server, but that’s still the low end.
AS400s (though renamed) are produced every day. It doesn’t make sense to say “older than an AS400” any more than it does to say “older than a PC”: which one, and when was it manufactured?
Please do tell, which of us are lying and which of us are telling the truth?
- Many dates in a transactional system are entered by the user because the computer has no idea what date the customer wants the order shipped (for example), but the user does, and they key it in. Querying the OS for the date the user wants the order shipped would really be pointless.
- Most software, for many decades, was written such that the user would enter the date in MMDDYY (or YYMMDD or DDMMYY) format. The date was typically converted to YYMMDD and stored in the database just like that.
- Languages like COBOL and RPG had built-in facilities to retrieve the system date; it was often returned in YYMMDD format and stored in the database just like that.
This is clearly not where the problem was. It was in the business software running on the Mainframe, Unix and AS400 servers where dates were not always handled in the way you describe.
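To make the YYMMDD point concrete, here is a minimal sketch (Python, with invented order numbers) of how that storage format misbehaves once the century rolls over:

```python
# Ship dates stored as two-digit-year YYMMDD strings, as described above.
orders = {
    "A1001": "991115",   # ships November 15, 1999
    "A1002": "991230",   # ships December 30, 1999
    "A1003": "000112",   # ships January 12, 2000
}

# Sorting "oldest first" by the stored string puts the January 2000 order at
# the top, because "000112" compares lower than either 1999 date.
for order_no, ship_date in sorted(orders.items(), key=lambda kv: kv[1]):
    print(order_no, ship_date)
# A1003 000112  <- the year-2000 order, wrongly treated as the oldest
# A1001 991115
# A1002 991230
```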
I am not in the group of people who believe that there weren’t any problems to be fixed; however, I know that the problem didn’t deserve nearly as much attention as it was given.
The fact that Windows users were being told to worry about Y2K at all was the most appalling part; even Windows 2.11 was able to differentiate between ’69, 1969, and 69 A.D.
I don’t feel bad for the mega-corporations that enlisted the help of consultants who couldn’t tell the difference between a CPU and a tower. It was their fault for shelling out all that money.
Though, you have to admit, Y2K was the biggest rabbit trick ever pulled. Everyone knew it wasn’t anywhere near as big a deal as they said it was, but there wasn’t any money in staying calm, was there?
Raftpeople, you’re really blowing things out of proportion, for example:
There is ABSOLUTELY no way to estimate how much COBOL-developed software was still in use; your estimate, no matter how you justify it, is inherently incorrect.
Secondly, you talk about mainframes like they’re all a bunch of old computers the size of a battleship with vacuum tubes and lights flickering like a tourist sign. I used to work for Unisys and was sent to the assembly and configuration center in Texas to learn more about the mainframes we were selling. Just like any other desktop computer, a mainframe simply requires the BIOS to be updated. The mainframe management software was updated by continued support. If any organization owned a mainframe at the time that did not receive continued software and hardware updates through a contract, the company was doomed to fail anyways. I have never seen a mainframe that did not require maintenance upgrades every 6 months, at the least.
Don’t pretend like you’re some AS400 god, either. The AS400 system is unlike Windows in that it hasn’t changed much outside of virtualization (which wasn’t a far stretch, since it was already a terminal system). I’m happy for IBM now that they’ve renamed it to the i-series, but that doesn’t mean it’s all that different. Yes, there have been changes, but no one on this board should be so petty as to go on a rant about my example.
Whether or not the Y2K problem was with desktop computers, that is where the opportunists hit hard, and that is part of the argument. Don’t try to dismiss it.
Lastly, your estimation that dates were commonly in a six-digit format was a pretty grand one, one that is COMPLETELY false. Do you really think that the desktop PC market had a monopoly on accurate date formats??? You’re really in a dream world if you think that’s so.
The reality is that there was some fixing to do, it wasn’t as big as you or any one of the ring-leaders of the Y2K movement played it out to be. Most businesses realized the weakness in utilizing antiquated technology that had such limitations and moved on. Do you REALLY think that database developers like Oracle would go very long before fixing that limitation???
Oh and as a further example of how people blew the entire Y2K situation out of proportion:
When Microsoft moved from a 16-bit OS to a 32-bit OS, it became obvious that every company would have to undergo some serious changes. Was there panic? NO. Did companies hire a conversion consultant? NO!
You know what the funny thing about that is?
A conversion from a 16-bit to 32-bit OS is a hell of a lot more difficult than anything that Y2K presented.
The company I work for is required by contract with Microsoft to convert to Windows 7 and replace all server OS’s with 64-bit versions by next year. Are we in a panic about January 1st, 2010? NO!
That was a Gartner estimate in 1997. Their methods may have been good or may have been bad; I’m not sure.
What I do know is that they are probably more accurate than your statement:
“It’s blatantly false to say that there were a “lot” of legacy programs out there built in COBOL.”
I’m not sure where I posted anything that would give you that impression, can you point me to it?
Not sure what this has to do with software and how the software stores dates.
Am I pretending to be an AS400 god? I do know quite a bit about them, as I did work on them for years.
Not sure what your point is.
This may be correct. I did not personally experience it. However, you used the pc statements to attempt to counter factual statements based on non-pc systems, which did not make sense.
Not my estimation, fact. I worked on systems that stored dates like this for 15 years prior to Y2K. It’s hard to tell if you are serious here because it’s such a commonly known fact (but your statement is pretty entertaining).
No. However, 6 digit (and 4 digit period/year financial periods) were littered throughout business software.
This has nothing to do with what I think; I lived and breathed these systems and saw it firsthand. Once software is written, it can be costly to go back and fix it. Also, it tends to get copied into the new software being created or modified. The problem proliferates quickly.
When your financial software incorrectly calculates finance charges for 30 million customers due to the way it stores dates, is that an easier or a harder problem to fix than going from a 16-bit to a 32-bit PC OS?
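To put that question in concrete terms, here is a hypothetical sketch (Python, with an invented balance and rate) of that failure mode: days outstanding computed from two-digit years.

```python
from datetime import date

def days_outstanding(invoice_yy, invoice_mm, invoice_dd, run_yy, run_mm, run_dd):
    # Legacy assumption baked in: every two-digit year means 19YY.
    return (date(1900 + run_yy, run_mm, run_dd)
            - date(1900 + invoice_yy, invoice_mm, invoice_dd)).days

# Invoice dated December 1, 1999; finance-charge run on January 10, 2000.
days = days_outstanding(99, 12, 1, 0, 1, 10)
print(days)   # a hugely negative day count (about -36,000) instead of 40

balance, monthly_rate = 500.00, 0.015
print(round(balance * monthly_rate * days / 30, 2))   # a large negative charge
```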