Are ObamaCare computers being hacked?

The whole system is one giant cluster. Although I have very, very good health insurance through work, I wanted to see the cost of different plans, just to get my own data point on whether the ACA will work as designed. I know exactly what my company pays for my health coverage, so I can use that figure to compare to what I could buy through the exchanges.

Today, for the first time since the system went live, I was actually able to log in! Woooo! I feel like it’s 1994 and it’s the first time I browsed the Web. Then I get this page:

http://img811.imageshack.us/img811/7892/orgn.jpg

Isn’t it the most beautifully rendered web page ever?

This morning, I was actually able to get further and enter my personal information. After I entered my vital info and Social Security number, the system popped up an error saying it couldn’t verify my immigration documents. Interesting! I guess I just found out today that I wasn’t born in the United States like I thought all along. After all, the government must always be correct, right?

The whole system is going to crash and burn. If they can’t even keep a simple website up that handles less than 1/100th of the traffic of sites like Google and Amazon, how are they going to manage the whole healthcare system? Alexa currently ranks healthcare.gov 298th by traffic among US sites and 3,689th in the world. If 3,688 other websites can handle more traffic and stay operational, why can’t a nearly $1 billion system?

+1. This is the sort of thing that’s difficult if you don’t know what you’re doing or have inappropriate constraints, but relatively straightforward if you have the experience and the freedom to choose the correct infrastructure. There are plenty of startups who manage to scale their online systems just fine, with minimal resources and time.

Kinda like beta testing Windows Vista years ago.

Are you kidding? I work with big data systems and data integration as well, and their back-end requirements would scare the crap out of me. Have you actually looked at the architecture of the back end? Their ‘hub’ has to connect to literally dozens of legacy systems, some of which are antiquated. The records between them all have to be normalized. They also have to connect to numerous insurance providers, and many of these providers won’t even have fields in their own databases for some of the necessary data. Not only that, but a lot of this has to be done transactionally, with the ability to unwind a transaction if any of the data writes to these legacy systems fails. Some of those systems may not even have interfaces capable of handling transactions.
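To make the ‘unwind’ part concrete, here’s a toy Python sketch of the compensating-transaction dance the hub has to do when one back-end write fails. Every class, field, and system name in it is invented for illustration - real legacy interfaces are far messier:

    # Minimal sketch: write to several legacy systems that can't join a real
    # distributed transaction, and "unwind" by hand if any write fails.
    # All names and fields here are hypothetical.

    class LegacySystemError(Exception):
        pass

    class LegacySystem:
        """Stand-in for one back-end system the hub must write to."""
        def __init__(self, name, fail=False):
            self.name, self.fail, self.records = name, fail, []

        def translate(self, applicant):
            # Every system wants the data in its own shape.
            return {"id": applicant["ssn"], "name": applicant["name"].upper()}

        def write(self, record):
            if self.fail:
                raise LegacySystemError(self.name)
            self.records.append(record)

        def delete(self, record):
            self.records.remove(record)

    def write_enrollment(applicant, systems):
        """Write the applicant to each system; if any write fails, compensate
        by deleting whatever already went through, in reverse order."""
        completed = []
        try:
            for system in systems:
                record = system.translate(applicant)
                system.write(record)
                completed.append((system, record))
        except LegacySystemError:
            for system, record in reversed(completed):
                try:
                    system.delete(record)
                except LegacySystemError:
                    # The compensation itself failed - flag for manual cleanup.
                    print("manual cleanup needed:", system.name, record)
            raise

    if __name__ == "__main__":
        systems = [LegacySystem("system-A"), LegacySystem("system-B"),
                   LegacySystem("system-C", fail=True)]
        try:
            write_enrollment({"ssn": "000-00-0000", "name": "Jane Doe"}, systems)
        except LegacySystemError as failed:
            print("enrollment failed at", failed)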

I have worked on projects that were delayed by a year or more because of integration problems with a single legacy database. For example, one project called for consolidating two customer databases. One of them had four fields for the address, and the other had three. Trying to match addresses across them resulted in a large binder full of business rules which had to be coded and tested, and in the end we had something like 50,000 addresses that were flagged as untranslatable and humans had to be hired to manually consolidate them.
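A toy version of that address-matching exercise, just to show the shape of it. The field names and ‘business rules’ below are invented; the real rule set filled a binder:

    # One database splits addresses across four fields, the other across three.
    # Normalize both, try to match, and flag whatever the rules can't handle
    # for manual review. All field names and rules are illustrative only.

    import re

    def normalize(*parts):
        """Crude normalization: uppercase, collapse whitespace, expand a couple
        of common abbreviations. Real rule sets run to hundreds of rules."""
        text = " ".join(p for p in parts if p)
        text = re.sub(r"\s+", " ", text.strip().upper())
        for pattern, full in ((r"\bST\b\.?", "STREET"), (r"\bAVE\b\.?", "AVENUE")):
            text = re.sub(pattern, full, text)
        return text

    def consolidate(four_field_rows, three_field_rows):
        """Match records between the two databases; return (matched, unmatched)."""
        index = {normalize(r["addr1"], r["addr2"], r["city"], r["zip"]): r
                 for r in four_field_rows}
        matched, unmatched = [], []
        for r in three_field_rows:
            key = normalize(r["street"], r["city"], r["zip"])
            if key in index:
                matched.append((index[key], r))
            else:
                unmatched.append(r)   # goes in the binder for the humans
        return matched, unmatched

    if __name__ == "__main__":
        db_a = [{"addr1": "123 Main St.", "addr2": "Apt 4", "city": "Denver", "zip": "80202"}]
        db_b = [{"street": "123 Main Street Apt 4", "city": "Denver", "zip": "80202"},
                {"street": "9 Elm Av", "city": "Denver", "zip": "80203"}]
        matched, unmatched = consolidate(db_a, db_b)
        print(len(matched), "matched,", len(unmatched), "flagged for manual review")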

This is the stuff that’s completely opaque to high-level architects, and usually isn’t discovered until detailed design and coding are underway, when it’s really expensive to change.

Then there’s the issue of bugs - both on the new system and in the legacy systems. Sometimes adding a new interface to a legacy system will expose bugs that were not found before. Transactional bugs between multiple systems can be a bitch to find and very difficult to fix if the problem is in some old legacy system that no one really understands any more. We’re already hearing about these kinds of bugs - insurance applications missing data, cancellation transactions being received before the application even arrives, that sort of thing.
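The ‘cancellation before the application’ case is a classic out-of-order message bug. Here’s a sketch of one defensive fix on the receiving side - the message format is made up, the point is just parking early arrivals instead of dropping them:

    # Sketch of handling a cancellation that arrives before its application.
    # Naive code drops or misapplies it; this parks it until the application
    # shows up. Message fields here are hypothetical.

    from collections import defaultdict

    class Enrollments:
        def __init__(self):
            self.active = {}                     # applicant_id -> application
            self.pending = defaultdict(list)     # messages that arrived too early

        def handle(self, msg):
            if msg["type"] == "application":
                self.active[msg["applicant_id"]] = msg
                # Replay anything that arrived before the application did.
                for early in self.pending.pop(msg["applicant_id"], []):
                    self.handle(early)
            elif msg["type"] == "cancellation":
                if msg["applicant_id"] in self.active:
                    del self.active[msg["applicant_id"]]
                else:
                    # Out of order: park it instead of throwing it away.
                    self.pending[msg["applicant_id"]].append(msg)

    if __name__ == "__main__":
        e = Enrollments()
        e.handle({"type": "cancellation", "applicant_id": "A-1"})  # arrives first
        e.handle({"type": "application", "applicant_id": "A-1"})   # arrives second
        print("active enrollments:", len(e.active))                # 0, as intended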

We’re talking about dozens or maybe hundreds of different systems here, and the interface spec is a hell of a lot more detailed than a simple address translation.

The amazing thing is that there was no one put in charge of the ‘big picture’ here - they just published an interface, handed it off to all the various stakeholders, and assumed that everything would just work. Not only that, they apparently didn’t even start integration testing until a month before the rollout. That’s insane.

Back-end integration problems with heterogeneous legacy systems are probably the #1 killer of big digitization projects. That’s what killed the $700 million FBI digitization project. It’s what killed the California DMV’s license consolidation project in the 1990’s. It’s what caused Canada’s long gun registry to go 1000X over budget.

We don’t have a lot of visibility into the back-end code or the problems they’re having, as it’s not user inspectable like the front end code was. But if the quality of the front end code is anything to go by, I wouldn’t take any bets on when this system is truly ready to go live. One possible answer is ‘never’. Plenty of other big IT projects have spun out of control and had to be cancelled.

Too bad the government is ‘all in’ on this and doesn’t appear to have any kind of mitigation strategy if they can’t make it work. It never should have gone live when it did, and now it’s politically risky to do the ‘right’ thing. They should have had a phased rollout strategy to avoid having problems affect the whole country. They should have had mitigation plans in case of failure or large delays.

I have no idea what happens if they can’t get this system working by December, because cancellation notices are already going out on current policies that don’t meet the Obamacare requirements. I don’t know what happens if several million people lose their insurance and the systems aren’t in place to allow them to buy the new insurance. My suspicion is that we’re about to see the emergency hiring of thousands of people to attempt to manually control the flow of applications.

One more thing: As you know, there are some big killers of IT projects: poor feasibility analysis, poor initial requirements, and ‘scope creep’. This project has all three of those, in spades. The 2,000 pages of Obamacare turned into tens of thousands of pages of detailed requirements without any of those authors having a clue as to how hard it would be to implement them. The requirements have been changing constantly throughout this project. The Obama administration held on to the last round of requirement changes until after the election to avoid having a debate about them. Those requirements then had to be turned into IT requirements and architected, which means the actual development teams didn’t get their full requirements until just a few months ago. For a project of this scope, that’s crazy.

Looks like somebody is actually targeting HEALTHCARE.GOV

That link says:

There was also the bungled Denver Airport baggage-handling system that cost umpty-million rasbuckniks and delayed the opening of the new airport by 16 months - and even then the airport only opened because they decided to run a stripped-down version in just one terminal, which they were able to kinda-sorta get running.

See also: DENVER INTERNATIONAL AIRPORT’S AUTOMATED BAGGAGE SYSTEM FAILURE for more; a Google search turns up many other cites too.