Are you kidding? I work with big data systems and data integration as well, and their back-end requirements would scare the crap out of me. Have you actually looked at the architecture of the back end? Their ‘hub’ has to connect to literally dozens of legacy systems, some of them antiquated, and the records coming out of all of them have to be normalized. It also has to connect to numerous insurance providers, many of whom won’t even have fields in their own databases for some of the required data. On top of that, a lot of this has to be done transactionally, with the ability to unwind a transaction if any of the writes to those legacy systems fails - and some of those systems may not even have interfaces capable of handling transactions.
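Just to make the transactional part concrete, here’s a rough sketch (in Python, with made-up names - this is not their actual code) of the kind of compensating-write logic a hub has to do when the legacy systems behind it can’t participate in a real distributed transaction:

```python
# Hypothetical sketch: writing one enrollment across several legacy systems
# that have no shared transaction support, so every successful write needs a
# hand-rolled compensating "undo" if a later write fails.

class LegacyWriteError(Exception):
    pass

def log_for_manual_review(system_name, enrollment):
    # Stand-in for whatever queue real operators would have to work from.
    print(f"MANUAL REVIEW: undo failed on {system_name}: {enrollment}")

def apply_enrollment(enrollment, systems):
    """`systems` is a list of (name, write_fn, undo_fn) tuples, one per
    legacy back end. Writes are applied in order; on failure, every
    previously successful write is undone in reverse order."""
    completed = []
    try:
        for name, write_fn, undo_fn in systems:
            write_fn(enrollment)              # may raise LegacyWriteError
            completed.append((name, undo_fn))
    except LegacyWriteError:
        for name, undo_fn in reversed(completed):
            try:
                undo_fn(enrollment)           # best-effort compensation
            except LegacyWriteError:
                # The truly ugly case: the undo itself fails, and a human
                # has to reconcile the half-written record by hand.
                log_for_manual_review(name, enrollment)
        raise
```

Notice that even this toy version ends with ‘kick it to a human’ - and that’s the part that doesn’t scale.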
I have worked on projects that were delayed by a year or more because of integration problems with a single legacy database. For example, one project called for consolidating two customer databases: one had four fields for the address, the other had three. Trying to match addresses across them produced a large binder full of business rules that had to be coded and tested, and in the end something like 50,000 addresses were flagged as untranslatable and people had to be hired to consolidate them manually.
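For a flavor of what those business rules look like once they become code, here’s a hypothetical sketch of collapsing a four-line address into a three-line schema, with anything that can’t be merged safely flagged for a human. The field names and the single merge rule shown are invented, but that flagged pile is exactly where our 50,000 manual records came from:

```python
# Hypothetical sketch: mapping a 4-line address record into a 3-line target
# schema. Anything that can't be collapsed mechanically gets flagged for
# manual consolidation.

def consolidate_address(rec):
    """rec has addr1..addr4; the target schema only has addr1..addr3.
    Returns (normalized_dict, None), or (None, reason) when a human is needed."""
    lines = [rec.get(k, "").strip() for k in ("addr1", "addr2", "addr3", "addr4")]
    non_empty = [line for line in lines if line]

    if len(non_empty) <= 3:
        padded = non_empty + [""] * (3 - len(non_empty))
        return {"addr1": padded[0], "addr2": padded[1], "addr3": padded[2]}, None

    # Example business rule: a unit/suite line can be folded into line 1.
    if non_empty[1].upper().startswith(("APT", "SUITE", "UNIT", "#")):
        merged = [non_empty[0] + " " + non_empty[1]] + non_empty[2:]
        return {"addr1": merged[0], "addr2": merged[1], "addr3": merged[2]}, None

    # Everything else: no safe automatic merge, flag for manual review.
    return None, "four distinct address lines, no safe merge rule"
```

Multiply that one rule by a binder’s worth of exceptions and you get a sense of the problem.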
This is the stuff that’s completely opaque to high-level architects, and it usually isn’t discovered until detailed design and coding are underway, when it’s really expensive to change.
Then there’s the issue of bugs - both in the new system and in the legacy systems. Sometimes adding a new interface to a legacy system exposes bugs that were never found before. Transactional bugs between multiple systems can be a bitch to find and very difficult to fix when the problem is in some old legacy system that no one really understands any more. We’re already hearing about these kinds of bugs - insurance applications missing data, cancellation transactions arriving before the application itself, that sort of thing.
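The cancellation-before-application case is a classic out-of-order delivery problem, and the insurers’ systems have to defend against it somehow. Here’s one rough way to do it (park the orphaned cancellation until its application shows up) - the event shapes and names are my own invention, not anything from the actual enrollment feed:

```python
# Hypothetical sketch: an insurer-side handler that tolerates a cancellation
# arriving before the application it refers to, by parking it until the
# matching application shows up instead of rejecting or dropping it.

pending_cancellations = {}   # application_id -> cancellation event
applications = {}            # application_id -> application record

def handle_event(event):
    app_id = event["application_id"]
    if event["type"] == "application":
        applications[app_id] = event
        # If the cancellation beat the application here, apply it now.
        if app_id in pending_cancellations:
            cancel(app_id)
            del pending_cancellations[app_id]
    elif event["type"] == "cancellation":
        if app_id in applications:
            cancel(app_id)
        else:
            # Out-of-order arrival: hold it rather than lose it.
            pending_cancellations[app_id] = event

def cancel(app_id):
    applications[app_id]["status"] = "cancelled"
```

And that’s the easy version - in real life you also need timeouts, retries, and a manual-review queue for anything that never reconciles.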
We’re talking about dozens or maybe hundreds of different systems here, and the interface spec is a hell of a lot more detailed than a simple address translation.
The amazing thing is that no one was put in charge of the ‘big picture’ here - they just published an interface, handed it off to all the various stakeholders, and assumed that everything would just work. Not only that, they apparently didn’t even start integration testing until a month before the rollout. That’s insane.
Back-end integration problems with heterogeneous legacy systems are probably the #1 killer of big digitization projects. That’s what killed the $700 million FBI digitization project. It’s what killed the California DMV’s license consolidation project in the 1990s. It’s what caused Canada’s long gun registry to go 1000X over budget.
We don’t have a lot of visibility into the back-end code or the problems they’re having, as it’s not user-inspectable the way the front-end code was. But if the quality of the front-end code is anything to go by, I wouldn’t take any bets on when this system is truly ready to go live. One possible answer is ‘never’. Plenty of other big IT projects have spun out of control and had to be cancelled.
Too bad the government is ‘all in’ on this and doesn’t appear to have any kind of mitigation strategy if they can’t make it work. It never should have gone live when it did, and now it’s politically risky to do the ‘right’ thing. They should have had a phased rollout strategy to avoid having problems affect the whole country. They should have had mitigation plans in case of failure or large delays.
I have no idea what happens if they can’t get this system working by December, because cancellation notices are already going out on current policies that don’t meet the Obamacare requirements. I don’t know what happens if several million people lose their insurance and the systems aren’t in place to allow them to buy the new insurance. My suspicion is that we’re about to see the emergency hiring of thousands of people to attempt to manually control the flow of applications.
One more thing: as you know, the big killers of IT projects are poor feasibility analysis up front, poor initial requirements, and ‘scope creep’. This project has all three, in spades. The 2,000 pages of Obamacare turned into tens of thousands of pages of detailed requirements, written without any of the authors having a clue how hard they would be to implement. The requirements have been changing constantly throughout the project. The Obama administration held the last round of requirement changes until after the election to avoid having a debate about them. Those requirements then had to be turned into IT requirements and architected, which means the actual development teams didn’t get their full requirements until just a few months ago. For a project of this scope, that’s crazy.