Which software company(s) wrote the Obamacare .gov system?

I find it interesting that in all the brouhaha about the colossal failure of the Obamacare Internet roll-out, the prime contractors have not even been identified, let alone held up to account.

For comparison–this is not a political analogy, just a logical one, so cool your jets–when a civilian plane or, worse, an entire military procurement product series fails, yes, blame is cast on the airline or purchaser, but then the airplane manufacturer or weapons contractor is taken to the woodshed. Loudly, publicly in the media, and in detail.

Leaving aside why that hasn’t been done, which says a lot, I think, about the still-prevalent public opinion that “computers are magic,” who on the technical side screwed up? Obviously managers at vendors and purchasers are blaming each other, but I’m talking vendors.

The CGI Group, some Canadian Information Tech company.
However, 8 million people trying to sign on at once will swamp any site.

As far as the software is concerned, the front end is a CMS-free static site built with Jekyll, developed by this company:

http://developmentseed.org/blog/new-healthcare-gov-is-open-and-cms-free/
Obviously it failed to scale well at first.
However, other parts were designed elsewhere. Kaiser Health has this article from a few months ago:

*Kaiser Health News got an early look at Obamacare software that will be deployed in Minnesota, Maryland and the District of Columbia. Connecture is developing the Web interface for consumers under 65 who don’t have employer-based health coverage to shop and sign up for a plan in those states. Connecture isn’t handling the software that qualifies you to buy under the health act or verifies your eligibility for subsidies. Other companies are doing those. Connecture’s piece is the point-of-sale program, the one that steers you through insurance choices and closes the deal.*
Kaiser

Based on a little investigation, it looks like CGI Group Inc. was given the contract, but I don’t know what exactly that covered.

There is a front end system, and then processing on the back end that pulls from many different existing systems including individual state systems.

Here’s the deal:
Large software projects are hard.
Working with government and with large software/services contractors is also difficult (it’s hard to find good people there).

It all adds up to frequent failure.

And there are many possible points of failure in a software system of this size. Maybe the software is fine but the hardware is woefully undersized. Maybe the contractor specified what hardware was needed, and the budget was cut when it came to purchasing. Maybe the budget for performance testing was cut by the purchaser.

There are many parties involved in something like this and it’s tough from the outside to know where the fault lies. I’m sure there is plenty of finger-pointing happening between them all.

If they’d provisioned it to fit the initial rush, would it not then be overspecified (and overly costly) for normal use?

Yes, that is definitely true. There are ways to mitigate that, for example with hardware leasing at co-locations or dynamic platforms that scale based on demand, but those arrangements are more complicated with the government and add another party and failure point to the mix.
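For illustration, a demand-based platform works roughly like the sketch below: add capacity when load per server gets high, release it when traffic falls off. All the numbers and names here are made up for illustration, not any real cloud API:

```python
# Minimal sketch of threshold-based autoscaling. The thresholds and
# function name are hypothetical, purely to show the idea.

def desired_server_count(active_users,
                         users_per_server=500,
                         min_servers=2, max_servers=200):
    """Return how many servers should be running for the current load."""
    needed = -(-active_users // users_per_server)  # ceiling division
    return max(min_servers, min(max_servers, needed))

# During a launch rush the pool grows (here, capped at max_servers)...
assert desired_server_count(250_000) == 200
# ...and shrinks back once traffic returns to normal.
assert desired_server_count(5_000) == 10
```

The complication the post mentions is exactly this loop: someone has to own the thresholds, the caps, and the billing for servers that come and go, which is one more contract and one more failure point.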

I tried to create an account a month ago when the site first went live, and it didn’t work then either.

This article lists three companies:

“CGI Federal for the website itself,

“Quality Software Systems Inc. (QSSI) for the information ‘hub’ that determines eligibility for programs and provides the data on qualified insurance plans, and

“Booz Allen for enrollment and eligibility technical support.”

They planned for 50,000 to 60,000 simultaneous users. They got 250,000.
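To put that gap in perspective, capacity planning for concurrency usually comes down to Little's law: concurrent users ≈ arrival rate × average session length. The specific numbers below are purely illustrative, not from the actual project:

```python
# Little's law sketch: concurrent_users = arrivals_per_sec * avg_session_secs.
# Illustrative numbers only -- not the real healthcare.gov estimates.

def concurrent_users(arrivals_per_second, avg_session_seconds):
    return arrivals_per_second * avg_session_seconds

# If the designers budgeted for ~100 new visitors/sec with 10-minute sessions:
planned = concurrent_users(100, 600)   # 60,000 concurrent users
# ...but launch-day arrivals ran more than 4x higher:
actual = concurrent_users(417, 600)    # ~250,000 concurrent users
assert planned == 60_000
```

The point is that a 4x miss on the arrival-rate estimate turns directly into a 4x miss on the concurrency the system must sustain.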

250,000 simultaneous users? Piffle. Sounds like a normal day at my employer. IIRC, our Active Directory environment handles three to four million logins and authentications a day.

On a daily basis, Google gets over a billion visits, and on Cyber Monday last year, Amazon sold roughly 27 million things - 306 items sold per second. After that, the tracking systems at UPS and FedEx were able to cope with all of the resulting boxes that Amazon shipped.
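Those headline figures at least hang together, as a quick back-of-envelope check shows (nothing here beyond the numbers quoted above):

```python
# Sanity check: 306 items/second sustained over a 24-hour Cyber Monday.
items_per_second = 306
seconds_per_day = 24 * 60 * 60          # 86,400
daily_total = items_per_second * seconds_per_day
assert daily_total == 26_438_400        # roughly 27 million, as quoted
```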

Leave it to the government to build a system that can’t cope with relatively low traffic compared to the big boys. At the very least, they probably should have deployed this thing on a platform like Amazon’s EC2 - an “elastic” server environment that can throw more servers into play on the fly and pull them out when demand slacks off.

Comparing the site to Google or Amazon is as useful as comparing it to someone’s personal blog. The only important comparison is what traffic it was designed and spec’d for versus what traffic it actually got. As someone noted in a different thread, if the government built this to support Google-level traffic, everyone would be pissed off about the ridiculous waste of money. (Of course, criticism of the traffic estimates is completely fair.)

And deploying apps on something like EC2 isn’t as simple as taking your existing app and throwing it on the server. It has to be architected in such a way that it can take advantage of the elasticity of the platform.
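Concretely, the usual requirement is that the app servers be stateless: session data lives in a shared store rather than in any one server's memory, so a freshly added instance can serve any user's next request. A toy illustration, with the shared store faked as a dict (in production it would be a database or distributed cache):

```python
# Toy illustration of why elasticity needs stateless app servers.
# The "shared store" is faked as a module-level dict; the class and
# method names are hypothetical.

shared_store = {}

class AppServer:
    """A stateless server: it keeps no session data of its own."""
    def login(self, session_id, user):
        shared_store[session_id] = {"user": user}
    def whoami(self, session_id):
        return shared_store.get(session_id, {}).get("user")

# A user logs in on one instance...
a, b = AppServer(), AppServer()
a.login("sess-1", "alice")
# ...and a newly spun-up instance can still serve them.
assert b.whoami("sess-1") == "alice"
```

An app that instead keeps sessions in local memory breaks the moment the load balancer routes a user to a different (or brand-new) server, which is why "just throw it on EC2" doesn't work after the fact.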

And yet it wasn’t the government that ‘built’ anything. According to standard free-market slimmed-down principles, it was not a government department that built it, not a privatised one even; the government merely flung money at the problem and hired their friends.

Apart from CGI, some parts were outsourced to a rather unsavory firm called Serco, which may be either British, American or Indian, depending on which time of the day you ask; and dear old Oracle: what could go wrong?

Plus, other factors may have come into play. Quite obviously, no-one would be petty enough to try and sabotage the implementing of the Act from spite against either it or Mr. Obama; but in the case of the New York local Exchange, purposefully:
*The head of New York state’s newly launched health insurance exchange said Tuesday that her agency is looking into the cause of surprisingly high web traffic to the exchange’s website.*

“Since its launch, nystateofhealth.ny.gov has gotten approximately 10 million web visits, far more than was anticipated, causing login problems for users,” Donna Frescatore, executive director of NY State of Health, said in a statement. “In response to these issues, operators at the state’s call center have assisted thousands of callers while our technicians have increased the site’s capacity and are looking into the cause of this abnormally high traffic.”

According to Frescatore, more than 9,000 New York business owners and individuals were able to use the website to shop for health insurance on Tuesday. Frescatore did not elaborate about what the cause of the high traffic may have been.

Talking Points

*Outside IT experts speculated that New York’s astronomical numbers might reflect repeated “refreshing” by users. But it was not clear why that would occur in New York alone. Arkansas had about 16,000 visitors in the same period, and Connecticut’s exchange logged 34,500 visitors by mid-afternoon.*
Fiscal

It will probably be ascribable to something innocent enough, like the alignment of the stars, or bored New York toddlers clicking millions of times on their parents’ keyboards.

It’s Jerry’s day job. It’s why he doesn’t have time to fix the SDMB’s issues.

So it would be crazy for me to think that the government might some day be sensible enough to build its own elastic/cloud platform to support all of the .gov sites? I hear the NSA has an extremely robust datacenter… :smiley:

Oracle did most of the development on Oregon’s exchange site (one of the first under development). It’s important to bear in mind that a major problem for healthcare.gov is that it has to play nice with all the state exchanges, some of which were implemented at the last minute.

I think you’re OK thinking it might be sensible. I think you’ve gone off the deep end if you think the government can actually accomplish it in a time frame where it isn’t out of date by the time they finish.

And nice job, now the NSA is after you.

Plus fires; massive, destructive fires.

*There have been 10 meltdowns in the past 13 months that have prevented the NSA from using computers at its new Utah data-storage center, slated to be the spy agency’s largest, according to project documents reviewed by The Wall Street Journal.

One project official described the electrical troubles—so-called arc fault failures—as “a flash of lightning inside a 2-foot box.” These failures create fiery explosions, melt metal and cause circuits to fail, the official said.

The causes remain under investigation, and there is disagreement whether proposed fixes will work, according to officials and project documents. One Utah project official said the NSA planned this week to turn on some of its computers there.

The first arc fault failure at the Utah plant was on Aug. 9, 2012, according to project documents. Since then, the center has had nine more failures, most recently on Sept. 25. Each incident caused as much as $100,000 in damage, according to a project official.

It took six months for investigators to determine the causes of two of the failures. In the months that followed, the contractors employed more than 30 independent experts that conducted 160 tests over 50,000 man-hours, according to project documents.*

WSJ

Terrorists are not actually needed to do the vital sabotage work the Agency is set up to combat.

Ok, but they had plenty of time to design and build this system. They had to know it was going to get extremely heavy usage in the first couple of days, and that frustrated users during that time would be politically damaging (we’d be hearing a lot more mockery of this failure if we weren’t in the middle of The Great Government Failure of 2013). In that situation, I know I would have insisted on using a dynamically scalable platform (like EC2) from the ground up. This isn’t some startup that’s going to be lucky to get a thousand users total; it’s a government service providing something the public likely finds vitally important and faces tax penalties for not obtaining. I would have made sure it was scaled all the way up for the launch, then gradually ramped down as the initial rush subsided.
I’m a professional web developer, but I don’t think I’m specially knowledgeable about this sort of thing compared to other professionals in the field. I’m not sure how they managed to look over the specs for this project and not come to the same conclusion.

I’m related to someone who worked on at least one of the exchanges (as a project/requirements manager); all of my info comes from him. Two points to consider:

Two or three big companies may have gotten the contract(s), but there were numerous subcontractors. It is impossible to point to any specific points of failure; too many companies were involved. The prime contractors will probably wind up getting most of the blame, but whether that’s fair or not is a difficult question to answer.

The government changed its requirements constantly – and that was going on as recently as this summer. It’s hard enough to design and build a system when the requirements never change.