The industry is a mess; according to the Standish Group, in 1995 the US government and businesses spent approximately $81 billion on cancelled software projects, with a further $59 billion spent covering overbudget projects. They claim that in the USA only one sixth of all projects are completed on time and within budget, a third are cancelled outright, and the remaining half are considered “challenged”. Of that half, the average project ran about 189% over budget, finished 222% behind schedule, and delivered only 61% of the required features.
The fundamental problem with the software industry is that the businesses funding software development aren’t interested in long-term savings unless all the short-term expenses can be recovered pretty much immediately. I don’t know what you might have in mind to address that, but it probably won’t work.
There are other problems above and beyond that one, but addressing them isn’t going to make any difference until you solve the big one.
Well, really, all you’re getting from this report is that the software industry was in a mess in 1994. It probably was, given how fast it was growing, and that was before the dot-com bust, when people finally figured out that software development, like any investment in infrastructure, didn’t break all the rules.
What do you mean by ‘chartering’? And are you actually suggesting mandatory language/technology changes across the board? Like a government regulation? I hope we can all agree that government would certainly not know best.
If large companies with IT departments are throwing money away on ill-thought-out projects, too bad for them. That money goes somewhere, so it’s not a complete waste to the economy. Software houses with this sort of mentality will lose out to software houses that are well managed.
I predict that this problem will mostly go away (if it hasn’t already). The industry will mature, many more people will actually have extensive experience with working systems. I bet automobiles in 1900 weren’t that hot either.
All in all, any possible enforced solution to this problem is likely to be far more damaging than the problem itself.
Outsource the coding jobs to subcontractors in 3rd World countries where smarter people will work like dogs day and night for a fraction of the cost! Brilliant! What? You say they already…? Oh…rats!
Speaking as a software engineer, I’d say the biggest problem is that planning and estimation of software projects is still largely voodoo. Oftentimes the folks allocating the budget and laying out the schedules for SuperUberProject X have little experience in actually building complex systems, and end up “massaging” numbers until they’re something the client can swallow. By the time the leaders realize that the engineers can’t change the laws of reality and write three million lines of code by next Friday, the choice is for the customer to either push on with more money, or just cut their losses altogether.
Granted, this isn’t the only cause of the problem – there are other factors, such as increasingly-complex system applications, the continued desire to use new and untested technologies/languages/methodologies, and the relative immaturity of the software engineering field at this time – but IMO the biggest problem is unrealistic expectations to begin with. Software development at this time is still more of an art form than a discipline, but nobody wants to treat it as such.
My senior year of college in 1972-73 I took a class where a big portion was about the “software crisis” - in other words the software industry (no software engineering then) was a mess. I’d say things are better now but not because of software engineering, but rather because so much work is done today using commercial packages rather than custom code.
My guess as to the reasons for the current crisis is that we still don’t have requirements planning locked down (and never will, as long as we assume that a set of requirements drawn up before a project starts will not change), that the planning is faulty (often because of the first problem), that management is often clueless, and, finally, that software disasters are not nearly as photogenic as bridges falling down.
In addition, the malleable nature of software exacerbates this problem. It causes companies to skip over such vital phases as requirements planning and architectural design, or to pay nothing but lip service to these stages. Companies delude themselves into thinking that software bugs can simply be fixed as they are found.
It doesn’t help that computer programming is even taught at the elementary and high school levels. This causes managers to think that computer programming is easy. In a sense, it is; after all, even a young child can develop some fairly impressive software programs. However, developing artful, organized and maintainable code is much more difficult, and the vast majority of programmers never master that particular skill.
I agree with your points and would add the following:
Changing business conditions change requirements. We scrap stuff when this happens.
Individual personalities (e.g. president, ceo, managers, etc.) are a major factor. Even though they appear to buy in to the project, do their actions support it? I have been involved in ERP implementations of the same software with the same team in almost identical companies with wildly different results due to the personalities involved.
If there is a profit motive, meaning you don’t get the money for the contract regardless, only if you succeed, then some projects could be more successful. For example, the IRS has failed 2 or 3 times in its rewrite, while Intuit created a system that seems to work (granted, it’s probably less complex than the entire IRS set of systems).
If you’re a doctor in the UK, it’s mandatory that you are a member of the BMA. If you’re a lawyer, you must be a member of the Law Society. If you’re a software engineer, why should it not be mandatory that you are registered with the BCS?
To what end? What effect would being a member of a professional organization have on software projects in the real world? This is a non-starter because doctors and lawyers are in a much different occupation than software engineers. Beyond the difference in what is at risk, software has rapidly changing technology and standards, ever-improving hardware and expectations, and orders of magnitude more participants. For the most part, a doctor is a doctor, but a web graphic designer is nothing like a database analyst.
I don’t see software as being any better or worse than mechanical engineering, graphic design, advertising, or any number of other professions. You first have to show that more software projects fail to meet expectations than other projects. How do you measure that? Then you need to show that any particular solution will in fact fix the problem without putting an undue brake on the development of new and improved software/applications/technologies.
Another problem is that companies are in a big hurry to get past the estimating and feasibility stage of a software project for accounting reasons - once the project is green-lit and code is being cut, the company can count money spent as an investment. Before that point, it’s an expense.
And there’s a real problem in that management needs to have numbers they can use for planning as early as possible, and therefore demand estimates from the engineers up front, but often you can’t do an estimate until you’ve essentially done the design. I can’t count the number of times I’ve been asked to come up with an estimate for completion in two days for a feature that is going to take weeks to design and months to code. If you haven’t finished designing something, you can’t know how long it will take to code. So, we essentially wind up using second-order approximations like gut feeling, or some rough metric like a function point analysis based on a sketchy outline of what has to be built, or whatever.
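For what it’s worth, the function point count I mean is nothing magic: just a weighted sum over the system’s inputs, outputs, inquiries and files. Here’s a minimal sketch — the component counts and the hours-per-point rate are invented for illustration, though the weights are the standard IFPUG “average complexity” values:

```python
# Back-of-the-envelope unadjusted function point (UFP) count.
# Component counts below are hypothetical; weights are the classic
# IFPUG average-complexity weights for each component type.
COMPONENTS = [
    # (component type, count, average-complexity weight)
    ("external inputs",          12,  4),
    ("external outputs",          8,  5),
    ("external inquiries",        6,  4),
    ("internal logical files",    4, 10),
    ("external interface files",  2,  7),
]

def unadjusted_function_points(components):
    """Sum of count * weight across all component types."""
    return sum(count * weight for _name, count, weight in components)

ufp = unadjusted_function_points(COMPONENTS)          # 166 points
# Assume (purely for illustration) a historical productivity rate
# of 10 staff-hours per function point:
effort_hours = ufp * 10
print(f"{ufp} function points, roughly {effort_hours} staff-hours")
```

Of course, the whole exercise is only as good as the sketchy outline you count from — garbage counts in, garbage estimates out.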
And I work for a good software shop. We’ve never had a project fail, and we usually hit our estimates within 10-20%. A while ago we acquired another software company, and their first product for us missed a six-month estimate by two years. We were astounded. After investigating what went wrong, we realized that this company had a culture of just hacking out code. Very little planning was done, requirements changed throughout the development process as people had ‘cool ideas’, large features would be coded without first having a prototype or a performance proof-of-concept, etc. Real seat-of-the-pants stuff.
I think there are just too many software companies that are like this, including some large and successful ones. I have a friend who was a senior engineer for a very large and well-known software company, and he used to tell me horror stories of managers radically changing requirements just before beta releases, or managers actually going into the codebase on a weekend before a release to ‘code up a cool feature I just thought of’.
But even the best and most disciplined software shops have a tough time estimating software. The industry and the tools are still very immature.
My biggest headaches have been requirements. In many cases, the client wants to replace an old system or process, doesn’t understand what the existing system actually does, doesn’t want to be bothered with answering questions that will expose their ignorance, makes major changes to the requirements in the middle of the project, and declares the result a failure because they got what they asked for, not what they needed.
I’m not going to defend the software industry, but the OP’s 1994 cite is inadequate for a debate. In 11 years, software development has surely undergone some evolution, as has the success/failure rate. If the OP wants to discuss how to solve a problem, first define what the problem is today.
Sometimes it is even worse than this; sometimes a company doesn’t know what the code it wrote a few releases ago does. This is partially a result of turnover. If 80% of your developers and PMs leave within four years, there will only be one or two people left who remember that some obscure feature you put in four releases ago still exists.
I agree that complexity is a large part of the problem. But the complexity of the software reflects the complexity of the business or organization. Someone mentioned the IRS; any tax program will have to be as complex as the tax rules it implements. It wouldn’t surprise me if some loopholes in the tax code were discovered as bugs in the software.
If you’re going to force people to join a group, I believe the onus is on you to show why that should be necessary, not on anyone to show why it shouldn’t happen. Who has to pay for this group’s administration? I presume there are ethical and legal standards that these doctors and lawyers need to follow, and the group allows for the policing thereof. You can’t just tell me, “Oh, the solution is to form a group”. You need to suggest what these rules and regulations will be, and how they will be enforced. On that basis one can decide whether these rules and regulations are more beneficial than they are harmful.
I’m a project planner/scheduler. This is what I deal with on a daily basis. Most clients are idiots and demand the supernatural. Not only that, I don’t think any of them understand the concept of freezing requirements/scope in time to ensure that the plan can be feasibly executed in the required timeframe.
At the same time, some software engineers are not disciplined enough to follow through with the established plans/budgets, and not creative enough to do the “what if’s” needed to ensure that the requirements capture all the necessary functionality. Many folks look at planning disdainfully, as a tedious exercise or something.
It’s funny, in interviews/job applications where they ask you “Describe some examples of successful projects you’ve worked on”, all I can come up with is “I tried to tell them, but they didn’t listen”.
:o
Exactly right! That’s one reason why the term “software engineer” is often a misnomer. The vast majority of “software engineers” do a lot of programming, but no real engineering.
From the vantage point of a person who finds the whole “making code for other people” business very sordid, here are some possible reasons:
It’s not taught: Despite evidence to suggest that productivity is amazingly uncorrelated with experience, senior programmers still tend to lead teams. These senior programmers are more likely to be unaware of the latest advances in software engineering. Furthermore, while formal qualifications are pretty much mandatory these days, things used to be a lot more relaxed, so many of the senior programmers have not had formal education in SENG.
It’s not taught well: The way university courses are structured is not conducive to learning large-scale SENG techniques. Courses typically run for 4 months or less, with maybe 10 weeks of time to do a project. Projects this small do not benefit from many of the commonly accepted SENG techniques, such as Gantt charts, requirements planning and test cases, and such things become a hindrance rather than an effective tool. Graduates may come out of such a course with the impression that such tools are huge wastes of time. Furthermore, some of them actually are wastes of time if not fully understood and used properly. All of this leads to a hostility towards such methods.
It doesn’t matter anyway. Presumably, if SENG methods were effective, then those shops that used them would have a clear competitive advantage over those that didn’t. Shops that didn’t use them would shut down and be bought out by those that did and SENG techniques would be widespread. Since this is not the case, it appears that SENG techniques are not the competitive advantage most people assume them to be.