Over the course of the past 25 years, I’ve been involved in the delivery of all sorts of “bespoke” IT solutions. By which I mean usually some one-off project for a specific company to address some specific problem, e.g. “we need to rebuild our 20-year-old account management system” or “we’re doing this big data migration” or whatever.
I feel like they almost never complete on time or on budget.
They will try different pricing models (fixed time/fixed price, time and materials, etc.)
Different project management philosophies (waterfall, agile, XP, PRINCE2, ad hoc, whatever)
They will bring in all sorts of outsourcing partners (Deloitte, Cognizant, Accenture, whoever)
It doesn’t matter what the technology is (mainframe, client/server, web, cloud, no/low-code)
But it seems like, with a few exceptions, more often than not these sorts of custom IT projects end up going off the rails. Often significantly.
Has anyone figured out how to actually do them well?
I have been involved, in many different ways, in IT projects for over 25 years. I long ago realised that any industry that routinely dismisses its failings by saying, “Well, more than half of IT projects are over budget, over time or both,” has little or no prospect of reversing that trend.
Unfortunately it seems to me that whatever methodology is chosen, it is simply a way of organising failure.
1. Truly understand the project and all its requirements - including the ones the customer didn’t tell you or didn’t know about. Changing requirements in the middle of the project is a sure way for it to be late and expensive.
2. Get the very best people to work on it. Three crappy programmers are more expensive than one great one, even if the great one pulls down 4x the money of a crappy one.
If 2 is impossible, you’re screwed. If 1 is impossible, try to architect the system to handle changes. Otherwise, you’re also screwed.
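The arithmetic behind #2 can be sketched in a few lines. The salary multiple comes from the post above; the productivity multiple is a made-up illustrative assumption, not data:

```python
# Back-of-envelope cost comparison: one great programmer vs. three weak ones.
# GREAT_SALARY reflects the "4x the money" claim above; the output multiples
# are invented assumptions for illustration (real productivity spreads vary).

GREAT_SALARY = 4.0   # great programmer costs 4x a weak one's salary
WEAK_SALARY = 1.0
GREAT_OUTPUT = 5.0   # assumed: delivers 5x the usable output
WEAK_OUTPUT = 1.0

def cost_per_unit_output(salary, output):
    """Salary spent per unit of working, maintainable code."""
    return salary / output

great = cost_per_unit_output(GREAT_SALARY, GREAT_OUTPUT)
team_of_three = cost_per_unit_output(3 * WEAK_SALARY, 3 * WEAK_OUTPUT)

print(f"great programmer: {great:.2f} per unit of output")   # 0.80
print(f"three weak ones:  {team_of_three:.2f} per unit")     # 1.00
```

Even on these rough numbers the expensive programmer wins on cost per unit of output - and the sketch ignores the coordination overhead that makes a three-person team slower still.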
For the last project I did before I retired I had meetings with the users to gather requirements. It soon became obvious they had no clue, since they hadn’t done this kind of thing before. So I architected the system to handle change, and it worked. It helped that I knew the domain. Some things I would have bet $1,000 wouldn’t change did. But it was fun.
Completely agree with both of those. What I wanted to add is that for some time there was a sort of religion infecting the upper echelons of IT management whose dogma was that since it’s generally hard or impossible to achieve point #2, a good alternative is a highly structured approach to software development. This religion holds that strict adherence to the protocols of a rigid methodology like CMM (the “Capability Maturity Model”) will guarantee both quality software and the consistent reproducibility of said quality. What utter bullshit!
I call it a “religion” because you may as well decree that software quality can be guaranteed by sprinkling holy water on the requirements document and sacrificing a chicken under a full moon. But I have known large organizations to squander untold millions of dollars in pursuit of this non-existent mirage. The reality is, there is NO SUBSTITUTE for talent: in project management, architecture, system design, or coding. None. The best that methodologies can do, in the final analysis, is give you handy checklists to guide you along. This baseless faith in generic methodologies is much like believing that a pre-flight checklist is a substitute for a skilled pilot. No, it’s a potentially very useful adjunct to a skilled pilot, but of no value without one.
Absolutely, those two points are key. #1 is just as important as #2. Never assume that the client fully understands the requirements. If you’re going to assume anything, simply assume that the client is an idiot, until proven otherwise. There is no easier way to be doomed before you even begin than to have poorly specified requirements. One of the few good things about methodologies like CMM is that they do remind you to get the appropriate level of management buy-in and commitment as part of the project initiation.
You’ve got management buy-in until you don’t. Top management thinks that once people hold requirements meetings and write the requirements documents, they know the actual requirements. You’re right about why this is foolish.
I’ve seen fewer cases of management failing to buy in and support projects that were on track than of management supporting projects which should have been killed early. I was briefly on a hardware project that, had it been killed early, would have saved the company billions of dollars and prevented an embarrassing failure. Better to keep pretending things are working and hope for a miracle.
I did it a lot. My main secret was to bid on the job, get rejected, then wait however long it took for one or more companies to fail. After that the clients listened to reason about the time and money involved.
It’s also a lot like all the planes that land safely and don’t make the news. You only hear about the failures…
It’s even better when this kind of model is applied to elements of an organization having nothing to do with software. I worked for a company (no names but you’d recognize it with even a passing familiarity with the aerospace industry) which dictated that all divisions and programs would be “CMMI Level 5 compliant”. It was an extraordinary waste of time and effort which did not even end up being a competitive advantage on proposals after the prospective government customers decreed that it would no longer be a factor in proposal evaluations.
I think part of the problem is that these “enterprise wide integrated product and process data management solutions” are so conceptually vast that no one small body of people can really understand all of the requirements and interfaces, and attempts at systems engineering and requirements management end up being window dressing on a puppet show where people at the working level are often making things up as they go just to show some progress. Managing even a modest-sized implementation of this kind of system is a specialty unto itself that also requires discipline knowledge, not just being able to parrot out systems engineering or Agile or whatever jargon. So, of course, estimating time and budget is a shot in the dark, and competitive pressure also biases toward underbidding to win the contract.
And, in my experience, any program where you have to bring in “partners” rather than suppliers and subcontractors is guaranteed to spiral out of control as said partners disagree with each other philosophically and on detail issues. No ship is simultaneously commanded by two or more skippers, and no effort to accomplish anything of practical merit should be run by committee.
Sure, that’s the theory. And it seems like common sense. But from my experience:
The customer often doesn’t know all the requirements. They might not know any beyond “replace this”.
The solution is often sold long before those requirements materialize. Usually by salespeople with no interests beyond making the initial sale.
Even once you know the requirements, it can be difficult estimating how long individual tasks take.
Most companies don’t want to pay for “great programmers”. They end up paying for cheap programmers from firms like Cognizant or Wipro.
Great programmers might not know anything about what the systems they are building are actually doing. Is it better to bring in a great programmer who is unfamiliar with trading systems or a decent programmer who has built three of them?
Also, requirements gathering costs time and money. Architecting systems properly costs time and money. Project management costs time and money. Incorporating buffers into your plan to account for risk costs time and money. Companies often look at these activities as superfluous, but they are part of the process.
Which also reminds me of something else that almost no one ever thinks of. Client companies tend to work at a different pace than the fast-moving startups and technology consultancies that build their systems. My client is pushing us to start working ASAP. I need them to provide a data dictionary before we can begin. They might have something for us in a week (which means 3 weeks). I might get stuck with a project plan that scopes out 2 weeks of requirements gathering, and it may take the client a month to review and sign off on those requirements, assuming they don’t make changes.
That’s a big problem. It renders all the people involved into functional idiots. Salespeople just want to make their sale. The client often only knows how to do the job they’ve done for years. They don’t know what the new process should look like. The technology people don’t know the business. Most of the team may come from different outsourcing firms where they are just sort of plugged into the project.
Past couple places I’ve worked, they use the term “partners” but really they are just staff augmentation subcontractors.
I’ve worked on plenty of projects that were either delivered on time, or overdelivered (that is, we had extra time left over so worked on additional features and/or polishing beyond what we promised we would deliver).
In terms of the projects that slipped, the main reason has been the specifications; additional requirements that the customers themselves were not aware of (and this is not the customers’ fault; it’s up to the product manager, product owner and dev team to tease out all these details). Agile, done right, is a good way to reduce the risk and cost of this happening.
The other cause is poor planning in terms of the project schedule. This is a bigger danger in waterfall.
It used to be common to just assume everyone would go on crunch time for a couple of weeks to iron out bugs, and that this would be sufficient. It isn’t. But I haven’t seen this for a while.
In fact, while I only see one small slice of the whole industry, I have to say my impression is the opposite of the OP. Software engineering has gotten ever more professional IME. Maybe it is just that people’s expectations have risen too?
I regularly get asked by non-engineer clients and friends to make prototype apps for them as a favor that would take a year or more for a whole development team to implement.
When I was a software engineer for 911 systems, we nearly always delivered custom projects on time and under budget. (In the later years after we were sold to one of the major defense contractors who then chased the multi-million dollar contracts, our little group who continued to work with our established state and local clients continued to be profitable while the multi-million dollar contracts were not and occasionally landed in court.)
The way we did it was a mixture of prototyping and “extreme programming” with no pretense at all of using any methodology. My team had no QA people and no business analysts. Projects were composed of one project manager and one engineer for any project up to $$,$$$, and one project manager and several engineers above that. We knew our client base as we’d worked with most of them for upwards of 30 years, and we also knew their customized systems just as well. The engineers worked directly with the clients and the project managers to identify the work that was needed and then to test and implement it, and then do UAT and post-install support.
However, I suspect that a large part of our success with this was the platform we were on: OpenVMS on Vaxes, Alphas and (later) Integrity servers. There was no “turn it off and on again” to fix things but we knew the systems so well we could debug and fix the code. The platform was also so robust that we had several clients originally implemented on 1970’s Vaxes that we later (in the early 2000’s) upgraded to Integrity servers with only a minor amount of re-coding needed. Basically we swapped the hardware out from under the systems. I don’t think that’s possible with Microsoft or Linux technology although I could be wrong, but it seems like software projects today are a thousand times more complex.
Since then, when I became an IT business analyst, I worked on all variations of Agile and Waterfall methodology projects and none of them went as well as my old work. Some were delivered on time and under budget, but still seemed way more complicated than I would have thought they needed to be. And all definitely required more staff: QA teams, more developers, BAs, and of course one or two PMs to coordinate the mess. Also, which I think is interesting, none of them adhered to a “pure” methodology, although a few teams claimed they did. (E.g. they liked to pound their chests and insist they were Agile, when there were aspects of Waterfall included, like having the clients sign off on requirements before development could start.)
As an IT project manager, I brought in a few projects on time, on budget and in scope. And many that weren’t. The one thing in common on the “successful” ones was the support of management up front to take the budget as given as necessary, to not permit scope creep, even a little, and to take a “this is important, you will get it done” tone with the project teams. The ones that were unsuccessful started out underfunded as the budget was “negotiated” (we will do this project but for 70% of what you say it will cost), had scope creep (or a poor understanding of the scope when it was budgeted and scheduled), and had a project team torn five different directions, for whom this project wasn’t the HIGHEST priority. Many, many times the subject matter experts would tell upper management early on what the problems were going to be and were shut down in favor of a pollyanna view of the project. Or they’d cut entire necessary organizations (like eSecurity - no one likes eSecurity - or Vendor Management) out of initial discussions because they knew those organizations would bring up issues that would make the project more complicated.
If you aren’t honest with yourself up front, there is no hope. But I came to believe that corporations don’t want honest up front because they’d never get approval for five years and $30M.
The bigger the project is, of course, the more places you will have variance in your costs and schedule. So it’s far more possible to deploy small pieces of code in the time you expect with the staff you estimate - and highly unlikely you will be able to migrate to a new ERP system on time and on budget. The small ones are likely to be internal projects. The big ones tend to be the ones consultants are brought in for.
I just quit the IT industry after 20+ years in that field, much of which was project administration and delivery. I agree that IT projects are tricky.
People want the moon on a stick, and they don’t want to pay much for it. They may actually want something that is impossible, or contains contradictory features. People want too many things at once - especially in organisations where project management and delivery is in-house, boss guy may commission a project and set people working on it, but having done that, his mind is now free to think about the next thing he wants, and then impose that over the top of the existing ongoing project.
People change their minds about what they want, and often, they do this in a way that fails to consider the impact on the totality of a project - their change makes perfect sense to them, of course.
The domain into which these projects must deliver is often complex.
Funds, resources, time, materials are finite.
Customer expectations are without limit, and are renewable.
Everyone wants something; nobody wants to give things, or wait and take their turn.
Yeah, the old “iron triangle of project management”. In reality, customers expect high quality, but the dates and times are typically set arbitrarily.
The reality is that from when a project is conceived through when a new application is actually built and put into use (along with the accompanying process updates and change management), there are so many variables and unknowns, it’s almost impossible to accurately forecast.
In many ways small projects are worse than large ones. When delivering a 5-year, $20 million enterprise system, some of the unknowns tend to smooth out over the duration of the project. Plus it’s typically large enough to get the attention it needs. And generally a few million over or under is a small error.
On a six-week project, having a key stakeholder out sick for three days and unable to sign off on requirements can throw the whole thing off schedule.
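The way unknowns “smooth out” on a big project is basically the law of large numbers at work, which a quick Monte Carlo run can sketch. The task counts and the slip distribution below are invented for illustration, and real-world slips are rarely this independent:

```python
import random

random.seed(0)  # reproducible runs

def simulate_project(n_tasks, trials=10_000):
    """Simulate total duration when each task is nominally 1 week
    but slips uniformly by anywhere from 0% to 100%."""
    totals = [sum(1 + random.random() for _ in range(n_tasks))
              for _ in range(trials)]
    mean = sum(totals) / trials
    var = sum((t - mean) ** 2 for t in totals) / trials
    return mean, (var ** 0.5) / mean  # mean and relative spread

for n in (6, 250):
    mean, rel_spread = simulate_project(n)
    print(f"{n:4d} tasks: mean {mean:6.1f} weeks, "
          f"relative spread {rel_spread:.1%}")
```

The 250-task total clusters far more tightly around its mean than the 6-task one does, which is one way to see why a three-day slip barely registers on a multi-year program but wrecks a six-week project. Correlated slips (one late dependency delaying everything downstream) erode much of this smoothing in practice.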
I’d say that a decent programmer who has implemented the same type of system three times on time and on budget is a great programmer for that system.
The IT departments I’ve known love to gather requirements. They get to produce a deliverable fairly easily, and requirements documents never crash. How good they are is something else.
That the customer doesn’t know their requirements doesn’t mean that the customer is not going to give you requirements. That’s problem one. Problem two is that most people have no idea whether a requirement is going to be easy or nearly impossible to implement.
Salesmen can never be trusted. We reviewed several virtual conference systems. The one we selected seemed fine except that we needed to be able to make last minute changes ourselves. The salesman said, no problem. When we talked to the lead developer it turned out that could be done only if we got the password that would let us meddle with every conference these guys ran.
It also goes the other way - as any software project team that got told to use Six Sigma knows. Six Sigma trainers without the slightest knowledge of statistics are always fun to listen to. In my experience it all came down to collecting data in surveys.
I can imagine what programmers must think about having to apply DMAIC and DMADV to their programming. The reality is that all of these methods and process are useful within the context they were developed to be used in, and some executive latched onto it as a magic cure-all for every problem and inefficiency. I had to do Six Sigma training (thankfully only to “Green Belt” level) and spent more time showing the instructor how to use Minitab correctly to calculate basic statistics than she spent lecturing the class…and I started without any knowledge whatsoever of Minitab.
Yes, the user surveys. I still don’t understand what that had to do with core Six Sigma methodology, which was essentially to use frequentist analysis to identify the biggest and easiest sources of quality escapes and then apply metrics to see how well the proposed fix improved them. We all had to do Six Sigma projects, and I think the one I was involved in had something to do with reporting facility issues or something similar. We had one guy on our team who was really gung-ho to do all things corporate training so we let him do all the planning (and most of the work) and then gave him a nice writeup in the report about how he was the key player on the team (absolutely true) and how valuable this effort would be to the company (completely false).
I have never used anything I learned in Six Sigma training in actual work…except for Minitab, because our statistician liked using it instead of S or R or the Matlab Statistics Toolbox. I guess it is okay; it is certainly better than trying to do complex statistical analysis in Excel, at least.
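That core step - rank the sources of quality escapes and attack the biggest first - is essentially a Pareto analysis, which fits in a few lines. The defect categories and counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical defect log: one entry per quality escape, by category.
defects = (["wrong requirements"] * 40 + ["coding error"] * 25 +
           ["config mistake"] * 20 + ["hardware fault"] * 10 +
           ["documentation"] * 5)

counts = Counter(defects)
total = sum(counts.values())

# Pareto view: categories ranked by count, with cumulative share,
# so the biggest and easiest targets surface first.
cumulative = 0
for category, n in counts.most_common():
    cumulative += n
    print(f"{category:20s} {n:3d}  cumulative {cumulative / total:.0%}")
```

No surveys required: the point is simply that the top two or three categories usually account for most of the escapes, so that is where the fix effort and the follow-up metrics should go.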
Similar to what @Dangerosa says above, how much of the over-budget and over-time is due to deliberate self-deception?
What I mean is, everybody involved knows from the outset that the schedule and budget are unrealistic, but nobody can say it in a meeting or put it anywhere official. Sometimes this may be because there is no way the necessary project will be approved at the realistic budget, but, sunk-cost fallacy be damned, once it is underway a budget overrun will be approved.
Or maybe it’s wishful thinking? In meetings and stuff everybody talks what they want to be true, but in private they talk estimates that are much closer to reality.
Though I do IT stuff, I’m not involved in that world, so I’m asking, does it happen that way?
In the academic and science world, I do know that lots of grants are filled with estimates that everybody involved, the investigators, reviewers, and funders, all know are wrong, but the other choice is to do nothing.
For example, through much of the 00’s genetic studies had sample sizes in the 1k-10k range. Everybody knew that wasn’t enough people to find answers, but the cost of genotyping limited sample sizes. So there was a bit of a shared fiction about effect sizes and such so that any work at all could get done.
I can answer that. Somewhere in the Black Belts’ little minds they realized that they needed to get a lot more data than the small scale projects could ever come up with. Surveys of customers generated lots of data and kind of made the customers (internal ones, for the training I got) think you cared about what they said. Useless for sure. The software stuff we did was pretty much all one of a kind, and the furthest thing from the application domain that you can get.
I’ve been involved with real manufacturing, which pretty much no one else in the IC companies I moved to had been.
I had a kind of real six sigma project once, and our Black Belt got terribly excited since it was the only one around. And she got really disappointed when I figured out a few weeks later that it wasn’t a good idea.