Yet another occasion has arisen at work where, had someone taken the time to design something properly*, we wouldn’t be going back and rebuilding it right now. I am speaking of reusability here, a concept that, IMO, should be intimately familiar to every developer out there.
But to judge by my coworkers’ attitudes – most of whom have degrees in the field, whereas I only have my lowly self-larnin’ and experience – my take on it is unusual, to say the least.
So, what say you all? Am I right in thinking that one should build one’s code to be easily adaptable to multiple instances of the same general class of problem? Or should I adopt my coworkers’ stances, which seem to be “Solve each problem independently, it’s faster that way”?
To me, this seems counterintuitive – after all, how much time have you really saved if you or the next schmoe down the line has to go back and do it all over again? – but I’m willing to admit I could be wrong…
*By “properly,” of course, I mean “the way I’d do it.”
I just had a long, frustrating meeting about this yesterday. We met to decide whether we should design a generic framework that could handle the specific, immediate need as well as similar needs in the future, or just deal with what was needed now. Trying to get that concept across was a losing battle: we spent over 90% of the meeting delving into the details of the specific problem, with me and a couple of others trying to steer us back to the longer-range question.
I always try to push for flexibility but find two major things stand in the way. First, time constraints often mean that something has to be up and running NOW, which means I can’t always take the time to consider the bigger picture and build in the options I might like to. Second, making something more reusable often means making it more generic, so it’s more awkward to use than something designed specifically for the task at hand, and that means I’ve got some ’splainin’ to do.
Going for flexibility is a goal I try to keep in mind, and I try to point out cases where doing things a little differently would save us time and money down the road. On the other hand, if I’ve had my say and they still want me to cut corners and then re-invent the wheel next month, it’s not something I let bother me.
Speaking as a programmer, it’s not always easy to solve “the general case”. Sometimes it’s much faster to write code that solves a specific problem, but it depends on the class of problems, of course.
If this statement were true, I’d say fully 90% of the companies out there would go down the tubes.
In general, I agree with the OP, but in practice, it doesn’t always work like that. I’ve spent a lot of time writing DB-independent database layers for products that never once used anything but the original DB they were implemented with. Likewise, I’ve written lots of generic, flexible classes that never did anything more than the first thing they were written to do.
The rule of thumb I work with now is that I don’t genericize things until the second or third type of that class of problem comes up. At that point, it makes sense; you don’t want to have 3 different implementations of the same kind of problem out there having to be maintained. But for all those times that there really is only one instance of the problem out there, you’ve only spent the time and effort to do one specialized piece of code.
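To make that concrete, here’s roughly the progression I mean as a Java sketch (the report-exporter example and all the names are made up for illustration):

```java
import java.util.List;

// Iteration 1: solve the one problem you actually have.
class CsvReportExporter {
    String export(List<String[]> rows) {
        StringBuilder out = new StringBuilder();
        for (String[] row : rows) {
            out.append(String.join(",", row)).append('\n');
        }
        return out.toString();
    }
}

// Iteration 2: only when a second format actually shows up (say,
// tab-delimited) do you extract the interface and put both behind it.
interface ReportExporter {
    String export(List<String[]> rows);
}

class TabReportExporter implements ReportExporter {
    public String export(List<String[]> rows) {
        StringBuilder out = new StringBuilder();
        for (String[] row : rows) {
            out.append(String.join("\t", row)).append('\n');
        }
        return out.toString();
    }
}
```

The interface gets extracted when the second format arrives, and the CSV class is retrofitted to implement it then; until that day, the one specialized class is all you’ve paid for.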
There does need to be a balance. I have coded a few “handle it all” programs that never were called upon to do anything other than their initial spec. Sometimes the business changes before a second application is required. Sometimes the software goes off in a new direction. It is frustrating to look back on energy I expended to make sure the code was adaptable, only to discover that those efforts were a waste of time.
On the other hand, I would tend to agree that the overwhelming majority of coders are pretty narrowly focused, suffering in equal measure from NIH syndrome and “I don’t have the time to do it right” syndrome. I think two things explain this.
First, that is a human condition. It is just easier to address the immediate problem than to step back and look at the bigger picture. I have seen the same thing happen in engineering firms where one engineer will keep pushing for commonality or software-based adaptability while all his co-workers mindlessly push through on unique solutions or hard-wired applications. We can see the same thing in social organizations in terms of setting protocols. Sorry. That is life; people is dumb.
Second, the better planner never gets the accolades. The firefighter is a star, rushing into the inferno to save the day (never mind that the fire started from his own cigarette tossed in the trash basket). The planner’s work is always underappreciated; since it is not constantly breaking, it “must” have been an “easier” task to begin with. ::: shrug :::
Yeah, it really depends, and you need to strike the right balance in every particular situation. I’ve seen systems that attempted to accommodate all sorts of possible future scenarios only to end up over-architected, harder for the client who eventually inherited the code to understand, and, worst of all, without ever standing a chance of being used in any of those theoretical futures. athena’s database layer example is a good one.
Sometimes this happens when systems are designed by system architects who don’t understand the client’s business well enough and design in a bubble, and all the “-ables” (extensible, configurable, reusable) are deeply ingrained in their style because they’ve been conditioned to believe that those are the most important things. So the client only finds out later that you’ve charged them a lot of extra money for your fancy extensible system, when they could have told you from day one that, from a business point of view, there was no way any of those scenarios could ever come into play; and even if they did, much more would have to change than just the system you built anyway.
You may be familiar with the philosophy called Extreme Programming, one of whose tenets is that it’s often better to focus on current needs than on hypothetical futures. They have a point; many environments are so fast-moving and change so drastically in such a short period of time that even your extensible system can’t handle the new requirements. So, they argue, you’ve wasted your time because you’ll have to redo large parts of the system either way.
Having said all that, when it comes down to just writing code, of course you should try to code for flexibility and employ design patterns whenever it’s possible and reasonable. But this comes with experience, and an experienced developer usually doesn’t take much longer to write it that way than the “bad” way. If it will take him an extra three weeks, though, then a judgment call is in order.
It depends on the situation. When I had to integrate a fancy, color-coding editor into our toolset, I did it in such a way that I could easily replace the editor we initially chose with a different editor in the future. But when I integrated an image-rendering library I made no such accommodation. In each case I weighed how likely we were to ever replace the component, how much effort it would be to design and implement an abstraction layer, and many other factors.
So far, my choices have been vindicated, as we very nearly chose to replace the editor component (and it would have been very easy to do) but we have never even thought about replacing the image library. The next generation of our product will be based on a platform that includes its own image rendering, so any effort in that area would have been wasted. Reusability is a noble goal, but it’s not always worth the effort.
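For what it’s worth, the accommodation for the editor was nothing fancy, just a thin seam between the toolset and the component. A hypothetical Java sketch of the shape (none of these names come from our actual product):

```java
// Stand-in for the third-party editor widget (fictional; the real one
// would be whatever vendor component you licensed).
class VendorEditor {
    private String text = "";
    void loadFile(String path) { text = "contents of " + path; } // pretend
    String contents()          { return text; }
}

// The rest of the toolset codes against this interface only.
interface CodeEditor {
    void open(String path);
    String getText();
}

// One small adapter per vendor.
class VendorEditorAdapter implements CodeEditor {
    private final VendorEditor delegate = new VendorEditor();
    public void open(String path) { delegate.loadFile(path); }
    public String getText()       { return delegate.contents(); }
}
```

Since the toolset never mentions the vendor classes directly, replacing the editor means writing one new adapter rather than hunting down every call site.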
I don’t know Athena’s specific scenario, but I would bet she was told they’d be supporting multiple platforms, and she coded it that way, but then the company changed course for some reason. Or had a bad sales force, or had poor planning in upper management, etc. So that would just be an example of bad planning. If the plan was to support multiple platforms and it was NOT coded that way, and they went ahead with their plans, then there would be trouble!!!
On the other hand, if the plan was to just support one platform (and it was a reasonable and justified plan) then to design for multiple platforms would be a waste.
This is true. I recently got quite a few accolades from a client for heading off a potentially large disaster that was a direct result of poor planning on my part. But they didn’t see that; they just saw me switch into high gear to solve it.
Heh. I’ve never worked anywhere where those-who-are-in-charge were qualified to make that decision. Nope, that one goes down to the coders, who tend to go back to the old programming fact that One Should Always Plan On Changing DBs. As I’ve been the “One” who does the work on, lessee, at least three Very Large Projects, I can say that the number of times that I actually have seen the DB change is… never. And if they had wanted to change it, I can say with authority that It Won’t Work. Once you get away from the basics, you can’t expect a DB layer optimized for, say, SQL Server to just seamlessly work with Oracle, unless you don’t do anything but the most basic queries. And if you limit yourself to the most basic queries, your DB access time is going to be horrendous.
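To make the SQL Server/Oracle point concrete: even something as simple as “give me the first ten rows” is spelled differently per vendor, so an honest DB layer ends up branching anyway. A Java sketch (the table and column names are invented):

```java
class SqlDialects {
    // "First ten customers by name": the same question in three dialects.
    // A "portable" DB layer either branches like this for every non-trivial
    // query, or sticks to lowest-common-denominator SQL and eats the cost.
    String topTenCustomers(String vendor) {
        switch (vendor) {
            case "sqlserver":
                return "SELECT TOP 10 * FROM customers ORDER BY name";
            case "oracle":
                // Classic Oracle: ROWNUM is applied before ORDER BY,
                // hence the wrapping subquery.
                return "SELECT * FROM (SELECT * FROM customers ORDER BY name)"
                     + " WHERE ROWNUM <= 10";
            default: // MySQL, PostgreSQL, ...
                return "SELECT * FROM customers ORDER BY name LIMIT 10";
        }
    }
}
```

And that’s the trivial case; once you get into locking hints, sequences, and stored procedures, the dialects diverge far more than this.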
So it comes back to my original statement: for the first go-round, if you’re not, say, 90% sure you’re going to need whatever you’re writing to be generic, don’t make it generic. If it does turn out to need to be generic, take the time on the second iteration to make it so. It’s usually not too big a deal to do at that time, and you’ve more or less hedged your bets.
On the other hand, good coding practices are always necessary, and they typically don’t take much longer than bad coding practices. Don’t use magic numbers. Don’t embed strings in your code if you can help it. Group things like DB access into logical components; don’t string them throughout your code. If it makes sense to make something into its own class (or library, or assembly, or whatever the appropriate container is called in your world), do it. Add comments to your code. Use variable names that mean something. If there’s a simple way to do it, do it that way; don’t obfuscate just because you can.
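To illustrate with a contrived before-and-after (the discount example is made up):

```java
class Pricing {
    // Before: magic numbers and names that mean nothing.
    double f(double x, int t) {
        if (t == 3) return x * 0.85;
        return x;
    }

    // After: same behavior, but the next schmoe down the line can read it.
    static final int CUSTOMER_TYPE_WHOLESALE = 3;
    static final double WHOLESALE_DISCOUNT = 0.15;

    double discountedPrice(double listPrice, int customerType) {
        if (customerType == CUSTOMER_TYPE_WHOLESALE) {
            return listPrice * (1.0 - WHOLESALE_DISCOUNT);
        }
        return listPrice;
    }
}
```

Both methods do exactly the same thing; the second one just doesn’t make the next maintainer reverse-engineer what 3 and 0.85 were supposed to mean.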
I don’t write code myself, but I maintain databases for the websites of two mail-order companies, and I’ve seen this over and over.
For example, TPTB decided that our site needed coupon functionality. The company had used coupons of types A and B in the past. Our IT team built a solution to support types A, B, C, D, E and F. It required a dozen new fields in my database and a forty-page manual. I admit that they did start occasionally using type C afterward (which used two of my database’s new fields), but they also wanted to use types G and H, which weren’t supported.
When they later considered using type D, I would honestly tell them that it hadn’t been tried in years, and changes in the meantime could have broken it, so it should be tested thoroughly before telling customers about it. That usually was enough to discourage them.
You worked for a company with a truly brilliant management staff. Most places I’ve been, they’d have told marketing to start advertising, worked up the promotional material to give to the sales force, and, when the first order was placed (or attempted), sent a snail-mail inter-office memo to the manager of the wrong IT department “mentioning” that they were starting up the new process.