Help me become a better programmer...

I have been called a good programmer. It would be more accurate to say that I have been told that I “have more potential than anyone I’ve ever met” (the ‘I’ here being my current manager/old manager/two different instructors in college). However, I feel like I’ve stagnated and want to start growing as a developer again.
At the moment, my greatest strength is also my greatest weakness. I get trapped in ‘analysis paralysis’. While I can foresee issues and adjust designs accordingly, I don’t know when to stop. For example, I posted here a few years ago asking for physics information on racing cars. The idea was to design and write a racing simulator that my friends and I could play. No graphics, just feed in the inputs and the race results would be spit out. I kept designing until I needed data on molecular compounds just to model the tyres, engines, etc. What started as a sketch of a cube moving in a straight line ended with me reviewing the periodic table. Somewhere I went too far.

How do programmers know when the design is ‘complete’, or when it is detailed enough to get started? To date, I have numerous projects stuck at the design stage and many more in my head that I won’t bother to sketch out until I know I can produce something of substance.

Another problem is that I’m scared (or something like that) I may have to go back and rewrite a class because of something I’ve missed. A change later could result in a massive rewrite.

I read development blogs and articles online but have not come across anything that really addresses this issue. Any suggestions or advice is appreciated.

This, unfortunately, is part of the “art” of software development. The usual approach for a commercial application is to first define the requirements–what does the customer/user actually need? This can be difficult, particularly with a naive or inarticulate customer. Once you have well-defined requirements then you do enough design to know that your requirements can be met. If you care about costs, then you won’t go an inch farther than that.

The classic way to do this is the “waterfall approach” in which you define all the requirements up front, then do all the design, then all the implementation, then you’re done. This approach has been very popular with large federal agencies such as DOD. But you often have discussions at the end of the project like, “Yes, I know it meets the requirements as written but it still isn’t what we really wanted.”

In the case you described, you were doing requirements and design all at the same time. You didn’t have a problem with knowing when to stop design–your problem was in defining the *requirements* of how precise your simulation needed to be.

In a case like that, an iterative approach is best. You start with requirements that describe a cube moving forward in a straight line, then design and implement it. Then you go back to your requirements and ask, “Does this really do what I want, or do I need to refine my requirements?” Maybe next you add wheels and turns. If you get through three iterations, that should usually be enough. Google on “agile methodology” and “Rational Unified Process.”
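To make that first iteration concrete, here is a rough sketch in Python (every name, number, and unit is invented for illustration) of about how much the “cube moving in a straight line” version needs to cover:

```python
# Iteration 1: a "cube" moving in a straight line at constant speed.
# Everything here (class names, units, numbers) is illustrative, not a real design.

class Car:
    def __init__(self, name, speed_mps):
        self.name = name
        self.speed_mps = speed_mps  # constant speed, metres per second

    def time_to_finish(self, distance_m):
        return distance_m / self.speed_mps


def race(cars, distance_m):
    # Finishing order: lowest time to cover the distance comes first.
    return sorted(cars, key=lambda car: car.time_to_finish(distance_m))


if __name__ == "__main__":
    entrants = [Car("Red", 52.0), Car("Blue", 55.0), Car("Green", 49.5)]
    for place, car in enumerate(race(entrants, 5000.0), start=1):
        print(place, car.name, round(car.time_to_finish(5000.0), 1), "s")
```

Iteration two might add acceleration or corners. The point is that each pass is small enough to finish, and each one forces you to ask whether the requirements themselves need refining before you design anything more.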

Things are different if you are doing some sort of R&D project, where you don’t really have a handle on requirements at all, but you can just do more iterations, which might go off on tangents, all depending on how much time & money you have for R&D.

(IANAL and IANAD but I am a software development manager with 28 years of experience in the industry.)

The best programmers I work with are the ones who aren’t afraid of those massive rewrites. The product I work on is over 25 years old. It’s easy to say, “We can’t make that change because it’s too hard/risky/whatever”. But we have people who are willing to take that risk. We’ve rewritten massive chunks of the code when that was the best way to get something done. We could take the safe way and tack things on here and there, but generally the best way to do something is the way you would have done it if you had a blank slate. Don’t be afraid of the rewrites. I doubt there is much good software that hasn’t been rewritten at least a couple of times.

CookingWithGas nailed it. Figure out precisely what you need to do, and then design the simplest system possible that will do it.

The alternative to the waterfall model is the helix model, where you release little chunks that can be examined either by you or the users. This reduces the need for massive rewrites, though you are likely to go back one step.

As for rewrites, someone proposed the flapjack model of software development - you always throw the first one away. You often don’t know what you need to do until you do it wrong.

But the biggest secret, which can’t be taught, is how to get the architecture into your head. That will guide development. This takes some time, and makes it seem like you’re not getting anywhere. But my motto is “code in haste, debug at leisure.”

ultrafilter’s advice is fine, except you seldom know precisely what you need to do until you do it wrong, and you don’t know how to design the simplest system until you’ve designed a more complex one.

So to paraphrase CookingWithGas, it’s best to grow the application until it fulfills the requirements.

I’ve read a lot on Agile but it looks like I’ve never taken it to heart. I’ll stop reading for a bit and actually start writing something. That may be the way to break this cycle.

Thanks.

Any other suggestions? I’m presently learning Ruby and re-connecting with Perl. I found a list of 99 programming challenges for Lisp but other languages can be used.

I would not recommend agile development. Writing code with the intention of throwing it away strikes me as rather a waste of your time. I’m not saying that throwing code away is necessarily a bad thing – people don’t do it often enough – but at least try to get it right the first time around. Not to mention, the better your code is on the initial iteration, the easier it is to rewrite when it becomes clear that the code is inadequate.

I agree with CookingWithGas that iterative development is the solution to analysis paralysis, but I believe that agile takes iterative development too far.

Agile has quite the bandwagon, so it’s nice to read a differing opinion, Rysto. Do you have any links to a more detailed discussion of the shortcomings of Agile development?

Thanks.

I’m afraid I don’t have any links. I can give you a feel for some of the problems that can crop up (and let’s hope I don’t end up conflating Agile with Extreme Programming).

Agile requires a very short development cycle. This is not appropriate for a lot of projects. Half of a feature is not always useful to the customer, and if you’ve delivered half of a feature to the customer, you’ve probably committed yourself to maintaining that half-feature for all time, even if you discover while completing the entire feature that the half-feature was ill-advised. Your development methodology can be as agile as you like; you can’t assume the same about your customers. If you can’t assume that your customers can deal with rapid change, then you need to limit your release schedule to protect your customers from change.

Another problem with a short development cycle is that it’s riskier. The faster your development cycle, the faster your code goes into production, and the higher the chances that a bug will sneak through your internal testing and only show up in the wild. If your customers are unwilling to face this risk then your development cycles must be longer.

Agile development assumes that requirements will change often and change can happen at any point in the development cycle. This is more true when you’re developing a custom application for a single customer. If you’re developing a single product for a lot of customers then it’s likely that no single customer will have very much of a say in the requirements, which means that your requirements can evolve slowly over time. Without the need to deal with rapid change in requirements I don’t see a lot of value in the Agile approach.

This is one example of a situation in which delivering half of a feature might not be that useful: if one customer wants feature A, one customer wants feature B and another wants feature C, you’re much better off to find a general solution that subsumes all three features than to implement all three separately: that way, when yet another customer needs to solve a similar problem, you can point them to your general solution rather than coding a new custom solution. Again, when there’s only one stakeholder involved and it’s a custom system, feature requirements can be much more specific and providing the general solution might well be a case of overengineering.

My biggest gripe with Agile, though, is definitely the idea that you should intend to throw your first iteration away. Let’s set aside the enormous pressure there is to release any working piece of code. The real problem that I have with this attitude is that it’s completely backwards. The moment that you think “this code needs to be re-written”, you probably need to re-write that code immediately. If the code is contained within a single module, you might get away with leaving it there. But if the part that needs to be re-written extends to that module’s interface, the code has to be re-written immediately, because the longer the bad interface is there, the more code will be written against that interface, and the more painful the eventual re-writing effort will be. It is very, very easy to get yourself married to a kludge, and then you’re really in trouble. You can’t do a re-write because you have too much time and effort invested in the kludge.

And so if you’re starting to code with the thought that you’re just going to re-write the code anyway, you’re already at the point where you know that the code is going to be re-written, and any further effort is going to be wasted. Sit down, take some time and design it ahead of time. If you get stuck in analysis paralysis, de-scope until you can handle your requirements, and put together a design with an aim towards extensibility, and then code it up. Once that’s done, you can start adding more requirements back into the system. If you discover that any part of your design was flawed then discard that part and re-write it, but do your best to make those re-writes as painless as possible by designing things up-front. I really believe that much of your coding should be an almost mechanical process. Be prepared at all times to re-visit your design, and there are times when you have to test your design by coding it and seeing how it fits together. But if you don’t have a design in mind as you code, the result will be a mess.
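To put that in code terms, here is a tiny sketch (the names and the sorting example are made up, not from any real project) of what “design the interface up front so the rewrites stay painless” looks like. Callers depend only on the interface, so the first, deliberately throwaway implementation can be replaced without touching them:

```python
# A minimal sketch of "code against the interface, not the first implementation".
# All names here are invented; SimpleSort is the deliberately throwaway part.

from abc import ABC, abstractmethod


class SortStrategy(ABC):
    @abstractmethod
    def sort(self, records):
        """Return the records in display order."""


class SimpleSort(SortStrategy):
    # First pass: good enough to ship, cheap to throw away later.
    def sort(self, records):
        return sorted(records)


def build_report(records, strategy):
    # Written against SortStrategy, so replacing SimpleSort never touches this.
    return "\n".join(str(r) for r in strategy.sort(records))


if __name__ == "__main__":
    print(build_report([3, 1, 2], SimpleSort()))
```

Swapping in a smarter SortStrategy later disturbs neither build_report nor anything else written against the interface, which is exactly the property you lose once code starts accumulating against a kludge.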

Personally (and I’ve been told I’m the best programmer they’ve ever met by several of my managers), I’d recommend working back from a “Due By” date, rather than a set of technical requirements. Any project can always be improved based on the amount of time and the number of coders you have, so simply listing off things that “should be in there” isn’t really all that meaningful. A home-made OS “should have” 32-bit protected mode programming, it should have a couple of driver interfaces, it should support a couple of different file systems, it should have 32-bit graphics, etc. But once you say, “But I have one programmer and two weeks to do it”, all those should-haves become “It has to be able to load a 16-bit program burnt onto a CD, and run it.”, and nothing more.

Technical requirements need to first be based on the physical realities of your manpower and the amount of time reasonable to work on the project.

And if you decide that the project is only worth two weeks of your time, but that anything you can accomplish in two weeks’ time isn’t enough for what you need–well then just plain don’t start. I’ve shot down several projects at the companies I’ve worked for, going into detail on why the time-benefit ratio didn’t justify a project, and while that was never popular, in the end we all had to grudgingly admit that it was the right thing to do. And in return, on the projects we did work on we had more manpower to dedicate, and the return was larger.

Have you considered the possibility that it is best for you not to try to learn when to stop yourself, opting rather to let others be the ones to let you know when you’ve gone far enough?

Trying to instill the skill in yourself could just result in unnecessary inhibitions. Why risk unnecessary inhibitions when you could instead opt for the possibility of appropriate prohibitions?

-FrL-

call it Nietzschean Development

This is great if the due-by date is real. All too often, the due date gets pulled in to make a manager look better in the eyes of his manager. The project either then slips, or features that could be in get pulled out. Hardware projects work the same way, by the way.

When I was managing software development, I’d poll the developers on how long they expected to take, then doubled it and added two weeks. Programmers are usually optimists, and often forget time killers like meetings.

I got out of doing software long before Agile, but my impression on reading about it is that the subreleases are not meant for production. They are delivered to the customer, but the customer really shouldn’t build an infrastructure around it. Putting something into a production flow takes a lot of time, and no customer is going to want to do that more often than necessary.

If you’ve got multiple customers, you either need to get them all together to agree on features, or get a product manager who has the final say on the feature set, and make him the customer. A good product manager is good at saying no, usually by saying “wonderful idea, it will be in the next release.” We developed code internally which later got sold externally, and switching to a product manager model made things go much better.

While I agree that you need to plan things out before you write, as well as possible, I think the reason for the Agile model is the realization that expecting the customer to know what he wants before you begin, or for you to perfectly understand what he wants, is a fantasy. I’ve been an internal customer in a company where the requirements were laid out in great detail, but I still would have preferred to play with an alpha of the tool. I’m not nearly clever enough to figure out all the implications of the document, and I was both a programmer and a subject matter expert.

Unrealistic schedules didn’t generally last long on projects I was involved with. I’m not real shy or political when it comes to earning my paycheck.

Of course, often enough I argued for quartering the scheduled time if it was just me on the project, which probably helped given that I always met the dates I set. And I also double the expected time, due to meetings, encountering unexpected issues, etc.

When you are working on your own code, you value elegance. When you are working on someone else’s code, you value maintainability.

A system that can’t be changed can’t be implemented. And for heaven’s sake, document. Six months from now, you won’t remember why it does it that way either.

Regards,
Shodan

Read Program Development by Stepwise Refinement, by Niklaus Wirth (developer of Algol-W, Pascal, Modula, Oberon, and other languages, and winner of the Turing Award). A summary version is available online in the ACM Proceedings: http://www.cs.unca.edu/~brownsmi/0408_Fall/csci431/resources/generalReference/PgmDvmtByIterative.htm

Or just look at the title, and think about it.

The basic premise is to start with a basic idea, and an overall view of the program, develop that (starting at a high level), and then progressively improve each procedure within it. Most of the time you should have a working program (at least partially – in the beginning, many of your procs may be just stubs to be filled in later).

By interspersing design of logic & data structures with actual coding of the application, people tend to avoid both ‘analysis paralysis’ and ‘specification creep’ that are frequent problems with systems.
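A minimal sketch of the idea, assuming a generic read-process-write program (nothing here comes from Wirth’s paper itself): the top level exists and runs from day one, and each stub gets refined in a later pass.

```python
# Stepwise refinement, very roughly: the top level runs from day one,
# and each stubbed procedure is refined in a later pass.
# (Illustration only; none of this is taken from Wirth's paper.)

def read_input():
    # Stub for now: hard-coded data so the whole program runs end to end.
    return [3, 1, 2]


def process(data):
    # Stub: to be refined into the real algorithm in a later pass.
    return sorted(data)


def write_output(results):
    print(results)


def main():
    # The overall shape of the program exists before any part is finished.
    write_output(process(read_input()))


if __name__ == "__main__":
    main()
```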

Psssh, when electron shells come into play, then you’ll know you’ve gone too far.

4 days later:
“Guys… I think I just created a perfect working simulation of the universe. By the way, the gravitational constant is a bit off. Also, somehow there’s a fully sapient robot in my kitchen.”

To be honest I do similar things. I eventually learned to do the following:

Make the most basic model possible. It doesn’t matter what you need to do in between; boil it down to the simplest thing possible. Say the task is “input a series of data in a user-friendly interface, combine certain terms by blah blah, sort it by blah blah, and after that output the results”.

Look for the keywords:
boom- INPUT
boom - INTERFACE
boom - COMBINE
boom - SORT
boom - OUTPUT

Now for your simple architecture I’d advise eliminating the middle steps so you have something working without needing to worry about what method you’ll use to sort and such. So you have input and output. This should take all of 5 seconds to make. Save this instance (so you can go back if you have trouble, or have a working step to show people who are interested).

Then get a working part that allows multiple instances of data to be put in. Then etc…

I don’t think I need to outline the other instances as I’m sure you’re smart enough to get the point.
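Just to make it concrete, here’s roughly what the first two passes might look like in Python (all the names and the data format are invented for the example):

```python
# Pass 1: INPUT and OUTPUT only, no middle steps.
# (Invented example; the data format and names are placeholders.)

def pass_one(raw):
    data = raw.split(",")   # INPUT
    print(data)             # OUTPUT


# Pass 2: same skeleton, with one middle step (SORT) dropped in.
# COMBINE and a proper INTERFACE would be later passes, each saved once it works.
def pass_two(raw):
    data = raw.split(",")   # INPUT
    data.sort()             # SORT
    print(data)             # OUTPUT


if __name__ == "__main__":
    pass_one("c,a,b")
    pass_two("c,a,b")
```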

This worked for me because:

When someone interested (i.e. a teacher) came around, I had something to show, unlike the other kids whose programs were temporarily “broken” because they were in the middle of modifying things.

Focusing on KEYWORDS, and a strict, literal interpretation of them, kept me on a rigid structure. I considered my programs living. This means if they were an animal they needed to know how to “eat”, “breathe” and “sleep” before they could “do tricks” and “follow commands.” This means, sure, you can add your particles and such to the matter, but have an instance of all the landmarks before it (i.e. assuming the car is a block, then it’s an irregular object, then taking into account independent parts and drag and friction, then taking in… etc.) so you can show your friends (or professor or investor or boss etc.) what they wanted originally and then add “but I’m also working on making this more accurate by…”

And though I hate to admit it (it led me to drop my AP Comp Sci class and pursue it independently; if anyone’s curious I’ll explain), I’ve been told I have insane potential as a programmer as well, and somehow managed to be called the most promising he’s seen since he started teaching technology classes (nearly 30 years ago). Though I suspect his rating is a little overinflated. :wink: There seems to be a link between insanity (+ insane attention to detail) and programming/design.

So we have two of the best developers in the world on one website - cool :wink:
Maybe the best thing to do is to literally whip something up without working it through (not for work of course), if only to break out of the analysis side of things. Then gradually work back and find a process that I’m comfortable with but also produces something in a reasonable time.
Frylock - I think your post was meant as a joke but I’m not familiar with Nietzsche. Sorry. I hope others found it amusing.

Though I’ve met Wirth, and I’m a big fan of both Pascal and stepwise refinement, his own personal code sucks. For my dissertation I modified the Zurich Pascal compiler to be a compiler for the language I created. It took me about 3 months to figure out what all his variables did. I taught Data Structures one term from his “Algorithms + Data Structures = Programs” where all the examples have one or two letter variable names. That was not to save space - the entire compiler was written that way!

The compiler also had the minor problem that, though it was written to be portable, the sets were defined so they only worked on 60 bit machines, like the CDC machine they had in Zurich.

So, while everyone should read the classics of software engineering, don’t follow them slavishly.