Help me become a better programmer...

Oh yeah. I forgot. Thanks for the book reference. I’ll pick it up after I’m done with my present selection.

Well, I believe much of that code was written by Wirth’s students, rather than him personally. And the core part of the compiler was originally intended as an assembler program; it took a while for it to get to the point of being able to compile itself.

Anyway, variable names are simply a matter of style, not code structure. A few minutes with a text editor would fix that.

Very little of my work from a third of a century ago would meet current standards, I’m afraid.

“Build one to throw away” is not agile programming; I believe it’s from Brooks’ The Mythical Man-Month. Agile programming emphasizes refactoring code between iterations, which sometimes involves throwing stuff away, but more often is just moving code around or making other small changes.

Personally, I’ve had good success with test-driven design and test-driven development. Breaking up a design into easily-testable modules gives me a good idea of the amount of work necessary. Writing out unit tests gives me insight into how I ought to implement the internals of the unit.
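To make that concrete, here’s roughly what I mean, in Python; the function and its behavior are just made up for the example. The tests come first and pin down the interface before I write the internals.

```python
# A minimal sketch of test-first development; parse_date and its behavior
# are invented purely for illustration.
import unittest

def parse_date(text):
    """Parse 'YYYY-MM-DD' into a (year, month, day) tuple of ints."""
    year, month, day = text.split("-")
    return int(year), int(month), int(day)

class TestParseDate(unittest.TestCase):
    # Written before the implementation: these tests pin down the interface
    # and the edge cases, which tells me how to build the internals.
    def test_plain_date(self):
        self.assertEqual(parse_date("2009-07-04"), (2009, 7, 4))

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_date("not a date")

if __name__ == "__main__":
    unittest.main()
```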

Don’t build in all the bells and whistles at the first run. Decide on the minimum amount of functionality to have a testable system and work out which units are necessary for that system. Just as premature optimization is the bane of structured programming, premature abstraction is the bane of object-oriented programming.
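A toy contrast of what I mean by premature abstraction (all the names here are invented for the example):

```python
# Premature abstraction: a configurable framework for what is, so far,
# a single calculation. (Everything here is invented for the example.)
class TaxStrategyFactory:
    def create_strategy(self, region):
        raise NotImplementedError   # subclasses for regions we may never support

# What the minimum testable system actually needs:
def sales_tax(amount, rate=0.08):
    return round(amount * rate, 2)
```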

Bit of a hijack but

:eek: How is that black magic possible?

-Kris

Step 1: Implement in Language O a compiler to turn source code in Language N into executable programs.

Step 2: Now write another program to do the same thing, but this time write it in Language N. Compile this with the program from Step 1.

Step 3: There’s no step 3.
(After all, what was the alternative? Turtles all the way down?)
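If a toy sketch helps, here’s a Step 1 in miniature, written in Python for a made-up little expression language; all of this is invented for illustration, not any real compiler. Step 2 would be rewriting the same translator in the toy language itself, once the language is rich enough, and feeding that source through this one.

```python
# Step 1 in miniature: a "compiler" for a toy language of integer expressions
# with + and *, written in a host language (Python). Everything is invented
# purely to illustrate the bootstrapping idea described above.
import re

TOKEN = re.compile(r"\d+|[+*()]")

def tokenize(src):
    return TOKEN.findall(src)

def parse_expr(tokens):        # expr := term ('+' term)*
    node = parse_term(tokens)
    while tokens and tokens[0] == '+':
        tokens.pop(0)
        node = ('+', node, parse_term(tokens))
    return node

def parse_term(tokens):        # term := factor ('*' factor)*
    node = parse_factor(tokens)
    while tokens and tokens[0] == '*':
        tokens.pop(0)
        node = ('*', node, parse_factor(tokens))
    return node

def parse_factor(tokens):      # factor := number | '(' expr ')'
    tok = tokens.pop(0)
    if tok == '(':
        node = parse_expr(tokens)
        tokens.pop(0)          # discard ')'
        return node
    return ('num', int(tok))

def emit(node):
    """Emit 'target code' -- here, just a parenthesized expression string."""
    if node[0] == 'num':
        return str(node[1])
    op, left, right = node
    return f"({emit(left)} {op} {emit(right)})"

def compile_toy(src):
    return emit(parse_expr(tokenize(src)))

# Step 2 would be to rewrite compile_toy in the toy language itself (once the
# language can express a compiler) and feed that source through this version;
# the output is a self-hosted compiler and this one can be retired.
print(compile_toy("2+3*(4+5)"))   # -> (2 + (3 * (4 + 5)))
```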

Though, to be fair, to avoid infinite turtle syndrome, even the bootstrapping process (what a lovely and fitting term; evocative in just the right way) which I outlined does require at least the initial kick-off to proceed in a slightly different manner. E.g., at some point in history, someone has to hand-craft executable code implementing the world’s first compiler/interpreter/whatever. If you value your time, you do this for a programming language which is very, very simple. At that point, you can kick off the chain of writing more complex compilers/interpreters/whatever for more complex languages enabling the easier production of more complex compilers/interpreters/whatever for more complex languages enabling…

As I mentioned, the style matched his book, and was consistent throughout the compiler, so I don’t think you can blame students for it. The structure did seem reasonably good. But when you have many, many variables, sprinkled throughout the code, it takes more than a few minutes to fix. I think I did fix any variable that I was changing.

Writing in assembler is no excuse for using one-letter names. I taught assembler, and anyone doing that would lose points.

Yes indeed. The compiler in question didn’t compile itself on Multics when I started. They had a back end which translated Pascal into PL/I, which was the Multics native language - and what most of the OS was implemented in.

And as for the bottom turtle, in high school I used a computer without an assembler, just a simple translator from decimal to hex. I wrote one in machine language, since I got tired of branching to absolute addresses and having to go and change them whenever I added code. The alternative was to insert a jump to an empty area, insert the code, and jump back. That was real spaghetti code!

I’ll disagree with this. If we accept, for the purpose of discussion, the traditional division of design into high-level design (divide the program into modules) and low-level design (design the data structures and algorithms for a single module), then stepwise refinement primarily addresses low-level design. (Wirth’s paper illustrated the technique with the 8-queens problem.) It works very well for low-level design. There may not be anything better.
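For anyone who hasn’t run into it, here’s roughly where the stepwise refinement of 8-queens ends up; this is just my own compact Python rendering, not Wirth’s code.

```python
# Not Wirth's code; a compact illustration of where stepwise refinement of
# 8-queens typically lands: place queens row by row, backtracking on conflicts.
def solve_queens(n=8, cols=()):
    """Yield solutions as tuples: index = row, value = that row's queen column."""
    row = len(cols)
    if row == n:
        yield cols
        return
    for col in range(n):
        # refined test: column free, and both diagonals free
        if all(col != c and abs(col - c) != row - r for r, c in enumerate(cols)):
            yield from solve_queens(n, cols + (col,))

print(next(solve_queens()))   # one valid placement, e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```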

But the extension of stepwise refinement to top-down design, as a method for high-level design, is far less successful. It just does not scale up well. There are decades of subsequent advances, starting with information hiding, then Abstract Data Types, then Objects (and, maybe, aspects, though I’m not as sold on that) that lead to better decomposition of large programs. By focusing attention on procedural steps, top-down design leads to a program decomposed into procedures. Only by standing outside that process can a designer recognize the opportunities for useful ADTs and/or objects. I think that the consensus of the profession would be that a high-level design approach that emphasizes, from the very beginning, breaking the program into ADTs and/or objects will work far better for large and complex systems.
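A toy contrast, mine and not from any of the papers: an ADT hides its representation behind a handful of operations, so the decomposition is by data rather than by procedural steps, and the representation can change without touching any caller.

```python
# Toy ADT (my own illustration): callers see push/pop/is_empty and never learn
# whether the representation is a Python list, a linked list, or anything else.
class Stack:
    def __init__(self):
        self._items = []          # hidden representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2 -- the representation could change without touching this line
```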

Then I will argue further that the experience with OO analysis and design has offered credible evidence that a good way to lead into an OO or ADT-rich design is to start with an OO analysis. Note that the OP was explicitly discussing this transition from analysis to design. I think he/she is already dealing at a more sophisticated level than Wirth’s stepwise refinement.

Wirth was a leader. He may have been a genius. But the field has not stood entirely still in the 35+ years since he wrote that paper. And the systems we are developing today are far larger and more complex than anything Wirth ever dealt with.

You can do very complex things in assembler, far more complex than implementing a simple language like the original Fortran, Algol, or BCPL. For my MS thesis I implemented a lexical analyzer in microcode.

There have also been compiler compilers for quite some time. I used one for my compiler class in 1974, and it was far from new then. It was less general than lex and yacc, and ran on a 370 mainframe.

Probably the most important part of software design is designing it so that it will be easy to change later. So it’s good that you’re thinking about that. But if you’re that nervous about rewrites, it means you don’t know how to design loosely-coupled, modular software. Being a clever coder doesn’t mean you’re a good designer.
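Loosely coupled, in the small, looks something like this (a sketch with invented names): the report code depends on an injected “save” operation rather than on any particular storage, so swapping storage is not a rewrite.

```python
# A small sketch of loose coupling (names invented): build_report depends only
# on a callable it is handed, not on files, databases, or anything concrete.
def build_report(data, save):
    report = "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))
    save(report)                     # storage is injected
    return report

def save_to_file(text, path="report.txt"):
    with open(path, "w") as f:
        f.write(text)

captured = []
build_report({"b": 2, "a": 1}, captured.append)   # in-memory, handy for tests
build_report({"b": 2, "a": 1}, save_to_file)      # real file, same report code
```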

Go grab a book called “Head First Design Patterns” and go through the entire thing. Realize that design patterns are just a particular type of design philosophy, and do not take them as the gospel truth, but it will wake you up to some of the ideas involved in designing vs. just coding. Then go read the criticisms that say design patterns are a bunch of crap, and go read conflicting sources until your head explodes and you decide to go into aquarium servicing.
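For a taste of the sort of thing the book walks through, here’s my own minimal rendering of the Strategy pattern (one of the first patterns it covers, if I remember right), with invented names and standard-library compressors standing in for the strategies:

```python
# My own minimal Strategy example (names invented): the context delegates to
# whichever interchangeable strategy it was constructed with.
import bz2
import zlib

class Compressor:
    def __init__(self, strategy):
        self._strategy = strategy        # any callable: bytes -> bytes

    def compress(self, data: bytes) -> bytes:
        return self._strategy(data)

fast = Compressor(zlib.compress)
small = Compressor(bz2.compress)

payload = b"design patterns are one tool among many " * 20
print(len(fast.compress(payload)), len(small.compress(payload)))
```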

The most important thing I think I can tell you is don’t worry about it. The easiest way to overcome stasis is simply to impose a deadline. When the deadline is reached, hand the product over to someone else to QA it. Works for anything - computer programs, essays, books, whatever. You just have to plan how much you can do by the due date.

I have worked in major systems development projects on and off for the last 20-odd years and can tell you that there are two really good kinds of programmers to have on a development team. There is the “cowboy”, who produces workable but buggy code at 5 times the rate of the average drone, and there is the perfectionist, who is slower than the average drone but produces almost error-free code. On any project I would rather have one of either, or both, of these types than 10 more drones.