Coding/Programming nightmares. Those forced on you and the ones you left for others

When I was in the Air Force, I transferred to a new base and worked with new people. There was a civilian worker who had been in the office since, apparently, God was a corporal; rumor had it that he had a printout of every version of every program in the system in his house. He was absolutely furious one day when he discovered that I had deleted a huge amount of code from one program because it was a fossil that couldn't be reached in any possible iteration of the program. He demanded I put the code back and branch around it. I refused.
There was another place where we were coding in FORTRAN. One of my co-workers wrote every line of his code with no spaces, because the FORTRAN compiler could figure it out. Fortunately, I never had to decode his code.

A few years ago, I was working for a networking company supporting one of their products. I didn't get to see the product code, but I was working some cases where one of the load-distribution mechanisms on some configurations would apparently stick to a target and not provide the expected results. I was digging into the internal statistics tables when I saw a really, really big number in one of the columns attached to the stuck target.

64-bit integer big.
And it all became clear.

Some programmer had declared a counter variable as a longint and incremented and decremented it as required, using signed (two's-complement) integer operations. Unfortunately, due to a logic error, they also decremented once more than they should have. They also never stopped to check whether the value was already 0 before decrementing (possibly assuming that, as with unsigned integer operations, the result couldn't actually be negative - it would just underflow). Once the counter actually went negative, the two's-complement representation made the longint a really large unsigned number as well as a negative signed one.

Once it was negative, it could still be incremented, but it would also keep being decremented. Because the selection code treated the value as a signed integer, the negative counter was always the lowest value available, so its target became the stuck one that was always selected.
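A toy C program shows the failure mode. This is just my reconstruction of the idea - not the vendor's actual code, and all the names are invented:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int64_t conn_count = 0;   /* the signed "longint" counter */

    conn_count--;             /* the one-too-many decrement: now -1 */

    /* Viewed through an unsigned lens, -1 is a gigantic number -
       the "really, really big number" in the statistics table. */
    printf("as unsigned: %" PRIu64 "\n", (uint64_t)conn_count);

    /* But a signed comparison ("pick the least-loaded target") sees
       -1 as smaller than any sane count, so this target wins every
       selection: it is permanently stuck. */
    int64_t other_count = 42;
    if (conn_count < other_count)
        printf("stuck target selected again\n");
    return 0;
}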

It was eventually fixed, but it demonstrates how careless type management (in C in particular) can cause odd issues. Plus if you don’t intend a value to ever be less than zero, don’t let it get less than zero …

I wrote plenty of interpreted code, including the interpreters. It was necessary in the early stages of PC development because of the high code density needed with the limited memory available, and it's still in much wider use than people realize because of that code-density advantage.

I do the Java. Specifically, I teach it. I can tell you The Stories, but in the interests of being professional and not identifying myself OR my students, I need to keep mum.

But I can tell you that every year, I get at least one assignment with a COB: Column of Braces. You know somebody's gotten to the end of the code, they've been writing sloppily without a care in the world for indentation, it's not compiling, and they just wind up mashing the close-brace key until the SDK stops screaming at them.

Once, and only once (though it was C, not Java) and more than a few years ago, I was treated to a comment: “adding } until it compiles lol”. And 20 braces in a neat column. Maybe 30? In my mind, it was way too many, a towering pillar of casual indifference.

(It did compile though.)

It was my first job out of university. I was in charge of writing a device driver for a reel-to-reel videotape device, on that newfangled Windows 3.0 platform. The boss showed me his existing driver for another device, and said I just needed to do the same thing for the VTR.

I was fresh out of school, and itching to put my vast knowledge of object-oriented programming to good use. But we were using Microsoft C at the time, as Petzold intended. The only C++ compiler we had was from Borland, and the boss said we couldn’t use it for production code.

So, of course, I decided to just write the code as I would write it in C++, but using C with a bunch of nested structs, macros, and writing conventions to implement encapsulation, inheritance, and virtual functions. Class hierarchies were very much a thing; all the books agreed that object orientation was all about inheritance.

I came up with the brilliant strategy of numbering my classes to reflect their position in the class hierarchy. So a method that would have been MitsuiTapeDevice.SendPlayCommand() in C++ became something like C4120_SendPlayCommand(), whose first parameter was a pointer to a struct called C4120_MitsuiTapeDevice (which included the C4100_TapeDevice struct as its first member, and so on). I also defined a macro called v4000_SendPlayCommand(), which got defined differently in the various header files to give the equivalent of virtual functions.
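For the curious, here's a stripped-down sketch of the general pattern. The class and method names come from my story above, but everything else is invented, and where the real thing redefined the v4000_ macro per header file, this sketch cheats and dispatches through a function-pointer slot instead:

#include <stdio.h>

/* "Base class": a generic tape device, with one hand-rolled
   vtable slot for the "virtual" method. */
typedef struct C4100_TapeDevice {
    const char *port_name;
    void (*send_play)(struct C4100_TapeDevice *self);
} C4100_TapeDevice;

/* "Derived class": the base struct is the first member, so a
   pointer to the derived struct can be treated as a pointer
   to the base. */
typedef struct C4120_MitsuiTapeDevice {
    C4100_TapeDevice base;
    int protocol_version;
} C4120_MitsuiTapeDevice;

static void C4120_SendPlayCommand(C4100_TapeDevice *self) {
    C4120_MitsuiTapeDevice *dev = (C4120_MitsuiTapeDevice *)self;
    printf("PLAY via Mitsui protocol v%d on %s\n",
           dev->protocol_version, self->port_name);
}

/* The "virtual function call": dispatch through the slot. */
#define v4000_SendPlayCommand(dev) ((dev)->send_play(dev))

int main(void) {
    C4120_MitsuiTapeDevice vtr = {
        .base = { .port_name = "COM1",
                  .send_play = C4120_SendPlayCommand },
        .protocol_version = 2,
    };
    C4100_TapeDevice *dev = &vtr.base;  /* use it as a TapeDevice */
    v4000_SendPlayCommand(dev);
    return 0;
}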

It was a horrible mess, but it worked like a charm – as long as I was the one maintaining it. We bundled it with our product and I expanded it over the next couple of years to cover various devices using similar communications protocols.

When device support got moved to a different team, the first thing they did was tell me I was mad; the second thing they did was start a project to rewrite that monster. Which wasn’t that hard, really.

I inherited some clean, almost-finished Python code. Of course, it wasn't until I hooked it up that I found it didn't work. The hardware platform generated data errors. We didn't see the data errors until the software was working well enough to start production tests, but that wasn't a problem: we just had to communicate the low-level exceptions to the top-level handlers, and there is a built-in exception-handling process to do that.

This was a meta-programming shell to synchronously handle asynchronous parallel processes. To get the integration to work, we had to add exception handling to the meta-functions of the mid-level objects.

– at which point I realized that Python has no "on error resume next" feature, and that the whole structure could only be made to work by wrapping every line in a separate exception handler, all the way up the functional structure, so that exceptions from the parallel asynchronous processes could be caught, labeled, stored, and re-raised, then handled distinctly (not collectively) at every level.

Absent the low-level asynchronous errors, it all worked, and it was a lovely example of simplification using functional programming: clear and understandable at every level, hiding every feature in objects clearly visible (and only visible) at the relevant level, extendable in the simplest, most familiar way, and scriptable in the most natural fashion; lines like "A=B+C" just worked.

Present the low-level asynchronous errors, though, and given the fundamental design of Python, there was no reasonable prospect of using any of it.

The worst I've ever had to deal with was a legacy program for simulating stellar interiors that had been previously written and maintained by about five other programmers. None of them signed their work, but I could quite easily recognize the many different styles.

First of all, I was working on this in around 2010, and it was all written in Fortran 77.

Second, large parts of the codebase existed only in the form of OCRs of printouts of previous versions of the code. With lots of variable names that differed only by a 1 vs an I, or a 0 vs an O.

Third, any given variable could refer to a physical quantity, or to a base 10 logarithm of a physical quantity, or to a natural logarithm of a physical quantity, with no indication given for which were which. Likewise, there were a variety of units which could be used, and no indication of those, either.

This being Fortran, it was common for functions to work by modifying the values of the variables passed to them. There was never any indication of which passed variables were for input, and which for output.

Three of the five anonymous coders, of course, included no comments at all. One had cryptic comments, and one had comments that were actually useful (I was madly in love with whoever that last person was).

And the person in charge of the project judged any version of the program based on whether the results were correct. Of course, she knew what was correct, because it was what agreed with previous versions of the code. And so, for example, the random noise in one of the graphs it produced was absolutely, definitely due to “unavoidable quantum effects”, and it’s pure coincidence that when I increased the precision of the calculations, the internal overflows/underflows disappeared, and the graph became smooth. Which was obviously wrong.

I hope that I at least left it less of a mess than I found it. I started by converting all of the source to rich text and color-coding all (well, a lot; I never finished, because some of it seemed to be inconsistent) of the variables by whether they were linear, log10, or natural logs, and indicating input and output with sub- and superscripts. I think that made it at least a little easier to read.

As for messes I've left behind, I've only rarely written anything for use by anyone else, and when I do, I try to be more conscientious. But for my own use, I do all sorts of crazy things. Possibly the worst: some of you may remember me in Mafia games, using a program of my own devising to analyze voting records. To do that, I had a standard format in which I'd write up the game state; a program would read in the state in that format and generate output; then that output was inserted into another program as source code, and that was compiled. At least my code-generating program also generated corresponding comments.

A while back, I managed a Birding Festival (one of the US top 3). After a couple of years of doing everything by hand, I set up a database for all of the events and attendees. I also made an Excel spreadsheet for the schedule and a Visual Basic operator interface. A programmer friend taught me enough SQL to do specific data retrieval. It was a kluge, but it worked. It helped that I had a crusty volunteer Registrar.

The end result was that the Board of Directors could design the layout of the festival by just marking up the spreadsheet. The resulting file was fed into the program along with all of the attendee data. The computer then printed out the catalog of events, response letters for attendees, event tickets, attendee badges with their event schedule printed on the back, and statistics for the city council. It really was pretty neat.

So, at the annual meeting, the President of the sponsoring organization announced that we had a new source of income: we were going to market our registration software. GASP! I went to the board and tried to explain the stuff we didn't know, like disclaimers and distribution-licensing requirements, and the stuff we did know, like bugs and interoperability. Oh yeah, and a manual. Our manual was the Registrar. It wasn't a high-tech group. They were more concerned about the potential price than the fact that we didn't even have a box to put it in. Fortunately, they were not known for completing projects.

Eventually I moved on. Congress began de-funding the Wildlife Refuges which eliminated the resident volunteer program and with it the crusty Registrar.

I worked a similar nightmare back in the late '70s with a program that did finite element analysis. It had been written in FORTRAN IV for CDC machines by a group of Structural Engineering professors. Who then passed it to other universities, who tweaked it and passed it on and … eventually it fell in my lap at my Uni. By which time it was at least 10 years old.

It was very memory hungry for the era and had groups of massive multi-dimensional arrays that were EQUIVALENCEd (think union in C for the non-FORTRANners here) to other groups of dissimilar multi-dimensional arrays. Most functions’ calling sigs included 8-10 parameters. Except for the ones that had zero parameters & simply partied on the gigantic COMMON (C global) EQUIVALENCed array dust cloud.

My job: Convert this from CDC FORTRAN to IBM 370 FORTRAN. The CDC machine had 36-bit integers and 36-bit floats. IBM was 32-bit ints and 64-bit floats. Of course the various EQUIVALENCE areas in CDC freely mixed float arrays with int arrays.
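In C terms, the hazard looks something like this - a toy sketch with modern type sizes, not the original code:

#include <stdio.h>

/* Rough C analogue of FORTRAN EQUIVALENCE: two arrays of different
   types overlaid on the same storage. */
union overlay {
    int    ivals[4];   /* 4 x 32-bit ints    = 16 bytes */
    double dvals[2];   /* 2 x 64-bit doubles = 16 bytes */
};

int main(void) {
    union overlay u;
    u.dvals[0] = 3.14;

    /* On the CDC machine, ints and floats were the same word size,
       so overlaid arrays lined up element-for-element.  With 32-bit
       ints and 64-bit doubles, ivals[0] and ivals[1] are the two
       halves of dvals[0], and the indexing correspondence the old
       code relied on is silently broken. */
    printf("ivals[0] = %d, ivals[1] = %d\n", u.ivals[0], u.ivals[1]);
    return 0;
}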

Oh yeah, did I mention that we received the source code as 8 boxes of punch cards with substantially zero comments? At least we did not have OCR’ed printouts. Gaah!!

In the early '90s I inherited a handful of C programs written by a guy who hated C but loved PASCAL, so he created his own file, "name_of_jerk_omitted.h", which he included in every one of his programs; it pretty much did character swaps so that his C code looked like PASCAL.
Stuff like
#define BEGIN {
#define END }
He also named his procedures and functions using Pascal casing.
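For those who never had the pleasure, the header might have looked something like this. BEGIN and END are from the story; the rest are my guesses at the usual suspects in this genre:

#include <stdio.h>

/* Hypothetical reconstruction of the Pascal-flavored header. */
#define BEGIN     {
#define END       }
#define IF        if (
#define THEN      )
#define PROCEDURE void
#define WRITELN(s) puts(s)

PROCEDURE GreetUser(void)
BEGIN
    IF 1 THEN
        WRITELN("Hello from almost-Pascal");
END

int main(void)
BEGIN
    GreetUser();
    return 0;
END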

After he quit, we officially forbade any type of language pseudo-conversion in our coding standards.

And of course, XKCD is relevant again.

Because that was before the internet, he probably did create his own file. But it wasn't because he was a jerk. That was actually standard C, as discussed in the early C literature by well-known and respected C figures. It was one of the features of C - that it could be customized by the macro preprocessor to be Pascal-like, much the same way that C++ was first implemented as a preprocessor on top of C.

Well, even if it is a perfectly acceptable way to use the language, you gotta admit, that’s pretty evil behavior in an environment where not everyone else had necessarily used Pascal. I had a co-worker who used a Dvorak keyboard map, and would set that map on a console attached to the machine and not revert it. Perfectly valid way to behave, except for not returning the system to a place his co-workers would expect (and pretty much need) the system to be.

After having to deal with that nonsense one too many times in addition to the other crap he had left un-fixed or intentionally broken when he changed the keyboard map, I eventually threatened him with breaking his fingers if he didn’t set it back after he was done working (yeah, that me would be an HR nightmare today). Of course, that guy left a minefield of coding and system nightmares when he finally quit.

I mean, work the way you want to work, but you have a responsibility to adhere to some sort of standard that everyone else can follow. If it's not your own pet project, or a shop where they're hiring Pascal coders, you're a jerk. I mean, I think I'm a jerk for not following a coding standard. At least I didn't switch languages on ya.

I was a programmer for the Air Force back in the COBOL days. I worked at Gunter Air Force Station in Montgomery, Alabama, which at the time was a major computing center for the Air Force. There was a civilian employee who had worked in the office for years. There were rumors that he had a copy of every version of every one of the multiple programs we were responsible for, printed out and stored in his garage.
One time I was working on updates to one of the programs. I discovered that half of the code was branched around and could not be reached under any circumstances. So I deleted it. And this civilian went psycho. I had to get the colonel who ran the office to intercede and tell the civilian to knock it off. He hated me for the rest of my time working there.
I had also programmed at Air Force HQ at Ramstein Air Base, Germany, coding in FORTRAN. I had a friend in the office who decided that, because of the quirks of FORTRAN, he was able to write all of his code with no spaces. It was impossible to read. Fortunately, I wasn't responsible for updating anything, but I could only imagine what the next person who had to work on that code would do.

After FORTRAN, it was my introduction to COBOL that taught me you could insert spaces into FORTRAN. Because of the 'random quirks', the "fixed format" of FORTRAN wasn't really fixed: since whitespaceisignoredyoucan

put

     extra 

white space

in, to make your program clearer.

My memory's a little vague, but I named the procedures and indexes in such a way that I produced a COBOL line like this:
PERFORM MURDER BY STABBING UNTIL DEATH

I’m pretty sure if everything’s defined correctly that is an executable COBOL statement. I did it as a joke, so I assume it got changed later by someone more mature.

Disclaimer, I haven’t used COBOL since there was an 8-track in my car. So my memory may have this wrong.

You are not alone:
https://sites.google.com/site/johngsworks/home/poetry-in-cobol

Ooooh. That’s amazing!

Oh, that reminds me of a couple of pranks I pulled on my high school programming teacher. We were using GW-BASIC, where you'd LOAD a program file, and could then add lines by typing something starting with a line number, or LIST the program in order, or RUN it. Well, I was guessing that the teacher was grading our assignments by first running them, to make sure they did what they were supposed to do, and then listing them, to see how we did it. So I wrote a program that did whatever the assignment was, but then, instead of terminating, generated what looked like the standard command prompt. If the user then entered the RUN command again, it'd loop back to the beginning, and if they entered the LIST command, I programmed it to output some completely nonsensical fake "source code" (after which it THEN actually quit for real).

For another project, one of my classmates tried to put up a “subliminal message” that only lasted a frame or two, saying “This project deserves an A”, or somesuch. So in my program, I included a fake subliminal message. When you listed the program, it looked, from the source code, like it would display a message, but the command that did it wasn’t really a command, but a word-wrap of the previous long line (a comment).

Speaking of Pascal, 3 months back, my dissertation involved creating and implementing a Pascal-based language. I decided to do it by modifying Jensen and Wirth's portable Pascal compiler (see, we had open source in the late '70s too), which first involved getting it to compile itself on our Multics system. Before that, the Pascal code had to be translated to PL/1.
First, I found that they were addicted to one- or two-letter variable names; I spent two months documenting every variable in the compiler. Second, I found that the implementation of sets was machine-independent so long as you were using a 60-bit machine, which they were.
I should have known. I taught data structures using Wirth's book, and every variable in the code examples had a one-letter name.
Yeah, memory was more expensive back then - but not that much more expensive.