Is there (was there ever?) serious programming being done in machine code?

I first learned assembly-language programming in a class on “programming techniques and data structures” that was conducted entirely in assembly code on a CDC-6400.

This included lessons on arrays, stacks, lists of various types, trees, and techniques for manipulating all those. The class also included instruction on writing recursive subroutines and co-routines, which require the programmer to build and manipulate his own stacks, local storage, return addresses, and so forth. (The machine put subroutine call return addresses in a specific place; the recursive subroutine had to begin by allocating stack space for local variables and moving the return address to that stack. Returning from a subroutine took several steps too.) And we programmers had to explicitly code every detail of that.

I’ve often wondered if modern high-level programming students really learn what-all has to happen behind the scenes to make this work. I think a lot of modern students just take it for granted: a function calls itself, and it just somehow works!
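For the curious, here is a rough sketch, in C rather than CDC assembly, of what “it just somehow works” hides. The function name and the factorial example are mine, not anything from that class; the point is only that every recursive call quietly becomes a push onto a stack, and every return a pop.

#include <stdio.h>

#define MAX_DEPTH 64

/* A recursive factorial, unrolled into the explicit bookkeeping the
 * compiler (or, on the 6400, the programmer) has to arrange: a stack
 * of saved arguments, pushed on the way "down" and popped on the way
 * back "up". */
long factorial_by_hand(int n)
{
    int  saved_n[MAX_DEPTH];   /* each pending "call" keeps its own n */
    int  top = 0;              /* our hand-rolled stack pointer       */
    long result = 1;

    while (n > 1) {            /* the "calling" phase                 */
        saved_n[top++] = n;    /* push this invocation's local        */
        n = n - 1;             /* and "recurse" with n - 1            */
    }

    while (top > 0)            /* the "returning" phase               */
        result *= saved_n[--top];

    return result;
}

int main(void)
{
    printf("10! = %ld\n", factorial_by_hand(10));   /* prints 3628800 */
    return 0;
}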

They don’t know. But not many need to know, either. Only a few years ago I had to explain localization of data in multi-threaded processes to programmers. They were working at the application level and had just been handed a high-level function that started new threads, with no other information on what that entailed. To them it looked no different from the existing functions that created new processes. The point of high-level languages is to remove these concerns from programmers, so very few of them ever need to look at the underlying mechanics.
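For anyone who hasn’t met the problem, the sketch below (C with pthreads; the counters are made up, not anything from that job) shows the distinction those application programmers were missing: a new process gets its own copy of everything, but a new thread shares the globals unless the data is explicitly made thread-local.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;               /* one copy, visible to every thread    */
_Thread_local int local_counter = 0;  /* a separate copy per thread (C11)     */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        shared_counter++;             /* races with the other thread          */
        local_counter++;              /* private to this thread               */
    }
    printf("local_counter = %d\n", local_counter);   /* always 1000           */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter); /* unpredictable: a data race */
    return 0;
}

Compile with -pthread. The only point is that the two counters behave differently, which is exactly the kind of thing a bare “start a new thread” wrapper function hides.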

FORTRAN was intended to be so simple that you would know exactly how much memory the code required after compilation. There was no provision for allocating a block of memory of unknown size for a stack.

And, to be sure, FORTRAN had no mechanism for allocating memory at run-time, neither automatically for local variables, nor explicitly by the programmer via functions like malloc() or similar.

I first learned FORTRAN-II using an IBM-1620. This machine had byte-addressable memory (sort of) that came in increments of 10K (or 20K?) bytes from a minimum of 20K to a maximum of 100K bytes. Those were the days when keeping your code and memory usage small was part of the art of programming.

I forgot to mention the declaration issue. There’s no keyword for it, so it doesn’t exist in standard FORTRAN.

Although I never used that version, it was a FORTRAN II book that introduced me to computer programming. It all seemed easy, except I had no idea why it was important to read a bunch of punched cards, move the data in the columns around to other columns, then punch a new set of cards with the same data in a different place. A little more context for their examples would have helped me a lot.

Sounds like that author was teaching how to write COBOL programs in FORTRAN. Remember,

Many, many years later, I took a COBOL class. The textbook was clearly, plainly, obviously written by a die-hard Pascal programmer, using Pascal coding styles and techniques that no Real COBOL Programmer would use. Besides, the book was just plain dumbed-down, as were most college textbooks of the era. I went to the library at the Big U and found an ancient COBOL textbook that was four times the size and taught the FULL language, and that’s what I studied that semester. (I did the same with a US History class, too.)

Stupid dumbed-down textbooks.

ETA: Cite: Ed Post, “Real Programmers Don’t Use Pascal”, circa 1983.

REQUIRED reading for all you young-uns in this business.

They were good examples for learning FORTRAN, just not for learning anything else about IT. That’s what I needed at such a young age, but I was left with the mystery of ‘why’ for years after that.

And of course, the venerable tome Numerical Recipes originally contained FORTRAN programs. Then they adapted it into Numerical Recipes in C, which contained FORTRAN programs written in C, complete with home-rolled functions to create unit-offset arrays and pass-by-reference all over the place.
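For anyone who hasn’t seen the trick, “unit-offset arrays” means C arrays that you index from 1 to n, FORTRAN-style. A minimal sketch of the idea (not the actual nrutil code from the book, which used pointer arithmetic instead):

#include <stdlib.h>

/* Allocate an array meant to be used as v[1] .. v[n]; element 0 is
 * simply wasted, which buys 1-based indexing. */
double *unit_vector(size_t n)
{
    return malloc((n + 1) * sizeof(double));
}

void free_unit_vector(double *v)
{
    free(v);
}

int main(void)
{
    double *v = unit_vector(3);
    v[1] = 1.0; v[2] = 2.0; v[3] = 3.0;   /* FORTRAN-style indices */
    free_unit_vector(v);
    return 0;
}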

Heh. The hardest part of getting the Jensen and Wirth Pascal compiler to compile itself on a Multics machine was their naive belief that all computers had 60-bit words, and that sets could be implemented assuming this.
But let’s remember that a big reason for Pascal was to teach programming during the structured programming revolution. The first assignment we gave out for our PDP-11 Assembler class was a short program in Pascal, which we graded the hell out of to make them break their bad programming habits. (This was 1974; they had plenty of bad habits.) If they didn’t use structured programming techniques to write in assembler, they’d never get anything to work. In any case, I had the job of teaching them Pascal in two lecture periods. It was not hard to do, and it seemed to work.

NOW you’ve done it! You got me started on what a miserable Pit-worthy excuse of a programming language abomination Pascal is, and always was!

We begin with an anecdote, and maybe that will be it for this post (in which case, more to come):

I first heard of Pascal circa 1970 at Berkeley. A certain friend, who didn’t have a car, invited me to go with him (IOW, invited me to drive him) to visit the gliderport in Fremont. On the way, he gushed extravagantly about this brand-fangled-new programming language he was learning: Pascal. He told me all the wonderful things Pascal had: Block structure! If/then/else blocks! Strict typing (at which point I’m beginning to get skeptical)!

And his favorite new thing: CASE statements! Woo-hoo! He told me all about them.

Having coded the equivalent, the drawn-out way, in FORTRAN umpteen times, I immediately understood the idea, and how cool an idea it was. So of course, my very first question was . . . wait for it . . .

 

Do these marvelous new-fangled CASE statements include a “none-of-the-above” clause?

So he thought about that for a while, and then admitted he didn’t think so. Then he thought about it a bit longer, and suggested how to accomplish the same: Simply set a boolean variable to false before entering the CASE statement. Then, in each case, set that to true. Then at the end, test that, and if it’s still false, then none of the cases happened.
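In C terms, the workaround he described looks like the sketch below (made-up values, and C rather than Pascal, since C’s switch actually has the clause I was asking about, the default: arm):

#include <stdio.h>

void classify(int code)
{
    int matched = 0;                        /* set the boolean false first...  */

    switch (code) {
    case 1:  puts("one");   matched = 1; break;
    case 2:  puts("two");   matched = 1; break;
    case 3:  puts("three"); matched = 1; break;
    /* default: puts("none of the above");     <- the clause I was asking about */
    }

    if (!matched)                           /* ...then test it afterwards      */
        puts("none of the above");
}

int main(void)
{
    classify(2);
    classify(42);      /* prints "none of the above" */
    return 0;
}

In C an unmatched value simply falls out the bottom of the switch, which is what makes the flag test meaningful at all.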

(It turns out, though I didn’t know it until much later, that even that doesn’t work. This circumstance is explicitly undefined in the Pascal Report, and in fact some (or all?) implementations make this a fatal error.)

My immediate reaction was: What kind of a brain-dead waste of BNF is a language like that? Fuck that shit.

In the next several years, I learned bits and pieces more about Pascal, and every detail I learned just entrenched more deeply my initial impression of this turkey of a programming language.

I accumulated quite a catalog of defects in the language. But, Great Minds Think Alike, and I eventually discovered that none other than Brian Kernighan had accumulated largely the same catalog of defects as I had, which he actually published.

Read it and weep, all you Pascal cult-heads!

Brian Kernighan: “Why Pascal is Not My Favorite Programming Language”. AT&T Bell Labs, 1981. (PDF)

Another REQUIRED reading for all you newer-bies in this field.

This fact, in addition to the total waste Wirth made of the CASE statement, has often led me to wonder whether Wirth ever actually wrote a large (or even medium-sized) computer program. Note my immediate observation that a CASE statement without an ELSE clause is useless – something I knew at once from extensive FORTRAN programming. And somehow Wirth missed this?

Besides the implicit limitation of “sets” to 60 elements because he happened to implement it on a CDC-6600, there were other anomalies having to do with the interaction of the language with the operating system, which Wirth apparently also didn’t understand.

Kernighan discusses in some detail the disaster of Pascal making array sizes part of the array’s type, including character strings, which were just arrays of char. Here’s another detail he didn’t mention:

The CDC operating system read cards from the card reader and stored them in an input queue for later batch execution. These files were slightly compressed: All trailing blank spaces on each line were discarded. All programs, including compilers, that read data from input files have to be able to deal with variable-length input lines. Pascal simply could NOT do this – neither the compiler at compile-time nor users’ programs at run-time.

Now it happens that FORTRAN conventions reserve columns 73-80 (or 73-90 on CDC systems) for a “comment” field, generally intended for sequence numbers, which the major source-management program of the day put there.

So students learned to put a character (usually a period) into column 80 of EVERY source card and input data card. This made all lines have a fixed length of 80 characters with no stripping of trailing blanks, which made the compiler happy.

The computer center sold boxes of IBM cards with column 80 pre-punched like this, just for the purpose.

You missed a biggie - the absence of a loop exit statement in Pascal. Far more of a problem than the missing case else. In fact, one of the questions on my PhD orals was designed to trap structured-programming fanatics into admitting that sometimes a goto in a language without loop exits makes sense. I had learned this long before, so I didn’t get trapped.
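The flavor of the thing, sketched in C (a made-up nested search, not the orals question itself): C at least has break, but break only exits one loop level, so the clean multi-level exit is still a goto; in a language with neither, you are reduced to flag variables tested on every pass.

#include <stdio.h>

/* Search a grid; the only tidy way out of both loops at once is a goto. */
int find(int grid[10][10], int target, int *row, int *col)
{
    for (int r = 0; r < 10; r++)
        for (int c = 0; c < 10; c++)
            if (grid[r][c] == target) {
                *row = r;
                *col = c;
                goto found;          /* exits both loops in one step */
            }
    return 0;                        /* not found */
found:
    return 1;
}

int main(void)
{
    int grid[10][10] = {0};
    int r, c;
    grid[4][7] = 99;
    if (find(grid, 99, &r, &c))
        printf("found at (%d, %d)\n", r, c);
    return 0;
}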
I don’t know if you ever read Wirth’s data structures book (I taught data structures from it), but all variable names in it are one or two characters long. Not that much of a problem for those small examples, but that was how he programmed. All variable names in the Pascal compiler were just a couple of characters long. It took me two months to go through it and document them all so I knew what they did when I modified the code.
I never used it after I got out of school, but I went to AT&T so you could guess what language I used in 1980. As Kernighan said, Pascal is a good teaching language.
Not that C is perfect. At one point I wanted to compute a call to one of an array of functions. (Or something like that.) I had no idea of the syntax to use, so I looked up several possibilities in the BNF. They were all legal. I finally looked at the source code of an internal emacs which did this and found something that worked.
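For what it’s worth, the construct in question looks like this in C (a made-up command table, not the code from that internal emacs):

#include <stdio.h>

static void cmd_open(void) { puts("open"); }
static void cmd_save(void) { puts("save"); }
static void cmd_quit(void) { puts("quit"); }

/* An array of pointers to functions taking no arguments and returning void. */
static void (*commands[])(void) = { cmd_open, cmd_save, cmd_quit };

int main(void)
{
    int i = 1;
    (*commands[i])();    /* the fully spelled-out call through the pointer     */
    commands[i]();       /* the shorthand; both are legal, which is half the
                            confusion */
    return 0;
}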

Wirth’s interest was in language design, so in context that means “teaching how to design computer languages”. So, for all its faults, Pascal was a well-designed language. In contrast, of course, to that other language, which ‘just growed’. Like Topsy.

My first couple of years at university used Pascal as a teaching language, but when I actually decided to do a Computer Science degree a couple of years later, teaching had shifted to Wirth’s successor to Pascal - Modula-2. Just reviewing the language, I find it has both an ELSE statement for the CASE construct, and an EXIT for breaking out of loops.

The push for Modula-2 at my university was driven by a senior lecturer who was on the committee for the Modula-2 standard library, and who wrote a teaching book for the language. I eventually married his daughter.

In the many, many years since then, in which I have written code in a very large number of languages (from assembly to C/C++ to my current hacking in Python) for a variety of reasons, I have never written anything else in Modula-2, and had only one ill-advised flirtation with Turbo Pascal that I regretted almost instantly (to be fair, I was porting a 3D polygon library originally written in FORTRAN). But I still write basically Pascal-like code, and find using any mechanism that breaks out of a loop without completing a pass deeply uncomfortable.

Similar here. I got taught CS in Pascal from Wirth’s book. We did the book cover to cover, including compiler construction. Later I did a lot of work in Modula-2. Along the way I did a lot in VAX Pascal. That was a very different language from base Pascal. It had all the proper additions, and provided full systems access: there were bindings for all the system calls and libraries, bit-twiddling capability, and so on. I wrote some serious stuff in that. I wrote my PhD code in C, and have never turned a line of C since. Sadly, C++ is hard to avoid. A language that should have been strangled at birth.
There is no one true language. They all suck in their own way.
The subject of teaching languages is always fraught. My uni went Fortran, Pascal, Ada, Java, C++. The last of these I had nothing to do with, and I think they had no idea what they were doing; I had left the university by then. I have no idea what they teach now. If I were teaching, I would pick C and Python, and teach them side by side.

In 1957 both the computer and software were still being defined.
Fortran was developed on the IBM 701 and introduced on the IBM 704. The 701 did not even have a jump-to-subroutine instruction. The need to support reentrant code lay far in the future. The subroutine jump, as introduced on the 704, was called a ‘trapped transfer’.

So, relevant to the OP, high-level languages and commercial computers were introduced simultaneously. The assembler was not an evolutionary step in the application of commercial computers. It was a software development tool, but, in 1957, I don’t believe we even had an assembler to support the 704 and 705 at Lockheed and JPL. It was all Fortran on the 704 and some Cobol/Algol on the 705.

The assembler came into widespread use with the introduction of microprocessors in the early ’70s. Suddenly cheap computing was widely available to applications engineers, but memory was minimal and software support was limited. The teletype and paper tape were the primary data-entry tools. An editor and an assembler were all you had.

Today MicroPython does a very good job of directly addressing the resources of the CPU. Microprocessors and high-level languages have both evolved to the point where the programmer/engineer can have direct control of the hardware. MicroPython has instructions to initialize the special function registers that configure the microprocessor hardware. Instantiating the UART configures the UART and sets all of the registers for the desired baud rate. The same goes for the timers. It’s a lot easier and less error-prone than assembly, and makes more sense than shoehorning the operation into a ‘C’ structure.

So, perhaps Python or MicroPython has brought us full circle where a HLL provides convenient direct access to the hardware. Just like an assembler.

As it happens, I was working with this last week:

function AuxInChar(Port: word): char;
inline (
$5A/          { POP DX      ; Port argument, pushed by the caller }
$B4/$02/      { MOV AH,02h  ; BIOS "receive character" function }
$CD/$14);     { INT 14h     ; serial-port services; the char comes back in AL }

It’s part of a test machine ~20 years old, but that code is clearly 1980s.

(WTF? Discord has a Preformatted text option that puts everything on one line?)

Holy crap…I was probably 2 years behind you. May have bumped into you in the basement of Evans Hall.

My first adviser wrote what I believe was the first-ever assembler, for the IAS machine, while he was a student of von Neumann’s at Princeton. Von Neumann was one of the few who had appointments at both Princeton and the IAS. However, I’ve found references giving the date as 1949, so I’m probably wrong about his being the first. Nonetheless, assemblers were very common by 1957.

I was going to mention Modula-2 as the successor with some problems in Pascal fixed. When Wirth came to our department for a lecture he was pushing it.

By the way, he started the lecture by saying his name could be pronounced two ways: Virth, call by name; Worth, call by value.

Surely he said NICK-laus Virt, call by name; Nickels Worth, call by value.