Is there (was there ever?) serious programming being done in machine code?

The computer processor never reads human-readable instructions, though. It “directly” interprets some kind of digital code (e.g., the punched cards of the Analytical Engine). Assembly language is a high(er) level language for humans. (And, as has been demonstrated, mnemonics such as MOV and NOP may correspond to more than one opcode.)

The “processor instruction set” seems a bit more abstract than “machine code”, since if I first decide I want my processor to be able to divide, add, do a fused multiply-add, etc., those will be included in the instruction set, and then I need to work out a proper encoding for all of those instructions, taking into account the microarchitecture. It seems you could have different machine codes and different implementations of the same instruction set.

In a word: Yes. The hex was absolutely machine code. Raw bytes eaten by the CPU to make it do your bidding.

I’m not sure I follow you completely. I thought I had made clear that I understood that the CPU is instructed with digital values (indeed, the point of my anecdote was that I only entered hex values into the computer), but apparently that wasn’t obvious.

I find it hard to follow the distinction between processor instruction set and machine code. You say that you “need to work out a proper encoding for all those instructions”. That sounds like programming subroutines, and subroutines are not part of the processor instruction set. The processor instruction set, to my understanding, is identical to the complete set of op codes that have been established by the manufacturer and cannot be modified. Maybe this is the source of the confusion: it appears we are working from different perspectives.

From the perspective of the programmer confronted with a ready CPU, the CPU is programmed with instructions that can be written down as op codes but which exactly correspond to hex values. At the time (40 years ago) assembly was practically equivalent to op codes, but arguably assembly would be the input of an assembler and would allow for variables and other constructs. NOP was, for the Z80 as I recall, a specific op code, so not at all something that could vary. Indeed it seems to serve no practical purpose in such an environment to use one mnemonic to refer to different opcodes. You really wanted to know whether you used NOP or a different opcode that didn’t change the CPU state (such as moving the content of a register to itself), as that might take a different number of clock cycles.
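To make concrete what I mean by op codes that exactly correspond to hex values, here is a minimal sketch (the C array is just a convenient way to write the bytes down; the opcodes themselves are from the documented Z80 set):

```c
/* A hand-assembled Z80 fragment, written out as the raw bytes you would
   type in on a hex keypad.  Each byte (or byte pair) is one instruction. */
unsigned char program[] = {
    0x3E, 0x05,  /* LD A,05h : load 5 into the accumulator  */
    0x3C,        /* INC A    : A becomes 6                  */
    0x00,        /* NOP      : the specific Z80 opcode 00h  */
    0x76         /* HALT     : stop the CPU                 */
};
```

The mnemonics in the comments add nothing the CPU ever sees; they are purely for the human reader.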

It is possible that assemblers have since started to take more liberties with the op code model and to embrace higher-level constructs. In my day and age those would simply be called macros (i.e., lists of op codes that would simply be substituted), but those would typically be user defined. Maybe the confusion derives from persons thinking in terms of contemporary assemblers versus persons remembering assemblers from the past?

Another possibility is that there is confusion about what is meant by ‘programming’: is it entering the actual instructions into the computer (which may be done by flipping switches or by writing input for an assembler or compiler), or is it writing the program (which may be done on paper, or on a computer in assembler or a high-level language)? If you take the latter interpretation, it is by definition impossible to program in machine code.

I think that depends on the processor probably. The human readable instruction set represents the design intent of the processor’s operation.

Of course that needs to be expressed ultimately in binary form so that the processor can actually work on it, and I know there are cases where undocumented machine instructions can be inferred as a purely binary addition of two known and documented instructions.

But if we’re to say the processor instruction set is not machine code, it sort of renders the OP’s question nonsensical - it becomes like asking “Does anyone write books using just the alphabet, not words?”

An instruction set for a particular type of CPU is obviously a set of instructions, each instruction usually expressed in human readable form, and each one based on a corresponding function of the CPU, which is activated in the CPU by a number retrieved from memory, i.e., machine code… Assembly language is a superset of the human readable instructions and other programming language constructs. The conversion between human readable instruction names and their corresponding machine code numbers is the most minimal and simple part of what an assembler does. And not even a necessary function of the assembler itself. As I mentioned previously, assemblers produce object code, not machine code. It is simply practical to have the assembler convert the static portion of an instruction to its machine code form.

The confusion might lie in that there’s a level lower than the “machine code”. At some level, the computer can’t do division, or multiplication, or even addition: All it can do is binary operations like NOT or AND. Anything useful like arithmetic needs to be implemented in some way from those, and the combination of logic gates to implement division is a lot more complicated than the one to implement addition.

(And of course there are levels even lower than that: Each logic gate must consist of multiple transistors, and then below that there’s the semiconductor level, and below that quantum mechanics. But at some point, you just have to say “no more”, and only worry about your own level.)
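To make the first point concrete, here is a minimal sketch in C of addition built from nothing but the bitwise operations the logic gates provide (the function name add_from_gates is just illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* Addition synthesized from AND, XOR and a shift -- the same trick a
   ripple-carry adder performs in hardware, one carry pass per loop. */
static uint8_t add_from_gates(uint8_t a, uint8_t b)
{
    while (b != 0) {
        uint8_t carry = (uint8_t)(a & b);  /* bit positions that carry */
        a = (uint8_t)(a ^ b);              /* sum ignoring the carries */
        b = (uint8_t)(carry << 1);         /* feed the carries back in */
    }
    return a;
}

int main(void)
{
    printf("%u\n", add_from_gates(23, 19));  /* prints 42 */
    return 0;
}
```

Division is the same idea taken much further, which is part of why many early CPUs simply left it out.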

Agreed. The point that seems to be in contention throughout this thread is whether it’s legitimate to refer to the instructions (in name form) as ‘machine code’, or whether that term only legitimately describes the numeric version of the instruction.
If the latter, then the OP’s question is rendered impotent - not very many people, if any, will have ever directly written numeric code without also considering the instruction meaning of those codes.

FORTH has to be in my top 3 things I ever learned about that blew me away with how clever humans can be. Another was the derivation of special relativity, but I have to admit I never got that one down to the point I could do it from scratch without notes. That’s how impressive I found it.

There’s an enlightenment moment that happens when a FORTH programmer learns that the mysterious complex “Inner Interpreter” is actually a single RTN instruction in machine code.

Having read about “Design Patterns” from the Gang of Four, and the whole revolution of Object Oriented Programming, I have to appreciate that too, and sadly FORTH and OOP aren’t really all that in sync.

But I confess to a daydream that I win a billion dollars, and pay to have a CPU or microcontroller developed that is native FORTH and lets people develop computers without any software other than what they write…

That is just an aspect of the threaded interpreter. The idea preceded FORTH and is neither required by nor specific to a FORTH implementation. It is a core part of the arsenal of tools used by low level implementers.
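For anyone who hasn’t seen one, here is a minimal sketch of the idea in C rather than any real FORTH (names like word_fn and demo are made up for the illustration): a compiled “word” is nothing but a list of code addresses, and the inner interpreter is only the loop that walks through them.

```c
#include <stdio.h>

typedef void (*word_fn)(void);

static int stack[16];
static int sp = 0;

/* A few primitive "words", each a tiny piece of native code. */
static void push_2(void)    { stack[sp++] = 2; }
static void push_3(void)    { stack[sp++] = 3; }
static void add(void)       { sp--; stack[sp - 1] += stack[sp]; }
static void print_top(void) { printf("%d\n", stack[--sp]); }

int main(void)
{
    /* The "threaded code" for a colon definition like  : DEMO 2 3 + . ;  */
    word_fn demo[] = { push_2, push_3, add, print_top, NULL };

    /* The entire inner interpreter: fetch the next address and call it. */
    for (word_fn *ip = demo; *ip != NULL; ip++)
        (*ip)();

    return 0;
}
```

On a real CPU that fetch-and-jump loop collapses to a couple of instructions, which is where the “single instruction” surprise comes from.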

In 1957 I was a Customer’s Engineer on the IBM 704s at Lockheed Burbank and JPL Pasadena. Every 704 had a maintenance crew and a resident operator. The 704 had a large control panel with lever switches for each bit in the 32 bit accumulator and a switch for each bit in the address register. All of our simple diagnostics were entered in octal code through the front panel. The Operator had object code listings of any program that was running on the system. If the computer looped or stopped, the operator would intervene and replace a dropped bit in memory or enter a patch to get around the loop. Outside of canned diagnostics, everything we did was in octal object code. Our attitude was that real programming was done in octal. Cobol and Fortran were for accountants and scientists.

Later in the early 70s I gave seminars to early adopters who were just beginning to replace springs, levers and gears with software. National Semiconductor had just introduced a conversational assembler. That’s an editor/assembler combined. A major innovation when you had to load utilities with punched paper tape through the reader on a Teletype terminal. Some in the audience aggressively challenged the need for the assembler because it was easier to debug code when you could read it directly in hex. They claimed A40E is as easy to remember as Load Accumulator Indirect through an address developed by adding 0E to register 4. By learning the instruction set in hex they could write and debug code rapidly. They did not get bogged down in syntax errors or misspelled labels.

I made the arguments for relocatable code, software maintenance and readability. The IMP16 had a sophisticated architecture for its time and produced complex code listings. The hecklers produced listings that supported their claims of creating up to 150 lines of bug free code a day. I was surprised that it was even an issue, let alone something anyone would defend.

So, was there a time when hex, object, machine code was seriously used? Yes, because it was a dramatically different environment. Early adopters of microprocessors were engineers solving real time control problems. Most of their time was spent on the timing of the I/O interface. The software was produced by an individual, not a group. Also the development tools were crude (perhaps archaic). So, for a brief period of time some serious programming was done in object, hex, machine language. Until the advent of the floppy disk and CRT terminals, machine code had advantages.

I still prefer assembly for real time microprocessor programming.

The first interesting virus was written in Assembly then manually compiled into machine code without an assembler. I have the original notes and source code.

It’s one and the same. Basically your assembly statements are just short labels for machine instructions, which makes them easier to read in an assembler.

From what I understand, the only things still written in assembly language are typically very low-level pieces of code that need to execute as fast as possible, so they’re hand-optimized in assembly. Stuff like CPU microcode, device drivers, etc…

Microcode was tricky and errors were costly. IMP16 had a microcode assembler and a development system that executed the microcoded instruction sets.

sbright33
The first interesting virus was written in Assembly then manually compiled into machine code without an assembler. I have the original notes and source code.

Yeah, someplace I have Cookie Monster for abuse of an OS9 (not Mac) network. In an introductory book. How they trusted beginners thirty years ago!

Don’t know exactly what your microcontroller requirements are, but this page lists several space instruments that used a rad-hardened RTX 2000 Forth microprocessor.

If you are referring to assembly as machine language and if “serious programming” means CPUs shipped, then in the 1980s most serious programming was done in assembly language. Most, if not all, appliances, games, Christmas ornaments, toys, disk drive controllers, medical and automotive systems were programmed in assembly. The major suppliers were Okidata, National Semiconductor and Motorola. The minimum order was 100,000 units. Outfits like Mattel, Sullivan and Milton Bradley would take 250,000 of a single code (these were ROM programmed single chip computers) and sometimes had half a dozen products running.

In terms of lines of code being executed daily, assembly-language-generated code was probably dominant through the mid 1980s. It was very serious.

In terms of instructions executed you are certainly correct, due to micro-coded processors. In terms of lines of code written, though, while the majority of systems software was written in assembler into the 1980s, the vast majority of application code was written in higher level languages and far exceeded the amount of system software written. Without micro-coded processors the number of instructions executed was still similar for systems and application code, because application code was calling the OS and its own libraries so often. This situation was changing all along the way as high level languages took over the development of system software because of a variety of factors.

Agreed. Single chip controllers were the dominant product but did not occupy the majority of programmers. But the question posed by the OP addressed “serious programming”. In terms of $$ or breadth of application, the single chip processors dominated the 1980s. Ordnance like the FU139 bomb fuze was definitely serious compared to spreadsheets or databases. In the 80s ‘C’ had not yet become dominant. Large scale software was not what it is today. The OP asks “was there a time?” and the answer is yes, there was a time, and it began to fade in the mid 80s.

Regarding ‘microcode’, single chip processors were ROM programmed but were not micro-coded by the end user. Microcode resides in the dedicated memory that interprets the instruction set. A programmer does not have access to the microcode. Only the chip designer can change it.

Bit slice machines like IMP and the 2900 could be micro-coded allowing the customer to create a proprietary instruction set. One customer microcoded IMP to provide a 100 bit wide data word. A couple of security outfits programmed them to execute the code of foreign computers. Computer Automation was one of the few that went into production. So, the single chips were ROM programmed for the end application, but the microcode was set by the chip designer and only used to define the instruction set.

Instruction set: The list of instructions the CPU accepts.

Assembly language: The language consisting of instructions from a specific CPU’s instruction set and the commands accepted by a specific assembler, including macros, ways to specify arguments to instructions, the ability to use labels, and the ability to control memory allocation and layout.

Machine code: Some numeric representation (in hexadecimal, octal, or, rarely, binary or decimal) of a region of memory the CPU is going to execute as one or more instructions. That’s it. Machine code is a memory dump in some format humans find possible to read, given that we can’t read charge patterns in DRAM or on disk drives.
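To put a number on that relationship, here is a small hedged sketch in C of what “converting an instruction name to its machine code number” amounts to: packing named fields into a word. The field layout follows the published MIPS32 I-type format; the helper name encode_itype is just made up for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a MIPS I-type instruction (opcode | rs | rt | 16-bit immediate)
   into the 32-bit word the CPU actually fetches. */
static uint32_t encode_itype(uint32_t opcode, uint32_t rs, uint32_t rt,
                             uint32_t imm16)
{
    return (opcode << 26) | (rs << 21) | (rt << 16) | (imm16 & 0xFFFFu);
}

int main(void)
{
    /* addiu $t0, $zero, 42  ->  opcode 0x09, rs = 0 ($zero), rt = 8 ($t0) */
    uint32_t word = encode_itype(0x09, 0, 8, 42);
    printf("0x%08X\n", (unsigned)word);   /* prints 0x2408002A */
    return 0;
}
```

The hex word is the machine code; “addiu $t0, $zero, 42” is the instruction-set (and assembly) view of the very same number.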

Assembly language might include pseudoinstructions, such as these for MIPS, which look like instructions in the assembly language listing but have no direct correspondence in the machine code. MIPS assemblers must turn them into one or more different instructions which do get turned into machine code.

So do people write programs in machine code? Sure, but if they do, they are thinking of the equivalent instructions in the instruction set. I doubt very many people write a hex or binary value in a piece of machine code without ever thinking ‘this means jump’ or ‘this means increment register A’.