My first job out of college, in 1976, was for Monroe Calculator. They had a “programmable calculator” that looked like a desktop adding machine, about the size of a typewriter. It had 4K of memory and the ability to run programs written in machine code (three-digit octal: 220/xxx was “jump to location xxx”, and 057 was “return”). In that 4K we were able to write programs that determined the premium for auto insurance, based on your input parameters. The only input was the numeric keypad; the output was the machine’s paper tape.
(Octal converts easily to hex by way of binary: a byte with all the bits turned on is 377 in octal, FF in hex.)
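A quick C check of that correspondence, purely as illustration:

    #include <stdio.h>

    int main(void) {
        int all_on = 0377;   /* octal literal: binary 1111 1111 */
        printf("octal %o = hex %X = decimal %d\n", all_on, all_on, all_on);
        /* prints: octal 377 = hex FF = decimal 255 */
        return 0;
    }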
The DEC PDP-8 minicomputer came with a standard 4K of 12-bit words of core memory. In that 4K, DEC was able to shoehorn a remarkably full-featured language interpreter called FOCAL (similar to BASIC), with something like 1K or so left over for the text of the actual program. If nothing else, FOCAL was an absolute marvel of memory-efficient programming, such as we never see today when memory is so cheap that it’s wasted by the gigabyte. FOCAL was so memory-optimized that in some cases the execution sequence fell through a table of low-order values, effectively executing a series of harmless AND instructions (instruction code 0), in order to save one JMP instruction.
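To make the fall-through trick concrete, here is a small C sketch (PDP-8 details from memory, so treat it as approximate): the top three bits of a 12-bit word are the opcode, and opcode 0 is AND, so any table entry below 01000 octal decodes as a harmless AND if execution happens to fall into it.

    #include <stdio.h>

    /* PDP-8 opcodes, indexed by the top three bits of a 12-bit word. */
    static const char *opname[8] = {
        "AND", "TAD", "ISZ", "DCA", "JMS", "JMP", "IOT", "OPR"
    };

    int main(void) {
        /* A hypothetical table of low-order constants sitting in memory. */
        unsigned table[] = { 0017, 0045, 0231, 0370 };
        for (int i = 0; i < 4; i++) {
            unsigned opcode = (table[i] >> 9) & 07;   /* top 3 of 12 bits */
            printf("%04o decodes as %s\n", table[i], opname[opcode]);
        }
        /* All four print as AND: falling through the table costs a few
           harmless ANDs instead of one JMP around it. */
        return 0;
    }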
BCPL was a predecessor to C: from BCPL came the B language of Ken Thompson and Dennis Ritchie, and the logical next language was Dennis Ritchie’s C.
DEC had Bliss, which again looked like an assembler masquerading as a high-level language. Most of the OS code was in Bliss. The quirkiest feature was that the dot “.” operator was the dereference operator, and all symbolic names were treated as addresses. So the contents of a variable were written as .name; just writing name got you the address, not the contents. A peculiarly excellent way of introducing hard-to-see bugs.
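A loose C analogy (illustrative C, not real Bliss syntax): imagine every name denotes an address, with the dot as the fetch.

    #include <stdio.h>

    int main(void) {
        int x_cell = 42, y_cell = 0;
        int *x = &x_cell;    /* plain "x" in Bliss: the address        */
        int *y = &y_cell;

        *y = *x + 1;         /* Bliss:  y = .x + 1;  fetch, add, store */
        /* Forgetting the dot (Bliss "y = x + 1") would store the address
           of x plus one -- exactly the hard-to-see bug described above. */
        printf("%d\n", *y);  /* prints 43 */
        return 0;
    }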
C as the OS implementation language could be considered the pivotal reason for Unix’s success. That, plus access to a C compiler. Once you had a C compiler, you wrote a tiny bit of assembly code - usually little more than the first-level boot, the process dispatcher and context-switch code, plus the core exception handling - and almost everything else was ready. It was just those few privileged instructions needed to effect a context switch and enter kernel mode that you didn’t have access to in C.
Device drivers could be written in C, and you had all the infrastructure. Virtual memory management was the next big step, and most of that was in C as well.
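A minimal sketch of that split, with hypothetical names and no particular historical kernel in mind: the few privileged operations are stubbed out where the assembly would go, and the dispatcher itself is ordinary C.

    /* Saved machine state; the real layout is machine-dependent. */
    struct context { unsigned long regs[16]; };

    /* In a real kernel these few routines are assembly: they touch the stack
       pointer, processor mode bits, and the interrupt system, which C cannot
       express.  Empty stubs here so the sketch compiles. */
    static void cpu_context_switch(struct context *save, struct context *restore)
    {
        (void)save; (void)restore;
    }
    static void cpu_wait_for_interrupt(void) { }

    struct task { struct context ctx; int runnable; };

    /* The dispatcher is plain C: pick the next runnable task and ask the
       assembly routine to swap register state. */
    void dispatch(struct task *current, struct task *tasks, int ntasks)
    {
        for (int i = 0; i < ntasks; i++) {
            if (tasks[i].runnable && &tasks[i] != current) {
                cpu_context_switch(&current->ctx, &tasks[i].ctx);
                return;
            }
        }
        cpu_wait_for_interrupt();   /* nothing else to run */
    }

    int main(void)
    {
        struct task tasks[2] = { { {{0}}, 1 }, { {{0}}, 1 } };
        dispatch(&tasks[0], tasks, 2);
        return 0;
    }

The drivers, file system, and memory management hang off the same kind of plain C, calling into those few assembly entry points.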
Sysgen wasn’t much easier on S/360 and S/370, although it was much more a matter of module selection and linking, plus burning in tables of constants, than of assembling core OS functionality from source code.
The HP3000 mini I dealt with extensively in the late 1970s was interesting. It had no real assembler; the lowest-level language was called SPL/3000 (Systems Programming Language), which resembled C - except where it didn’t.
The machine itself was a semi-RISC, stack-oriented, registerless machine, and SPL compiled very closely to machine language. The OS (MPE/3000) was written in SPL. As with C, there was provision for adding inline assembler where needed for the really hardware-oriented or high-performance stuff. Very little of the OS, DBMS, or file system drivers was inline assembler; most of it was plain SPL.
For the time it had perfectly decent performance; the OS being written in a higher level language certainly wasn’t a performance bottleneck.
The Burroughs 5000 machines had no assembler; the lowest-level code was written in ALGOL. The instruction set is said to have been designed for the implementation of high-level languages.
Well, it was a stack-oriented ISA that naturally lent itself to Algol-style languages, much like the P-code virtual machine used to implement Pascal systems for a time. An interesting thing about the Burroughs machine was that it wasn’t possible to write machine code in the course of ordinary operation. Only a privileged executable - the compiler - was able to anoint a generated bunch of bits as something that could be executed. Any other data simply could not be executed. That is a level of security by design we still have not recaptured in modern systems. But it underlines how, indeed, the machine was designed from the ground up around a compiled high-level language.
The idea of a high-level-language-oriented ISA versus a simple one goes a long way back. The idea that it makes the compiler’s job easier made sense once, but that was a long time ago. The disaster that was the iAPX-432 probably still haunts Intel, although the Itanic probably rates up there too. There is no easy answer. Nowadays there is so much silicon at the disposal of the designer that the tradeoffs in ISA design are really complex. We used to teach that a complex ISA caused you grief in instruction decode, and that slowed you down. Now there is some reason to suggest that pressure on the instruction cache matters enough that a more complex and compact ISA might win out, especially as processors can throw a lot of resources at decode. Compilers can also simply not emit slow instructions, even if they are available. Intermediate virtual-machine code generation probably helps a lot here. There are still times when you need to go very deep to wring the last bit of performance out of code.
These CPUs are designed to provide direct hardware support for programs written in high level languages…
(8086_family_Users_Manual)
That’s the call and return semantics, the string and stack handling, memory access for structured records and so on.
Also, though it isn’t stated explicitly like that, they were designed to provide operating system support: segmentation and the jmp instructions support relocation and re-entrance, and the ‘software interrupt’ feature supports a relocatable, dynamically loaded OS API that can be ‘loaded from disk’.
Regarding some of the earlier discussion in this thread:
The instruction set can be viewed as existing at two levels: the assembly level and the machine level. To the assembly language programmer the 8086 and 80186 appear to have about 100 instructions. One MOV (move) instruction, for example, transfers a byte or a word from a register or a memory location or an immediate value to either a register or a memory location. The CPUs, however, recognize 28 different MOV machine instructions (…). The ASM-86 assembler translates the assembly-level instruction written by a programmer into the machine-level instructions that are actually executed by the CPU. Compilers such as PL/M-86 translate high-level language statements directly into machine-level instructions.
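As a rough illustration of that point (these encodings are written out by hand, so double-check before relying on them), here are a few of the machine-level byte sequences hiding behind the single assembly-level mnemonic MOV:

    #include <stdio.h>

    int main(void) {
        struct { const char *text; unsigned char bytes[4]; int len; } movs[] = {
            { "mov ax, bx      ", { 0x8B, 0xC3 },       2 },  /* register <- register */
            { "mov ax, [bx]    ", { 0x8B, 0x07 },       2 },  /* register <- memory   */
            { "mov [bx], ax    ", { 0x89, 0x07 },       2 },  /* memory <- register   */
            { "mov al, 5       ", { 0xB0, 0x05 },       2 },  /* AL <- immediate      */
            { "mov ax, [0x1234]", { 0xA1, 0x34, 0x12 }, 3 },  /* AX <- direct address */
        };
        for (int i = 0; i < 5; i++) {
            printf("%s :", movs[i].text);
            for (int j = 0; j < movs[i].len; j++)
                printf(" %02X", movs[i].bytes[j]);
            printf("\n");
        }
        return 0;
    }

Even register-to-register MOV has two legal encodings (8B /r and 89 /r), so it is the assembler or compiler that decides which machine instruction you actually get.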
Was that the D machine? Burroughs had a big push for high level language machines using microcode. I never used one of them, but I met lots of Burroughs people at Microprogramming Workshops.
High level language machines were quite the thing for a while, but they were never successful.
Not quite the same thing, but the reason Larry Ellison bought Sun was that he had the idea that the Sparc could be turned into a processor with hardware database support. That didn’t work either - except that it kept me employed just long enough to retire.
The 8086 had instructions added that were useful in the support of HLLs, but that is different from the Burroughs approach where the instruction set was specifically 100% targeted to supporting Algol, say. I think the microprogrammed Forth implementations were like this also.
I don’t really know. I met a lot of engineers from Burroughs who told me about their hardware, but except for one programmer who told me about using ALGOL, I had no exposure to their software platform. The architecture did concentrate on multi-processing capability and used some unique hardware configurations, and at some point they were diving into current-sensing logic. I did get to see a little bit of ALGOL on 360 machines, but there I mainly had to convert ALGOL code to FORTRAN and cope with ALGOL’s funny call-by-name parameter passing. Weirdly, years later, that turned out to be valuable experience.
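For anyone who hasn’t run into it, call-by-name means the argument expression is re-evaluated every time the parameter is used. A hedged sketch in C, emulating the thunks an ALGOL compiler would generate (this is the classic Jensen’s device, not anything from the post above):

    #include <stdio.h>

    static int i;                    /* the "name" shared by caller and callee */
    static double a[11] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

    static double term(void) { return a[i]; }   /* thunk for the expression a[i] */

    /* Plays the role of ALGOL's  real procedure sum(i, lo, hi, term)  where
       i and term are passed by name. */
    static double sum(int lo, int hi, double (*thunk)(void)) {
        double s = 0.0;
        for (i = lo; i <= hi; i++)   /* assigning to the by-name index  */
            s += thunk();            /* re-evaluates a[i] each time     */
        return s;
    }

    int main(void) {
        printf("%g\n", sum(1, 10, term));   /* prints 55: a[1] + ... + a[10] */
        return 0;
    }

It is easy to imagine how painful translating that into FORTRAN, which had nothing resembling thunks, must have been.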
All machines are different from each other, and the Burroughs approach was more different, since it was a stack-oriented machine – so the ‘instruction set designed for a high level language’ was more like what you might think of as a pocket-calculator language than an assembly language that looks like FORTRAN II.
Normal ‘high level languages’ like COBOL and FORTRAN were designed for the instruction set of normal machines, so the match between high-level-language and low-level-language was just as explicit as with the Burroughs machine, for all that (originally) the direction of design was from machine->HLL rather than HLL->machine. The Burroughs stack machine was special because it did not target FORTRAN and COBOL.
Back then there was a real divide in language semantics. FORTRAN didn’t support reentrant code. Hard to imagine now, but there was no need for a call stack; recursive algorithms implemented their own stack with an explicit array. COBOL was wrapped around the PIC declaration and character-oriented variables. So Algol-style languages really were very different under the hood. What we consider a modern ISA is a compendium of lots of ideas; even a simple RISC architecture contains features some would once have considered unnecessary. The CDC 6600 is a fabulous counterpoint to either the Burroughs or the 360/370 approach.
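That explicit-array trick still reads naturally today. Here is a small C sketch of the pattern (an iterative depth-first walk of a binary tree, standing in for what a FORTRAN programmer would have coded with arrays and computed GOTOs):

    #include <stdio.h>

    struct node { int value; struct node *left, *right; };

    #define MAXDEPTH 64

    void walk(struct node *root) {
        struct node *stack[MAXDEPTH];       /* the explicit array "stack"     */
        int top = 0;
        if (root) stack[top++] = root;
        while (top > 0) {
            struct node *n = stack[--top];  /* a pop stands in for the return */
            printf("%d\n", n->value);
            if (n->right) stack[top++] = n->right;
            if (n->left)  stack[top++] = n->left;
        }
    }

    int main(void) {
        struct node c = { 3, 0, 0 }, b = { 2, 0, 0 }, a = { 1, &b, &c };
        walk(&a);                           /* prints 1 2 3 */
        return 0;
    }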
It may be tangentially relevant that, when it comes to textbooks teaching the fundamentals of “serious programming”, the first editions of The Art of Computer Programming deliberately used a kind of 1960s-style architecture and assembly language, including mixed binary and decimal, punched-card I/O, etc., while in later editions it was deliberately replaced by a MIPS-style 64-bit RISC architecture including IEEE floating-point and so on.
Ah yes, what a great architecture. I read Thornton’s book on it in high school and it influenced me a lot. Maybe we could call it a problem-directed architecture, since it was meant for scientific computing. I gave one a cameo in an sf book I’m writing (set in 1966), and I got to teach Cyber assembly language after they phased out the PDP-11 assembly language class.
Did Fortran lack recursion due to instruction set issues, or just to keep the language simple - or because back in the mid-1950s people didn’t think about recursion much? Implementing recursion is much simpler on a machine with a stack, but it could be done without one, though painfully.
I suspect both. None of the early machines on which FORTRAN was first implemented had stack-oriented architectures AFAIK, which I would guess in turn was because recursion wasn’t much thought about back then. The concept of re-entrancy became important on the PDP-10 timesharing system where commonly used system programs were all re-entrant to save memory. Thus, there was only one copy of the pure code in memory for any number of users, in a shared area of memory called the high segment, while user-specific data and stack were maintained in the user’s memory space.
Perhaps it was a bit of both. I never used the Z80 stack pointer – it didn’t cross my horizon. FORTRAN was derived from an IBM 701 program – and the 701 didn’t have any index registers. But the IBM 704 was the first commercial FORTRAN machine – and also, I believe, the first commercial LISP machine – so it could be done if the language was designed that way. My idea is that there wasn’t enough memory to run recursive algorithms in FORTRAN – it just wasn’t useful. Even when I was using PASCAL and C, we avoided recursion except for demo exercises (“look! it does recursion!”) and compilers. Recursive algorithms always ran out of memory before reaching convergence …
Mine didn’t.
I agree that mostly recursion was for show-offs, but sometimes it made the algorithm much simpler. Like parsing, for instance.
My greatest teaching triumph was when, at the request of my class, I wrote a recursive factorial algorithm in PDP-11 assembly language on the blackboard during less than one class period.
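Parsing really is where recursion earns its keep. A minimal recursive-descent evaluator in C (illustrative only, and in C rather than PDP-11 assembly for brevity; the grammar handles just +, *, digits, and parentheses):

    #include <stdio.h>

    static const char *p;                  /* cursor into the input string */

    static int expr(void);                 /* expr -> term ('+' term)*     */

    static int factor(void) {              /* factor -> number | '(' expr ')' */
        if (*p == '(') { p++; int v = expr(); p++; /* skip ')' */ return v; }
        int v = 0;
        while (*p >= '0' && *p <= '9') v = v * 10 + (*p++ - '0');
        return v;
    }

    static int term(void) {                /* term -> factor ('*' factor)* */
        int v = factor();
        while (*p == '*') { p++; v *= factor(); }
        return v;
    }

    static int expr(void) {
        int v = term();
        while (*p == '+') { p++; v += term(); }
        return v;
    }

    int main(void) {
        p = "2*(3+4)+5";
        printf("%d\n", expr());            /* prints 19 */
        return 0;
    }

Each grammar rule becomes one short function that calls the others, and the call stack does the bookkeeping that an explicit array stack would otherwise have to.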
Another reason why FORTRAN was “primitive” compared to more advanced languages: The designers were deliberately keeping the language stripped down to fit the size and capabilities of the machines of the day.
The committee that designed FORTRAN and the committee that designed ALGOL included a lot of the same people. When designing FORTRAN, they already had more advanced structures in mind (like if-then-elseif-else blocks, arrays with an arbitrary number of dimensions and arbitrary subscript expressions, more elaborate for and case constructs, etc.). But they left a lot of that out because they knew the compilers couldn’t fit on a lot of the machines of the day.
OTOH, they designed ALGOL so machine-independently that, as designed, it could not run on ANY machine of the day. That part they left to the implementers. Thus, every implementation of ALGOL made different concessions to the machine being used and to the character set available. The result was that actual programs written in ALGOL were not easily portable.
Pascal made all the same mistakes as ALGOL and more.