Does anyone still program in assembly language?

You can write in machine code (and I have, for school), but that’s a) enough to drive you nuts; and b) really just a translation of assembly.

Hardware designers think of assembly as a very high-level language, and with good reason. If you’re interested in this sort of stuff, there are probably a few computer architecture textbooks out there that are readable by an interested layperson.

Steve Wozniak hand-assembled the ROM firmware of the original Apple II personal computer (the one with Integer BASIC built-in, introduced in 1977). He didn’t resort to ones and zeros though; he had simply memorized the hexadecimal codes of the 6502 instruction set. I don’t think he did it this way out of special fondness for the 6502 processor, though that might have been the case, but rather because he didn’t have an assembler or source editor to use.

Those who cavil over the distinction between machine language and assembly language programming, though well-intentioned and technically correct, often ignore this point. If your so-called “machine language” code is in hexadecimal or octal, you’re still cheating. Computers don’t understand fancy-schmancy things like hexadecimal or ASCII-encoded digits. That’s just a crutch. Your hexadecimal has to be translated into real machine language, real ones and zeros, just like everything else.

At this point in the discussion I usually notice that I’m the one who’s caviling, and I slink away, muttering. This I shall now do.

Well, assembly was a high-level language for me when I was in grad school. I worked on microprogramming, which for CISC machines is the level below assembly. Microcode is a set of instructions that do basic operations: transferring data from one register to another, doing adds, stuff like that. Machine language instructions are subroutines of microcode; when an opcode is decoded, the machine branches off to the microroutine that handles the instruction. Many IBM 360s were microcoded, which is why they had instruction set compatibility up and down the line (so people like Enright3 could port to different 360 models without recoding; that was one of the biggest selling points for the 360). In fact, very early 360s had options to emulate older IBM machines like the 1401 through microcode.
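
To make that concrete, here is a toy C sketch of microcoded dispatch. It is mine, not anything from a real 360 (real microcode is control words, not C), but it shows the shape: decoding the opcode branches to a microroutine built out of basic register-transfer steps.

    /* Toy sketch of microcoded instruction dispatch (illustrative only). */
    #include <stdio.h>

    static int reg[4];  /* register file */

    /* "Microroutines": each machine instruction is implemented as a
       little subroutine of basic register-transfer operations. */
    static void micro_add(int dst, int src)  { reg[dst] = reg[dst] + reg[src]; }
    static void micro_move(int dst, int src) { reg[dst] = reg[src]; }

    int main(void) {
        /* machine program: {opcode, dst, src} */
        int program[][3] = { {1, 0, 1},    /* ADD r0,r1 */
                             {0, 2, 0} };  /* MOV r2,r0 */
        reg[0] = 2; reg[1] = 3;
        for (int pc = 0; pc < 2; pc++) {
            switch (program[pc][0]) {  /* opcode decode: branch to microroutine */
                case 0: micro_move(program[pc][1], program[pc][2]); break;
                case 1: micro_add(program[pc][1], program[pc][2]);  break;
            }
        }
        printf("r2 = %d\n", reg[2]);   /* prints r2 = 5 */
        return 0;
    }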

My dissertation, by the way, was a high level object oriented language (Pascal based) to allow you to write high level microcode optimized for one machine that could still be ported to another.
Now, back to assembler. The first computer I programmed did not have an assembler. You wrote the opcode directly (and the machine did not use ASCII, so the code for a B just happened to translate into an opcode for a Bring instruction). The big problem was that you had to branch to specific addresses, so I wrote an assembler for it (the first I ever used) that let you use labels.

The big difference between assembler and a compiled language is that one assembler statement translates into exactly one machine language statement, while one HLL statement usually translates into several. Assemblers give you convenient data formats, labels, and macro capabilities, but writing in 1s and 0s would not be any more efficient than writing in assembler.
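
Here is the one-to-one property in miniature. The comments show hypothetical x86-64-flavored compiler output (any real listing will differ), but the point stands: the one C statement becomes several instructions, and each assembler mnemonic stands for exactly one.

    #include <stdio.h>

    static int scale(int x) {
        return x * 4 + 1;   /* lea  eax, [rdi*4]   ; one instruction
                               add  eax, 1          ; one instruction
                               ret                  ; one instruction */
    }

    int main(void) {
        printf("%d\n", scale(10));   /* prints 41 */
        return 0;
    }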

BTW, as far as I know the world’s first assembler was written by my first advisor for the IAS machine, the one designed by von Neumann, sometime around 1950. von Neumann thought it was a waste of time. :slight_smile:

For RISC machines, like the Sun SPARC, assembler translates directly into the native code of the machine, one-to-one, so this is not correct. Even in microprogrammed machines the assembler is not translated into microcode but interpreted by it, so depending on the architectural level you’re targeting, assembler is still native. The crucial difference is that assembler is a direct mapping onto native instructions, with no optimization or other code munging by the translator. You are also dealing directly with processor resources, not variables mapped onto resources by the compiler.

I taught PDP-11 assembler for a couple of years as a TA. You know, you can write recursive assembly language programs; I did one in front of the class once. And when I was in college, the official assembly language class was taught on a PDP-1, the machine Spacewar was developed on.
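
Recursion in assembly is the same shape as anywhere else; what you do by hand is the stack discipline. Here is a C sketch of the shape (on the PDP-11, the JSR and RTS instructions and the SP register did the pushing and popping for you):

    #include <stdio.h>

    static unsigned fact(unsigned n) {
        if (n <= 1) return 1;      /* base case: stop pushing frames */
        return n * fact(n - 1);    /* each call pushes a return address
                                      and argument; each return pops */
    }

    int main(void) {
        printf("5! = %u\n", fact(5));   /* prints 5! = 120 */
        return 0;
    }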

I didn’t mean to imply that there was no reason for writing code for many platforms. I was just saying that in my experience I’ve had managers talk about the need for it when (in my opinion) there was no real need. I’ve (almost) always worked for companies that developed software for their own use, or that had control over the hardware (i.e., mainframes). I’ve never written “off-the-shelf” software, where something like that would be of more value.

I think you would be surprised how much it still gets used; it’s actually the core of a lot of systems. A couple of years back I was working at a LARGE insurance company. We were doing some work on the slick web front ends, and I got a new change request in, so I had to start investigating how to implement it. It wound up being almost like an archaeological dig. It went down from the new web front ends, to a C layer written in the ’80s, down to a COBOL layer written in the ’70s (this was actually the main logic-processing layer), and then down into the heart of it, which was an assembly layer written God knows when. When I got down to that layer I actually had to go find someone who could teach me how to program in assembly. I had never done anything in it before, and the only people I could find who did know were at least 30 years older than I was at the time (which was early 20s).

Moral of the story is that this code had been running in production for decades without issue and was the heartbeat of the whole system. As someone mentioned previously, a lot of the businesses that were early technology adopters (banks, insurance companies, airlines, government) still have this stuff running their production systems and probably will for a long while to come. The basic economic decision is that it works, so why change it? It would probably also be astronomically expensive to rebuild all the functionality in a “modern” language, both in the time and manpower to build it and in all the new bugs that would be introduced during the port and have to be fixed. That’s why IBM still makes good money selling big old mainframes to these companies as their old equipment dies and they need to put in “new” decades-old technology.

Speaking of Spacewar, I recall seeing a site put up to commemorate that. One of the original authors still had a listing around, so some current students took on the project, wrote a PDP-1 assembler, and wrote a PDP-1 emulator in Java to actually play the game. IIRC, they had to put in some timing loops to slow it down. Unfortunately, the Java app disappeared from the site, and I’ve lost the bookmark. Does anybody know if it’s still up on the Web somewhere?

Back on topic: I had an assembler class back in the late ’70s that used a machine that needed a boot loader toggled in whenever you managed to crash it. With the tight scheduling, that came close to a fistfight a few times!

I’ve written in machine code for old PDP-11/70s. These were known as “fat-finger” programs and were usually used for diagnostics. Fat-finger programs were a serious pain in the ass to write, but they allowed you to do things like set all of memory to 1s or 0s, fill it with a checkerboard pattern, even make a “marching” pattern through memory. Most technicians had loose-leaf binders full of these things.
The other machine I did this on was a Navy computer made by Sperry-Univac. Not much of a computer by today’s standards, but it had an incredible front panel, with every register displayed in NE-2 pushbuttons.
One guy actually programmed a Star Trek game in machine code. The display was an IBM Selectric typewriter that printed out status messages and the XYZ coordinates and headings of all the ships. Unfortunately, the game would beat a Selectric to death in 4 games or less. :stuck_out_tongue:

Regards
Testy

In the late 1980s, OOP was still a fairly new development (given that the oldest programmers always insist on using the most ancient techniques available to them) and almost certainly would not have made its way into the classroom.

So let’s say we’re talking about 15 years ago, or twelve doublings of the standard “speed doubles every 15 months” warhorse, which makes computers about 4,000 times faster now than they were then.

And you were programming in BASIC. There was no way to do anything remotely fast in BASIC on an early-1980s-vintage computer (which is what I’m assuming your school had, unless they were rich). I recall teaching myself BASIC on the old Atari 800. For certain video stuff, my book had me key in an assembly subroutine to handle parts of the display in a simple video game, since there was no way BASIC was going to manage it in anything like real time.

Now we have C++ and a host of other languages that blow BASIC and Pascal out of the water.

But programmers are die-hard fans of their first language. Fortran has been obsolete for decades, but you will still find people whose Fortran you are welcome to try to pry out of their cold, dead hands. Same for assembler. You can’t tell a hardcore computer engineer that there’s any better language.

Some compilers have been known to translate the source code to assembly and pass that code to an assembler, though I don’t know how common a practice it is now. The original C++ compiler, cfront, was designed to generate C code, which was passed to a C compiler.

What you need to realize is that assembler is simply a chip-specific language that gives short mnemonic codes to the actual base-level, numerically coded operations the chip is capable of. To program in it, you need to be aware not only of what your chip can do, but of how what you are trying to do breaks down into a list of these operations.

Higher-level languages put groups of these operations together into single-line commands, which makes writing programs a lot easier, but unless you are intimately familiar with the compiler, you don’t really know, in a high-level language, whether you are performing the task as efficiently as possible.
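
And “mnemonic codes for numeric operations” means just that: a table. Here is a C toy using a few real 6502 encodings (the same numbers Wozniak had memorized); the mini-assembler around them is invented for illustration.

    #include <stdio.h>
    #include <string.h>

    /* A few real 6502 instruction encodings. */
    static const struct { const char *mnemonic; unsigned char opcode; } table[] = {
        { "LDA#", 0xA9 },  /* load accumulator, immediate */
        { "STA",  0x8D },  /* store accumulator, absolute */
        { "JMP",  0x4C },  /* jump, absolute */
        { "RTS",  0x60 },  /* return from subroutine */
    };

    int main(void) {
        const char *line = "LDA#";   /* one "source line" to hand-assemble */
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(line, table[i].mnemonic) == 0)
                printf("%s -> $%02X\n", line, table[i].opcode);  /* LDA# -> $A9 */
        return 0;
    }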

Using assembly is one of those “limits” phenomena. If you are up against the limits of your technology, like a very old system, or a very new system, you find that writing your code close to the wires has benefits. You can be sure that your code doesn’t include unnecessary routines, because you don’t write them in.

Your C++ compiler might routinely include code you will never use. You might make specific choices to exploit your particular equipment that a general-purpose compiler would never make. If you have extreme constraints on memory, speed, or program size, that can be very important. Of course you could write your own project-specific compiler, but including assembler routines is likely to be a bit more time-effective. (Assuming someone actually knows the appropriate assembly language for your equipment.)

Constraints on memory, or needed speed of execution, are seldom critical in mainstream programs. Multiple-gigahertz clock speeds and multiple-gigabyte memory banks give you a bit more elbow room than a Trash-80 had. Portability (mentioned often above) is pretty much not a concern when you write a dedicated controller for an orbital Mind Control Laser, so you might well have to do at least some assembly programming. Other than Steve Gibson, no one writes assembler unless they pretty much have to.

Or, you might do it so you can casually mention register instructions in conversation among your geek cohorts. It’s kind of like talking about toggling the bootstrap sequence into the front panel, from memory. (In octal, of course.)

Tris

“It is unbecoming for young men to utter maxims.” ~ Aristotle ~

FORTRAN is not obsolete! It is still an evolving language (FORTRAN 77, Fortran 90, Fortran 95, Fortran 2003) and is often the best choice for numerical programming.

Most UNIX compilers generate assembler, which is assembled by as and linked by ld. Even compilers that directly generate object code usually have an option to produce assembler.
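
You can watch this happen yourself. A hedged sketch follows (the exact flags and file names vary by system, but these are the traditional ones):

    /* hello.c -- handy for peeking at the compiler's assembler output.
       Typical UNIX flow:
           cc -S hello.c          writes hello.s, human-readable assembler
           as hello.s -o hello.o  assembles it into object code
           cc hello.o -o hello    the driver invokes ld with the C runtime */
    #include <stdio.h>

    int main(void) {
        printf("hello, world\n");
        return 0;
    }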

Assembler is still useful for microcontrollers and other applications with critical timing requirements. The Atari 2600 was a good example of this. The processor was responsible for generating all of the video timing. You had to count the clock cycles used by every instruction. There are still many applications that require that level of precision and control.
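
The arithmetic you lived by looked something like the sketch below. The cycle counts are real 6502 figures, and an NTSC 2600 scanline really does give the processor 76 cycles, but the little “kernel” here is invented for illustration.

    #include <stdio.h>

    int main(void) {
        /* real 6502 cycle costs for a made-up fragment of a display kernel */
        const struct { const char *insn; int cycles; } kernel[] = {
            { "LDA #imm",    2 },
            { "STA abs",     4 },
            { "DEX",         2 },
            { "BNE (taken)", 3 },
        };
        int used = 0;
        for (int i = 0; i < 4; i++)
            used += kernel[i].cycles;
        printf("%d of 76 cycles used this scanline\n", used);  /* 11 of 76 */
        return 0;
    }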

I knew that would get someone into a high dudgeon. :smiley: Thanks for taking the bait!

I made the mistake of saying that to a slightly drunk computer science major at a wine party once…

I recently completed a homework assignment for a 400-level class that required locating a FORTRAN Runge-Kutta solver to numerically solve a system of differential equations. It was perfect for this application, and most students in the class seemed to prefer customizing the FORTRAN code to attempting to solve the problem in Maple.

OOP predated C++, you know. Simula 67 was the first realistic OOP language. I took a seminar that involved designing an OOP language around 1975, and the language I designed for my dissertation, finished in 1980, used object-oriented principles to make porting feasible. I believe Wirth’s Modula had some OOP principles as well. So by the late ’80s, OOP was a 20-year-old overnight sensation.

From The Jargon File:

slop:

  1. The percentage of “extra” code generated by a compiler over
    the size of equivalent assembly code produced by
    hand-hacking; i.e. the space (or maybe time) you lose because
    you didn’t do it yourself. This number is often used as a
    measure of the quality of a compiler; slop below 5% is very
    good, and 10% is usually acceptable. Modern compilers,
    especially on RISCs, may actually have negative slop; that
    is, they may generate better code than humans. This is one of
    the reasons assembler programming is becoming less common.

Steve Gibson of Gibson Research Corp. (http://grc.com/) programs exclusively in machine language. His programs run like wildfire.

The guy whose office is next to mine still programs in assembly language (ca.-1990s embedded signal processors for radar applications). His experience is in great demand because there are still so many radars out there that fly with these processors. When I first started, we programmed in JOVIAL (sorta like Pascal, maybe?) but wrote patches in machine language.

Yep. Go into your code, jump out to some patch space you left in, add some more code, and branch back. I haven’t done this in almost 20 years, but I recall MIL-STD-1750A had opcodes such as:
74xx (jump forward unconditionally by xx)
7Axx (jump xx if greater than; you do this just after a compare instruction)
806a nnnn (load direct register a with contents of nnnn)
906a nnnn (store direct register a into nnnn)
and about two dozen others that comprised the bulk of the instructions we cared about. The “extreme” programmers would get to the complexity of defining subroutines (which had to push and pop the stack).

This was because the radar systems we worked on ran code out of EEPROM, and building the software was a 1-1/2 day job, whereas writing a 12-line patch took about a half hour.

Boy I’m glad that’s over with, but I sure learned a lot about processors in those years. It’s left me with the ability to generally fake my way around while looking at assembly listings from other compilers, which is invaluable when optimizing for speed (which is still done for “hard real time” embedded programming).

I still know plenty of programmers who pooh-pooh object-oriented principles. Old habits die hard in this biz. But there’s no denying C++’s impact on the field, and it’s largely due to its OO aspects.

C++ hit it big because a lot of people knew C and C++ is a modified version of C with some OO features added. It isn’t fully compatible with C, of course, and any C programmer worth his semicolon can write a C program that is perfectly standards-compliant and will not compile if treated as C++.
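
For instance, here is a minimal sketch of the classic trap: perfectly legal C that a C++ compiler rejects twice over.

    #include <stdlib.h>

    int main(void) {
        int *class = malloc(sizeof *class);  /* "class" is a fine C identifier
                                                but a C++ keyword, and C++ also
                                                rejects the implicit conversion
                                                from void * to int * */
        free(class);
        return 0;
    }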

C++ is by no means an OO language the way Smalltalk, for example, is: C++ supports multiple paradigms, including generic programming (via templates) and standard procedural programming, with OO thrown in alongside the rest.

And OO isn’t a solution to all problems, or even most of them. In the words of Fred Brooks, there is no silver bullet.