Please explain this tech geek joke

I didn’t say code quality is irrelevant; I said that in large-scale systems, architecture is much more important. And I have to add that with today’s software systems becoming larger and more complex than ever, that principle is more true than ever before. Particularly with today’s software developers churning out applications that need 500 MB of memory to run, when similar things written years ago could produce similar or better results running in 1K words.

Some relevant commentary:

Among people who write about software development, there’s a growing consensus that our apps are getting larger, slower, and more broken, in an age when hardware should enable us to write apps that are faster, smaller, and more robust than ever. DOOM, which came out in 1993, can run on a pregnancy test and a hundred other unexpected devices; meanwhile, chat apps in 2022 use half a gigabyte of RAM (or more) while running in the background and sometimes lock up completely, even on high-end hardware.

… Apps are slower than they used to be. And exponentially larger without a corresponding increase in value. At the very least, there are optimization opportunities in almost any modern app. We could make them faster, probably by orders of magnitude. We could remove code. We could write tiny, purpose-built libraries. We could find new ways to compress assets.

Why don’t we?

Prokopov’s answer is “software engineers aren’t taking pride in their work.”

To be clear, the author of the article doesn’t agree with the simplistic “laziness” or “lack of pride” argument, but he does present some reasonable explanations for the decline in software quality. I’m going to throw in my own unconfirmed observation that the Microsoft culture of “good enough” and the development paradigms that they’ve promoted have a lot to do with it.

Sure, consider it done! :smiley:

Yes, here is a link:
http://www.jargon.net/jargonfile/t/TheStoryofMel.html
There are other stories/legends, like Seymour Cray writing the first bootloader for the CDC 7600, toggling it in from memory via the front-panel switches and having it work flawlessly on the first try.

I don’t know about DCL, but the fact that one company did stupid things doesn’t excuse another company also doing stupid things. If anything, it makes it worse, because it gave Microsoft an additional opportunity to realize that what they were doing was stupid.

Carrying on with geekihood, I could almost do that myself on a PDP-8, but I usually needed a little reminder that was taped to the console. The boot loader was a very simple program (called the “RIM” loader) whose only job was to load a more sophisticated and more efficient (“BIN”) loader. This was only necessary if the BIN loader in high memory had been overwritten.

Looking at the PDP-8 RIM loader makes me feel like a kid again! :smiley:

7756: RFC
7757: RSF
7760: JMP 7757
7761: RFC RRB
7762: CLL RTL
7763: RTL
7764: SPA
7765: JMP 7757
7766: RTL
7767: RSF
7770: JMP 7767
7771: RFC RRB
7772: SNL
7773: DCA I 7776
7774: DCA 7776
7775: JMP 7757
7776: 0

The numbers at the left are the locations in high memory; the mnemonics on the right were, of course, entered through the switches in binary (for instance, RFC was octal 6014).

Those who still have a vague recollection of the PDP-8 instruction set might be able to figure out what this is doing, but even those who don’t might be able to infer it …

RFC is [paper tape] Reader Flag Clear
RSF is Reader Skip on Flag (skip if the reader is ready) – (otherwise loop until it is)
RRB is Read Reader Buffer
CLL is Clear the accumulator Link
RTL is Rotate [contents of accumulator] Twice Left
SPA is Skip on Positive Accumulator (in twos complement arithmetic)

and so on. One can infer that what it does is assemble a memory address read from the paper tape, deposit that address in location 7776, then assemble a 12-bit word read from the tape and deposit it at the address held in 7776 (DCA I = Deposit and Clear Accumulator, Indirect), and repeat that process until the entire BIN loader has been loaded.
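For anyone who’d rather read it in a modern language, here is a rough Python sketch of the same decode logic. It assumes the standard RIM tape format as I remember it (channel 8 punched = leader/trailer, channel 7 punched on the first frame of a pair = address), so treat it as illustrative rather than gospel:

def rim_load(frames, memory):
    # `frames` is a sequence of 8-bit paper-tape frames; `memory` stands in
    # for 4K words of core. The real loader loops forever waiting on the
    # reader flag; this sketch just stops at the end of the tape.
    addr = 0                            # plays the role of location 7776
    i = 0
    while i < len(frames):
        first = frames[i]
        if first & 0o200:               # channel 8 set: leader/trailer, skip it
            i += 1
            continue
        second = frames[i + 1]
        word = ((first & 0o77) << 6) | (second & 0o77)   # assemble 12 bits
        if first & 0o100:               # channel 7 set: this pair is an address
            addr = word
        else:                           # otherwise deposit the word there
            memory[addr] = word
        i += 2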

Yes, kids, we called that “programming”. To us, a python was a dangerous snake, Java was an island in Indonesia, “C” was just the third letter of the alphabet. Real Men programmed in assembler or, at worst, in FORTRAN, and we picked our teeth with the splinters of giant trees we chopped down, before going home to our wimmin.

:wink:

Newb. In the real Olden Days, programmers didn’t go home to their wimmin. The programmer (singular) was a wimmin. You guys, meanwhile, wimped out and waited until after computers were invented before programming them.

Indeed, Ada Lovelace would have greatly enjoyed programming the PDP-8 in assembler and appreciated its elegant simplicity, a veritable paragon of how much can be done with so little.

In modern times, Grace Hopper has been much celebrated, but let’s face it, her major claim to fame was her central role in the creation of COBOL. She is also often credited with the invention of the “bug”, although it turns out that term had been used earlier in similar contexts. But Hopper popularized it, and the actual bug in question – a moth that had wedged itself into a relay in the Harvard Mark II electromechanical computer – was taped into the computer center’s logbook with the entry “first actual case of bug being found”, and is now in the Smithsonian.

On the subject of historical tech geekery, I just have to express my admiration for the architecture of the humble PDP-8 family of minicomputers. It illustrates a dedication to creativity and minimalism that is just unknown today.

For instance, the machine used just 12-bit words, in which 3 bits represented the instruction code, one bit was the indirection flag, and one bit was the page flag. That left only 7 bits for the address in a memory-reference instruction, so just 128 directly addressable words. Not 128 megabytes, kiddies, not 128K. Just 128. One hundred and twenty-eight.

It also meant that with only 3 bits available to designate the instruction, the machine could have only 8 instruction codes. And since one of them was dedicated to I/O, it was really only 7 instruction codes.

How could you possibly write anything useful in such a limited machine?

Well, you could. For one thing, the 128 directly addressable words were called a “page”, and you could have 32 of them in 4K of memory. And by indirect addressing, you could access all 4K from anywhere, and with the “page” bit, you could directly address another 128 words in page zero from whatever page you were in, so you could use that to keep common data. With additional hardware, you could actually have eight 4K memory banks – an awesome total of 32K – and switch between them using special I/O instructions, though still only able to address 128 words at a time plus page zero of that bank.
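If it helps to see that bit layout spelled out, here is a small Python sketch of how a memory-reference instruction (opcodes 0 through 5) resolves to an effective address. It ignores the extended-memory fields and the auto-index locations, and the helper name is mine, not DEC’s:

def effective_address(instr, pc, mem):
    opcode   = (instr >> 9) & 0o7       # 3-bit instruction code
    indirect = (instr >> 8) & 1         # indirection flag
    page     = (instr >> 7) & 1         # current-page flag
    offset   = instr & 0o177            # 7-bit address: one of 128 words
    # page bit set: same 128-word page as the PC; clear: page zero
    addr = ((pc & 0o7600) | offset) if page else offset
    if indirect:
        addr = mem[addr]                # take one more hop through memory
    return opcode, addr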

As for the minimal 7 instructions, they were well optimized. Instead of a “load accumulator” instruction, for instance, you had “TAD” – twos complement add – which served as a load when the accumulator was zero (typically right after clearing it) and as an “add” when it wasn’t.
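In other words, something like this toy Python rendering of TAD (a sketch of the semantics, not anything DEC ever shipped):

def tad(ac, link, operand):
    # Twos-complement add of a 12-bit operand into the accumulator.
    # With AC already zero (say, right after CLA) this is simply a load.
    total = ac + operand
    if total > 0o7777:                  # carry out of the top bit
        link ^= 1                       # complements the link
    return total & 0o7777, link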

But the most ingenious feature was instruction code “7”. This instruction ignored all the bit allocations for memory-reference instructions and instead used those bits as a clever combination of what were called “microprogrammed” instructions, which allowed the programmer to perform a wide variety of manipulations and tests on the accumulator register. These so-called microcoded instructions are what really made the machine viable.
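As a flavor of how those bits combined, here is a rough Python sketch of the Group 1 operate instructions (octal 7000 through 7377), using the bit values and event ordering as I remember them (clear first, then complement, then increment, then rotate), so double-check against a real handbook before trusting it. The skip tests like SPA and SNL that appear in the RIM listing above are Group 2, which I’ve left out.

def operate_group1(instr, ac, link):
    if instr & 0o200: ac = 0                    # CLA - clear accumulator
    if instr & 0o100: link = 0                  # CLL - clear link
    if instr & 0o040: ac ^= 0o7777              # CMA - complement accumulator
    if instr & 0o020: link ^= 1                 # CML - complement link
    if instr & 0o001:                           # IAC - increment accumulator
        ac += 1
        if ac > 0o7777:
            ac, link = 0, link ^ 1
    times = 2 if instr & 0o002 else 1           # "rotate twice" bit: RTR/RTL
    for _ in range(times):
        if instr & 0o010:                       # RAR - rotate AC and link right
            ac, link = (ac >> 1) | (link << 11), ac & 1
        if instr & 0o004:                       # RAL - rotate AC and link left
            ac, link = ((ac << 1) | link) & 0o7777, (ac >> 11) & 1
    return ac, link

So, for example, CLL RTL from the listing above is 7106: the CLL bit plus the rotate-left and rotate-twice bits.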

This thread reminded me of this one, “From notebook code to the actual program with C++”, where the OP wanted to start learning coding with C++. The majority of the recommendations were to start with something like Python instead of C++, which I agree with. I didn’t join the dog pile at the time but later thought of an analogy: if you want to learn to juggle, you should start with bean bags (Python) instead of chainsaws (C++).

I’m not sure that juggling chainsaws is the best analogy. Sure, you can make some pretty bad mistakes in C++… but not so bad that you permanently and irrevocably lose the ability to juggle anything. In a well-designed environment, the worst that can happen is that you crash the program you’re writing, and maybe the IDE with it.

But is there some abstract reason why it is stupid for Microsoft, or Digital, or IBM to use syntax different from that used by Unix tools? It certainly might be if Unix compatibility were a goal, but that wasn’t the case for MS-DOS or TOPS-20.

Being different from other OSes of the time was fine. But being different in a way that caused all sorts of other issues isn’t. Especially when they seem to have been trying to mimic the appearance, without mimicking the parts that were actually useful.

But it wasn’t different from other OSes at the time. It was different from Unix. But it was the same as DEC machines. And more directly, CP/M, which was itself influenced by DEC’s TOPS-10 OS.

Maybe in retrospect that was a bad choice. But if it was, it wasn’t exclusive to Microsoft.

MSDOS was meant, insofar as practical, as a UI clone of CP/M. I know substantially zero about the history of unix and its predecessors.

If CP/M had already diverged from unix, or vice versa, that wasn’t MS’s fault.

Unrelated to the above …

I am utterly mystified by folks’ comments about MS-DOS’s more. It absolutely, positively used the $STDIN to $STDOUT pipeline. It was not just a switch on dir and only dir.

Now there may be some confusion in that dir did, even from v1.0, support a /p (pause, or paginate) switch, which accomplishes substantially the same UI result as piping the unpaused output of dir into more using the pipe operator.

IOW these two commands:

>dir C:*.* | more
>dir C:*.* /p

accomplished the same result via totally different means. The more command could be used with any executable that wrote to $STDOUT. The type command was a frequent companion to more for viewing readmes, help files, and all sorts of documentation, or simply the blathering of authors publishing whatever on these new-fangled PC thingies.


In keeping with MS’s dedication to backwards compatibility, the dir in Win10 still supports /p.

>ver

Microsoft Windows [Version 10.0.19045.4598]
>dir /?
Displays a list of files and subdirectories in a directory.

DIR [drive:][path][filename] [/A[[:]attributes]] [/B] [/C] [/D] [/L] [/N]
  [/O[[:]sortorder]] [/P] [/Q] [/R] [/S] [/T[[:]timefield]] [/W] [/X] [/4]

  [drive:][path][filename]
              Specifies drive, directory, and/or files to list.

  /A          Displays files with specified attributes.
  attributes   D  Directories                R  Read-only files
               H  Hidden files               A  Files ready for archiving
               S  System files               I  Not content indexed files
               L  Reparse Points             O  Offline files
               -  Prefix meaning not
  /B          Uses bare format (no heading information or summary).
  /C          Display the thousand separator in file sizes.  This is the
              default.  Use /-C to disable display of separator.
  /D          Same as wide but files are list sorted by column.
  /L          Uses lowercase.
  /N          New long list format where filenames are on the far right.
  /O          List by files in sorted order.
  sortorder    N  By name (alphabetic)       S  By size (smallest first)
               E  By extension (alphabetic)  D  By date/time (oldest first)
               G  Group directories first    -  Prefix to reverse order
  /P          Pauses after each screenful of information.
  /Q          Display the owner of the file.
  /R          Display alternate data streams of the file.
  /S          Displays files in specified directory and all subdirectories.
  /T          Controls which time field displayed or used for sorting
  timefield   C  Creation
              A  Last Access
              W  Last Written
  /W          Uses wide list format.
  /X          This displays the short names generated for non-8dot3 file
              names.  The format is that of /N with the short name inserted
              before the long name. If no short name is present, blanks are
              displayed in its place.
  /4          Displays four-digit years

Switches may be preset in the DIRCMD environment variable.  Override
preset switches by prefixing any switch with - (hyphen)--for example, /-W.
>

Which is useful if you need more advanced pattern matching from dir. Say:
dir /s /b | findstr MyFile | more

findstr is case-sensitive by default, unlike dir, so there are times when it’s necessary (not to mention handling regular expressions, etc.).

Same as the Unix “cat”. Also useful for a longer pipeline with a file input. Although you could use the < operator, sometimes it’s clearer to have a left-to-right pipeline that starts with a “type”.

So this weekend we went to a farm stand. In charge of the stand was a man I would estimate to be in his mid-to-late 80s. Clearly retirement age. We got to talking about how he found himself manning a farm stand, and it turns out he’s a family member of someone who owns and operates the adjacent farm and was working the stand to give him “something to do”. I asked if he was a farmer too, and he replied that he’d retired almost 30 years earlier from a (global) pharmaceutical manufacturer, where he worked for many years in data processing. I asked if he worked with any older languages. He said he worked quite a bit with FORTRAN when he started in the 1960s. He called it an “elegant” language, not like the ones today. I’m guessing this was nostalgia pure and simple, as I gather the language is not held in high regard among geeks here.

“Elegant” is one of those tricky words that can have many different meanings. I can certainly come up with some standards of elegance, by which Fortran would rate highly. I can also come up with standards of elegance that are exactly complementary to those first standards.

Have any “geeks” here said anything like that? My impression was not that anybody here asserted that “Fortran sucks”, rather that for higher-level programming they preferred to use tools like Julia and SciPy and Octave.

Yeah, in MS-DOS you could type asdfasdf | more. VMS does not lack for pipes either, but I guess the idea is you can just run type/page=save asdfasdf.txt (Linux version would be less asdfasdf.txt, to be contrasted with cat asdfasdf.txt | less). I do not see a huge conceptual difference, only superficial details.

To put flesh on @Chronos’ excellent points two posts up …

FORTRAN was, back in the day, “elegant” in the sense that it had very few moving parts. Picture a 3-bladed pen-knife. Simple, effective, does just what it says on the label.

Conversely, a modern language and its complete supporting infrastructure are more like a car factory: sprawling and loaded with lots and lots of inter-related moving parts, all of which take expertise to understand and use well. Not so simple; not so elegant.

Now try to build a fleet of cars with either the pen-knife or the car factory and tell me which process is likely to be more “elegant” in the doing.

I think that the biggest problem with Fortran is not the language per se; it’s just that, because it’s so old, a lot of what was written in Fortran was written before programmers collectively figured out what constituted good programming. Someone whose first programming language was Python was probably taught right from the outset about the importance of comments (both including them at all, and what to say in them), and appropriate situations to use various sorts of flow control constructs, and so on. Someone whose first programming language was Fortran was probably never taught those things, and may well not have been taught at all. You can still do all of those things in Fortran, of course, but in practice, if you take a program randomly-selected from the set of all Fortran programs, it’s less likely to have those things than a program randomly-selected from the set of all Python programs.

Agreed. It’s amazing to me how many smart people ascribe great significance to what’s merely syntax, not semantics.

The Unix “everything is a stream” POV is of greater semantic power than CP/M’s, and hence MS-DOS’s, more piecemeal approach to data manipulation in general. And it was a bleeding-edge idea 50 years(!) ago in 1970-something. Here in 2024, not nearly so much, and there are plenty of places where “everything is a stream” gets in the way of more modern conceptions of data manipulation and storage.

The MS PowerShell idea that everything is a first-class .NET object is a vastly more powerful semantic idea than Unix’s streams, with the limitation of being tied to .NET. But as that’s gone open source and OS-agnostic, it’s not nearly the limitation it was back in v1.0.

IMO YMMV etc.