Please explain this tech geek joke

There are some elements which aren’t taken as seriously as others. I’d be the last person to suggest an absolutist approach. But the basic elements have been incorporated into virtually every language at this point.

Looked at a certain way, COME FROM isn’t that terrible a way to kick off and synchronize multiple threads…

Oh, I don’t know. I’m enjoying it immensely. Listening to tech geeks discussing their trade is precisely what I had in mind. Already I understand more about some of the underlying principles of writing code than I did before starting this thread.

Geeks please carry on with your arcane discussions comparing the various structures of code and related conventions…

Back in my days of attempting to tame ancient, poorly-documented, poorly-maintained Fortran code, I discovered that there was in fact a command in Fortran that basically behaved like COME FROM. I can’t remember off the top of my head what it was, though, and it would probably trigger PTSD if I tried to look it up.

Now, this is taking the argument a bit too far. Consider that the problems they were trying to solve were (A) architecture within a single machine (at least initially), and (B) meeting the needs of both large-volume business purchasers and personal users, and (C) in a time when the “architecture” of most of this was still emerging (I’m talking, there wasn’t even an obviously dominant LAN networking standard yet, forget about the internet).

So for Microsoft consumer products, what you get as a home user is the Pareto solution that’s intended to be ideal for 80% of business use cases, with the remaining 20% covered by an overlay of IT support and consulting hours. In a personal-use context, without that kind of support, it’s going to make you tear your hair out. That doesn’t mean it’s bad software; it reflects the state of computing and the state of the market at that time. “Good architecture” isn’t what was needed, nor even really possible under those conditions.

If you think all Microsoft software is shit, I would say that (A) you’re not up to date on the full portfolio of MS products, and (B) you’re unaware of the niche of the software you’re interacting with. I’ve seen a lot of MS code from the inside, since my employer is owned by MS. Some of it is what you’d expect (more a consequence of existing in a walled garden separate from the market than anything else), but a lot of it is quite good. MS wouldn’t be pulling down $70 billion a year just on Azure alone if it was shit, and I promise that Azure is architecturally more sophisticated and handles more intense distributed mission-critical workloads than anything you’ve ever worked on. So maybe just a bit of humility is in order here.

Especially since it is already established that Real Programmers do not need “programming languages” or other crutches for milksops anyway; they are perfectly capable of writing optimised bug-free machine code that runs on bare metal. I learned about this from the hacker Jargon File.

ETA: Following up behind @HMS_Irruncible two posts above me …

Legacy MS products also “suffer” from a fanatical devotion to backwards compatibility, warts and all.

Which means refactoring cannot proceed very far before coming upon some corner case where somebody somewhere took a dependency on some unnoticed or unplanned quirk or coincidence. They did that in violation of the development “contract” of documented versus undocumented behavior.

But MS has a jillion end-user customers running tens of millions of distinct makes, models, and versions of PC software created by millions of 3rd parties. Most of it (for home or office) is written more by bricolage than by planning. Therefore MS is stuck continuing to behave quirkily, since if fixing a quirk in a new Windows version crashes, e.g., Photoshop, the world won’t blame Adobe for violating MS’s documented rules. Instead they’ll blame MS for breaking Photoshop.

Within the ginormous ecosystem that’s merely Office, the same applies but now it’s the multitude of add-ins, and locally developed spreadsheets, macros, Access databases, etc., upon which the world runs. Those things weren’t designed by reading the documentation and relying solely on what is stated as forever invariant. They were “fiddled-with” into existence and as a result there’s no telling which quirks they unwittingly take a dependency on.

There’s an entire long-running blog by one of the early luminaries at MS documenting lots of instances of this stuff. It is remarkably responsible behavior by a major corporation that might be thought “too big to care”. If only FB, Google, etc., took the notion of not breaking stuff to the same fanatical extreme. The IT folks in the audience know what I’m referring to, but here it is for the others: The Old New Thing (microsoft.com).

What is the best car to drive? There is just one answer, right?

Are any of the languages on the tee shirt in the OP considered archaic? Have any fallen out of use, or are not relevant to today’s programming?

If you are asking which languages are used by “coders” today, those are all currently top-10 programming languages, with all except C in the top 5:

It’s not at all clear to me how much corporate internal code is stored on GitHub. That goes both for software-centric corps like MSFT or Google, and for non-software-centric corps like General Motors, McDonald’s, or Bank of America.

So this might be a very nice set of stats applying to a rather small subset of the actual language “market”.

As well, that metric is by “pull requests” which is GitSpeak for posted software revisions. Which does not speak at all to the total volume of code in existence, only the number of times at least one line is tweaked.

Some PRs represent teeny hobbyist diddling and some represent major revisions to substantial codebases.

I was about to add: there are clearly many important languages not on that list (e.g., Ada), so not being included in no way implies a language is irrelevant to today’s programming. But the languages on the list are evidently bog-standard, highly relevant mainstream languages at this instant in time.

Another list: IEEE’s, where the top 5 are Python, Java, C++, C, and JavaScript, in that order.

Of course, once you start talking about corporate internal code, you’re probably also talking about legacy code. Somewhere buried deep in the infrastructure of any company of any age, there’s some legacy program that everyone in the company depends on, directly or indirectly, that was written in COBOL or Ada or original-recipe Fortran or something similarly god-awful, but the only person who ever knew how it worked retired thirty years ago and died ten years ago, and of course they never wasted precious bytes of storage by doing anything as inefficient as documenting it, and everyone’s afraid to rock the boat by trying to replace it.

Which means that, when something does go wrong with it, someone has to go in to fix that mess anyway. And the longer it goes before that happens, the worse the mess is, and the harder it is to find someone with even a hope of fixing it.

First of all, I’ll acknowledge a certain amount of hyperbole in my comments, and that the task of writing an OS for a vast array of heterogeneous third-party hardware over which one has little or no control is not easy. Nevertheless, what I said reflects a basic general truth. Yes, Microsoft has been very successful, but that’s largely because their relentless dedication to the principle of “eh, good enough” has turned out to be, well, literally good enough. Which is very different from a dedication to excellence. And also because they were the only game in town, having little competition and ruthlessly quashing it when they did, even if that competition was IBM.

Consider, just for a few examples, all Microsoft operating systems prior to Windows NT and the consumer line even after NT, until its eventual merger with the NT kernel in Windows XP. Windows 3.x and its variants were usable most of the time if you were willing to put up with bugs and crashes, and that’s about all you can say for them. The initial release of Windows 95 was so buggy that I wondered if the product might be withdrawn entirely. It got better, but was still prone to crashes. So was Windows 98. Windows ME was a buggy disaster. It wasn’t until the aforementioned merger of Windows NT with consumer features to create Windows XP that the Microsoft consumer OS was at least stable and not crash-prone, even if still buggy in less obtrusive ways.

Consider also how poor Microsoft’s venture into the browser space was. Almost any browser was better than Internet Explorer. Or how buggy Office was, and continues to be. And even in the post-NT integration era, how terrible Vista was. Microsoft’s whole problem is the “good enough” mentality, the absence of testing at boundary conditions, and in fact the obvious lack of adequate testing at all. “Good enough” is practically the opposite of a culture of excellence.

I was at a seminar once given by some engineer from Microsoft, and I politely inquired about this “good enough” philosophy. His response was that they put software through robust testing in areas where it really matters, such as file systems. Maybe they do, yet FAT32 had a bug (fortunately it was recoverable) where under certain conditions it would decide that the disk was full. The one really excellent file system out of Microsoft was NTFS, but that’s only because it came out of Dave Cutler’s Windows NT team, as I mention below.

I strongly disagree that any of these things represented “the state of computing at the time” (unless you mean specifically the state of computing in the PC space). I cite as the counterexample DEC (Digital Equipment Corporation), which truly did have a culture of excellence. Their phase review process reflected an amazing dedication to product quality and predated modern concepts of software development and project management methodologies. Even the first beta release of the VAX/VMS operating system – the beta – had a rock-solid stability that Microsoft can only dream of.

I will also disagree with the statement that “there wasn’t even an obviously dominant LAN networking standard yet, forget about the internet”. The ARPAnet began in 1969. Ethernet goes back almost as far, although it wasn’t commercialized until around 1980. It’s true that it wasn’t clearly dominant because IBM was pushing token-ring for many years, but it was DEC that championed the clearly superior Ethernet. And incidentally, as the ARPAnet evolved, DEC was developing their own peer-to-peer network, DECnet. DECnet was technically superior to TCP/IP and certainly far superior to IBM’s hierarchical SNA, but sadly the rapid emergence of the internet made TCP/IP the de facto WAN standard. But my point is, all those technologies go back a long way, and were being developed by visionary companies like DEC while Microsoft figuratively sat in its Mom’s basement playing with blocks.

And a word about Windows NT, the first really good operating system from Microsoft and the foundation of all its future ones. There was a brilliant software engineer at DEC named Dave Cutler who had designed the RSX-11M operating system and then adapted and extended some of those ideas into the amazing VAX/VMS OS. Microsoft hired Cutler and tasked him with developing a brand new OS for the business market, one that would have the stability that their current OSs lacked. The excellence of Windows NT probably could not have been developed in the Microsoft culture of the time, nor probably even today. It took the injection of DEC’s culture of excellence to do it.

So yeah, with certain caveats and broadly speaking, Microsoft software is, at best, “good enough” and tends to be poorly tested at boundary conditions.

I feel like Microsoft also suffered some from what might be called cargo-cult design, especially in the early years. A lot of MS-DOS commands, for instance, look superficially very similar to commands in common Unix shells, but are implemented in very different ways below the surface, and those differences in implementation lead to all sorts of ripple effects.

For instance, Microsoft decided to use the forward slash for parameters to a command, instead of the hyphen usually used in Unix tools. But this use of a forward slash meant that they couldn’t use the forward slash in file paths, which they dealt with by using backslashes instead. But backslash is traditionally used in many computer programs as an escape character, which means that in a lot of contexts, the backslashes in Microsoft file paths need to be replaced by double backslashes to work correctly. But of course not in all contexts, which means that folks working with the system constantly had to figure out what they needed in any given context.
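To make the escaping headache concrete, here’s a minimal sketch in Python, chosen purely for illustration; the same thing happens in C, Java, JSON, regular expressions, and most other contexts that treat the backslash as an escape character:

    # A minimal sketch of the problem (Python used purely for illustration).
    # The Windows path C:\temp\new.txt happens to contain \t and \n, which
    # most languages treat as tab and newline inside ordinary string literals.
    plain   = "C:\temp\new.txt"    # the \t and \n get silently interpreted
    escaped = "C:\\temp\\new.txt"  # doubling the backslashes keeps them intact
    raw     = r"C:\temp\new.txt"   # so does a raw string literal

    print(plain)    # a tab and a line break appear where the backslashes were
    print(escaped)  # prints C:\temp\new.txt
    print(raw)      # prints C:\temp\new.txt

And a path handed to a regex engine may need its backslashes doubled yet again, which is exactly the “which context am I in?” guessing game described above.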

Another example: In Unix shells, one common command line might be something like ls|more, which lists the contents of the current directory one page at a time, waiting for a keystroke before showing the next page. On a Unix system, this is achieved by ls and more being two different programs, and the | character means “take the output from this first program, and pipe it into the input of this second program”. This is a very versatile and flexible design: You can feed the output of any program into more, or use it by itself, or feed the output of an ls into some other program to process it in some other way, and so on. But in MS-DOS, aside from the irrelevant change of the command name from ls to dir, they changed the “more” piece from being a separate program to just being a parameter passed to dir. So that one specific command still works, but you can’t take apart the pieces and use them with anything else.
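For the non-geeks following along, here’s a rough sketch of what that | plumbing does, written in Python purely for readability (it assumes a Unix-like system where ls and wc are available). The key point is that the two programs know nothing about each other; one’s output is simply wired into the other’s input:

    # Rough illustration of the Unix pipe "ls -l | wc -l"
    # (assumes a Unix-like system with ls and wc installed)
    import subprocess

    ls = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
    wc = subprocess.Popen(["wc", "-l"], stdin=ls.stdout, stdout=subprocess.PIPE)
    ls.stdout.close()               # so ls gets SIGPIPE if wc exits first
    output, _ = wc.communicate()
    print(output.decode().strip())  # the number of lines that ls -l produced

Swap wc for more, grep, sort, or anything else that reads standard input, and the same plumbing works unchanged. That interchangeability is the flexibility being described, and it’s exactly what folding paging into a parameter on dir gives up.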

It’s a lot. Conspicuously not Google, as they very publicly built their own version control system, but many traditional businesses (automakers, heavy equipment manufacturers, telecom carriers) as well as so-called “tech giants” host their enterprise code on GitHub.

A PR is just an approved change, implying that someone submitted it and another person approved it. Generally you won’t find a one-man shop doing PRs because it’s too much hassle just for self-approval. So a PR is a decent approximation of a changeset that was important enough to be gated by a basic approval process.

What I’m saying is that while GitHub PRs are by no means a complete picture of the market, you can take them as a representative slice of what people are collaborating on. And yes, they may be skewed somewhat toward independent projects, but keep in mind Linux was once a hobby project too, so GitHub metrics have the benefit of hinting at where the market might be headed, not just where it is right now.

Come on; you cannot blame Microsoft for using forward slashes for parameters or for changing the “more” piece to a parameter instead of a separate command. DCL has exactly the same features.

It doesn’t matter if you build the “best” system in the world, if customers don’t feel it meets a business need, then it’s worthless. VAX has all but vanished, while Microsoft is literally the most valuable company in the entire world. It hosts an enormous chunk of the world’s running Linux systems on Azure.

You can tell yourself this is thanks to dirty tricks or theft of secrets or whatever, but it’s mainly just figuring out what businesses need, and delivering on it. It’s not that hard to write amazing software to target a niche use case, but building a workhorse that can thrive in all sorts of different environments and contexts speaks to real skill in bridging the gap between users and machines at a realistic price point.

I have my criticisms of Microsoft, but it’s just cranky, uninformed atavism in the year 2024 to say that the literal largest software company in the world is “crap”. $200 billion a year in revenue says that’s a silly thing to say.

There’s a bunch of goalpost-moving going on here. I was talking about software quality, and you’ve digressed to the topic of business success. I agree that the two aren’t necessarily correlated, and that “good enough” is sometimes the best business decision. I recognize that reality, but as a technologist who values excellence as a virtue in its own right, I find it sad and regrettable.

One finds the same thing in other industries. Of the traditional Big Three US automakers, Ford by some metrics has been the most successful, and the only one that’s never needed a government bailout (although they recently took out an EV incentive loan). Yes, Ford, makers of such marvels as the exploding Pinto (responsible for at least 27 deaths), the insta-rust Maverick, the dual-clutch Focus, the worst SUV ever made – the Flex, the stall-prone Escort, and lots of other crap.

And BTW, the only reason the VAX isn’t around any more is for the same reason the IBM System/360 isn’t around any more – all computers quickly become obsolete. But at the time it was a very successful and very popular family of midrange computers whose engineering excellence was renowned. So much so that IBM tried to compete with it directly, spawning several different computer lines like the 9370, informally dubbed “VAX killers”, none of which stemmed the success of the VAX, much less killed it.

You do have a valid point, though. DEC isn’t around any more and Microsoft is thriving. The reasons for DEC’s demise are complicated, but a certain engineering arrogance was admittedly part of it. As a very successful manufacturer of mini- and midrange computers they were slow to see the potential of PCs. As the developer of DECnet, they correctly saw it as superior to TCP/IP, and saw the future of wide-area networks as lying not in TCP/IP but in ISO/OSI, and put massive resources into extending DECnet into the OSI space, possibly one of the biggest business blunders ever made. But this was also a time of a rapidly changing technological landscape and the emergence of open systems – a new landscape that almost killed IBM, too.

Eh, I can only conclude that we live in a sad world where the best technology often doesn’t prevail, and it’s often because businesses make stupid decisions. Like businesses who continued to throw millions of dollars at IBM when DEC could have provided them with far better products at less than half the cost. Part of the reason was fear of venturing into unfamiliar territory. The old adage at the time was, “nobody ever got fired for buying IBM”.

We were all talking about code quality until you digressed to the quite eccentric position that code quality is irrelevant because architecture is the only thing that matters, and from that a further digression into how the most valuable software company in the world is utter crap compared to a vendor that died in the mid-1990s.

If you want an armistice on pointless digressions in this thread, I’m glad to co-sign and I’m sure many others would too.

If there was a worse way, the inventors of INTERCAL would have used it.

The Story of Mel?