Why no new operating systems?

IMHO, one of the things that killed OS research was Linux. And that was pretty sad.

In a previous life I did real-world OS research. I have my name on a few research papers on new OS designs, and have met quite a few of the players in the OS research community. There was a golden time that petered out in the mid '90s. Up until then there was a lot of active work, and a lot of new ideas. In addition to some named above, there was work like Choices, Clouds, V-kernel, Grasshopper and more. A common problem all faced was the difficulty of getting a usable environment up and running on top of the base OS abstractions. There is a massive amount of code needed that isn't research. The usual tactic was to port a big slab of the BSD services, and the GNU toolset. Which of course was what Linux did. (Despite his faults, I agree with Richard Stallman that Linux is correctly called GNU/Linux. No GNU, no Linux.)

But things changed, and a lot of the steam left the research community as Linux was just so easy.

There are other things that matter. The nature of an OS is about the abstractions that are provided. A lot of the work in the '90s focussed on variations on the common abstractions. Plan 9 took the Unix name space to its logical conclusion. There was a lot of interest in parallel programming support from the OS. OS/400 was perhaps the one that had the legs to deliver something new in its persistent programming paradigm. It is a great shame it has been forgotten. And therein lies the problem. The generally accepted abstractions an OS provides have, for all their faults, pretty much been accepted as the 'right way'. An OS with abstractions that actually deliver something truly different faces a massive battle, simply because there are billions of lines of code out there written assuming things like 'file systems' as the mechanism of persistence, and monolithic isolated virtual address spaces as the unit of computation. And so on. This is reinforced by a monopoly in computer architecture design. Not just x86: computer architectures don't provide support for interesting ideas in OS design. Tagged memory? Hardware capabilities? Nothing new in these ideas, but without any hope of changing the dominant blandness, new OS ideas are hard to make work in a worthwhile manner. Multics is another name missing from the discussion. But it needed some hardware support.

Where interesting things were happening was in the area of distributed computing. And I would argue that that is still where the interesting stuff is. Sure, people will argue that this is layered over the OS. Well, it is if you only look at the individual node's OS, but if you look at the entire distributed system as a single computational resource, you are now looking at the abstractions that control and manage that single resource, and that IMHO is the operating system. It may not need to talk to the individual bits of hardware on each node, although in high-performance systems it often will have back-doors to get what it needs done efficiently.

But no matter what, it is pretty thin out there. IMHO there is a lot that could be done, but the way research is done in the modern world does not reward that long game, and that is what is needed here. There are very few companies that have the will and the resources to put into it. IBM were once the big dog here. No more. VMS came out of DEC, which lives on in some tiny corner of HP. So no chance there either. Microsoft are but a pale shadow, and never developed anything new anyway. Google could, but won't. Apple could and should, but I doubt they will. Amazon curiously have contributed more, but again, no value to them to put big effort in.

Remember, Unix got its big boost when DARPA put money into it, and BSD came out. That effort saw companies like Sun and SGI kick-started with a base OS for their hardware. It needs something like this to really get things going.

I think that’s just a natural effect of a market having “matured.” Look at the microcomputer and game console landscape of the late '70s and early '80s. So many wildly different platforms were built around the 6502 architecture (Atari VCS, Apple II family, Commodore 64, Atari 8-bit, NES, etc.), yet all of the differentiation came from each computer maker’s unique support chips to do the real heavy lifting for sound, graphics, and input. Ditto the Amiga later. That all went away once CPUs became powerful enough to do all of the work.

By “high-end” I meant the highest salaries alluded to on job recruitment sites such as ziprecruiter or review sites like glassdoor. Have you been with your company for a long time? What would you say is a good average salary for systems programmers, and how many lines of production code do you think the average systems programmer (with good direction and management) puts out over time?

I ask that I may revise the $534.8 million price tag I affixed to a new operating system in [POST=21877367]post #21[/POST].

~Max

Lines of code is one of the worst metrics you can use. Another dumb one is bugs per line of code. They used to track that here. I deleted a file which contained about 2,000 lines of dead code. The next day a QA manager asked me why I deleted it, since it increased our bugs/line and the VP would demand an explanation of why our quality had gone down suddenly. Stupid.
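The bugs-per-line perversity is just arithmetic. A toy sketch with entirely made-up numbers shows how deleting dead code "worsens" the metric without a single new bug:

```python
# Hypothetical numbers: the same open bug count over a shrinking codebase.
bugs = 50
lines_before = 100_000
lines_after = lines_before - 2_000   # the 2,000 lines of dead code deleted

ratio_before = bugs / lines_before   # 0.0005 bugs/line
ratio_after = bugs / lines_after     # slightly higher bugs/line

# "Quality went down" according to the metric, with zero new bugs.
print(ratio_before, ratio_after, ratio_after > ratio_before)
```

The denominator shrank, so the ratio went up; the metric punishes exactly the kind of cleanup a healthy codebase needs.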

How would you calculate the cost of developing a new operating system?

~Max

Twenty dollars, same as in town.

I don’t understand, sorry.

~Max

It’s the punchline to a very old joke.

Note that you guys are talking past each other a little on the compensation numbers.

“Total compensation” generally includes things like health benefits, 401k match, stock options, bonuses, etc. Not just salary.

The way you get to total compensation in excess of $300k is generally via stock grants/options/bonuses.

That said, I believe that $70k is way low. Reasonably skilled starting developers got more than that in Silicon Valley a decade ago.

Also, I agree 100% with jz78817 that the easy to use consumer OS is iOS.

IBM has been very good about understanding that backwards compatibility is a big thing. I spent a lot of time in the IBM midrange arena and got spoiled by stuff just working and decades-old software being able to run unchanged in the new environment/system/hardware etc.

Love your post. In a previous (career) life I was a VMS programmer and I loved it passionately. There hasn’t been anything since then that made me as excited to come to work. Anyway, not long ago I was feeling a bit nostalgic and looked it up, so I have sad news for you…

HP has exclusively licensed (nice bit of legal trickery there*) all of the VMS assets to another small company calling itself VMS Software, Inc. Might not be all bad, as HP completely wasted it for many years anyway. The footnoted snark is because I can’t tell if VSI is a subsidiary of HP or is a separate for-profit company or is some kind of non-profit.

Your point still stands that it seems to now be a museum piece, except for the few installations that are still running/being supported.

For clarity, you are saying you can’t estimate the cost of developing a new operating system? Not even a reference range?

I’m a little bit dense when it comes to new old jokes.

~Max

I can’t. I don’t think you can either. You’re certainly not going to get a useful range by making up a number of lines of code and multiplying it by an hourly rate.

I’m still not convinced that Linux was as important in this process as you say, and I’m pretty sure we’ve talked about this specific thing before.

In 1991, Linux ported a lot of GNU stuff to a very simple monolithic kernel. It wasn’t a new idea, a fact the “Linux is obsolete” debate made very clear: Tanenbaum wanted to slam Linux for not being a cutting-edge microkernel design, whereas Torvalds was both sick of MINIX’s performance problems and utterly uninterested in trying to turn his terminal emulator into a research OS.

(And despite “let’s-make-everyone-happy” BS to the contrary, Tanenbaum was dead wrong about all three things he predicted back then.)

GNU was important to getting Linux running, but there’s a lot in a finished Linux system which isn’t GNU and never has been.

I can’t find any faults with any of this except to say two things:

One, Linux didn’t do this. This was a fait accompli when the BSD variants became the de facto OS to port to all new workstation designs in the 1980s. Linux isn’t even all that different from the BSDs from the perspective of pure OS design.

Two, OS design has advanced, but at the pace of evolution as opposed to revolution. Linux is a good example of this: It supports things like cgroups with namespace isolation, which is the technology underlying Docker images and which is the apotheosis of chroot() jails as lightweight but pervasive containment based around namespaces in a way reminiscent of Plan 9. It’s nothing hugely new from a 5,000-foot perspective, but it gets the ideas to people who can use them because the ideas are in a system which supports their existing use cases, as opposed to being wrapped up in a pure research OS which doesn’t support any use other than research. Also, and this may be more of a programmer talking, eBPF is an interesting example of a non-Turing-complete language being used to solve real problems, including the problem of avoiding runaway plugins.
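The namespace point is easy to see first-hand. On Linux, every process's namespace memberships are exposed as symlinks under /proc/&lt;pid&gt;/ns, and two processes share a namespace exactly when their links resolve to the same inode. A minimal sketch (Linux-only; the exact set of namespace types listed depends on your kernel):

```python
# List this process's Linux namespaces (Linux-only sketch).
# Each entry under /proc/self/ns is a symlink like "mnt:[4026531840]";
# processes whose links share an inode number share that namespace.
import os

for ns in sorted(os.listdir("/proc/self/ns")):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))
```

Container runtimes like Docker work precisely by giving a process fresh entries here (via clone()/unshare()) for the mount, PID, network, and other namespaces, so the process sees a private view of the system.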

Multics was interesting, but it seems like most of the interest comes from it having been an early system which pioneered so many ideas we take for granted, like hierarchical file systems and memory mapping and ACLs. Multics seems to be an example of an early OS project the lessons of which have been fully internalized.

Does the GNU project still maintain the fiction of having a complete OS as their goal? In practice, of course, the “GNU OS” is Linux, even though it obviously can’t be, because Linux Is Unix.

I believe they are still working on Hurd.

If you navigate to gnu.org, there is a big fat link inviting you to download (any one of several flavours of) GNU/Linux, some of which are sponsored by the FSF. You can download and run GNU/Hurd, but it is described as being not a stable version (of course, anyone unhappy with the pace of development is free to volunteer to work on it themselves, donate a trillion dollars to the project, or whatever).

ETA there was some friction between Bushnell and Stallman, which apparently didn’t help the development of the Hurd any, and neither has the subsequent decades-long stagnation.

They’re still working on Hurd, but they’ve given their official OK to certain Linux distros as living up to their standards as regards user freedom. This is one of them.

As for Linux being a Unix: Technically, Linux is just a kernel, and this is one of the few places that distinction matters. A Unix can only be a complete system, a distro in Linux terms, which has passed a certification process and for which a license fee has been paid. Only one Linux distro qualifies: Inspur K-UX, which I’ve only heard of in the context of being “that one Linux someone actually paid to turn into a Real Unix” and absolutely nothing else.

There’s a part of z/OS which qualifies as Unix and FreeBSD doesn’t. It doesn’t matter. Nobody cares about that trademark anymore. In common parlance, “Being a Unix” is a spectrum and Linux distros are more towards the “Unix” end unless someone’s gone to a good deal of trouble to turn the userspace into something alien.

Here’s Wikipedia’s page on OSes. Knock yourselves out.

Ones I noticed:

Google’s new non-Linux-kernel OS Fuchsia. Might be a future replacement for Android/Chromium.

Some of the recentish gaming consoles had their own custom OS: Xbox and Xbox 360, but not Xbox One (which runs a Windows 10-based OS). Wii U. Sony’s PlayStations are BSD-based and therefore part of the Unix world.

BeOS lives on, sort of, as Haiku.

Then there’s Cosmos. A toolkit for building OSes.