IMHO, one of the things that killed OS research was Linux. And that was pretty sad.
In a previous life I did real-world OS research; I have my name on a few research papers on new OS designs, and have met quite a few of the players in the OS research community. There was a golden time that petered out in the mid '90s. Up until then there was a lot of active work, and a lot of new ideas. In addition to some named above, there was work like Choices, Clouds, V-kernel, Grasshopper and more. A common problem all faced was the difficulty in getting a usable environment up and running on top of the base OS abstractions. There is a massive amount of code needed that isn’t research. The usual tactic was to port a big slab of the BSD services, and the Gnu toolset. Which of course was what Linux did. (Despite his faults, I agree with Richard Stallman that Linux is correctly called GNU/Linux. No Gnu, no Linux.)
But things changed, and a lot of the steam left the research community as Linux was just so easy.
There are other things that matter. The nature of an OS is about the abstractions that are provided. A lot of the work in the '90s focussed on variations on the common abstractions. Plan-9 took the Unix name space to its logical conclusion. There was a lot of interest in parallel programming support from the OS. OS/400 was perhaps the one that had the legs to deliver something new, with its persistent programming paradigm. It is a great shame it has been forgotten. And therein lies the problem. The generally accepted abstractions an OS provides have, for all their faults, pretty much been accepted as the ‘right way’. Delivering an OS whose abstractions are actually truly different faces a massive battle, simply because there are billions of lines of code out there that are written assuming things like ‘file systems’ as the mechanism of persistence, and monolithic isolated virtual address spaces as the unit of computation. And so on. This is reinforced by a near-monopoly in computer architecture design. Not just x86: mainstream computer architectures don’t provide support for interesting ideas in OS design. Tagged memory? Hardware capabilities? Nothing new in these ideas, but without any hope of changing the dominant blandness, new OS ideas are hard to make work in a worthwhile manner. Multics is another name missing from the discussion. But it needed some hardware support.
Where interesting things were happening was in distributed computing. And I would argue that that is still where the interesting stuff is. Sure, people will argue that this is layered over the OS. Well, it is if you only look at the individual node’s OS, but if you look at the entire distributed system as a single computational resource, you are now looking at the abstractions that control and manage that single resource, and that IMHO is the operating system. It may not need to talk to the individual bits of hardware on each node, although in high-performance systems it often will have back-doors to get what it needs done efficiently.
But no matter what, it is pretty thin out there. IMHO there is a lot that could be done, but the way research is done in the modern world does not reward that long game, and that is what is needed here. There are very few companies that have the will and the resources to put into it. IBM were once the big dog here. No more. VMS came out of DEC, which lives on in some tiny corner of HP. So no chance there either. Microsoft are but a pale shadow, and never developed anything new anyway. Google could, but won’t. Apple could and should, but I doubt they will. Amazon curiously have contributed more, but again, there is no value to them in putting big effort in.
Remember, Unix got its big boost when DARPA put money into it, and BSD came out. That effort saw companies like Sun and SGI kick-started with a base OS for their hardware. It needs something like this to really get things going.