This is false; the files that make up the registry are demand-paged just like any other file:
Do you have a cite for this? The way you have described it seems odd. For example “Windows uses the original file image as the swap file”. Eh? A swap file or page file is disk space that is used as if it were RAM. Simply reading directly from a regular file is not really using it as a swap file. I see the similarity of the idea, but I’ve never heard anybody talk about it in those terms.
This article by Raymond Chen, a Microsoft insider who has or had an excellent blog that explained the reasons behind a lot of Windows design features that people would criticise, seems to explain the real reason. In a nutshell, it’s not that Windows can’t replace DLLs on the fly. It’s just that, having considered the implications, they decided it was a bad idea to allow it. Presumably that means they think the way other OSes handle it is inferior.
His explanation doesn’t make sense. If there’s something still referring to the old file’s methods, and it wasn’t updated, then it will still be referring to the old file’s methods after restart and still break. And if it was updated, why can’t it be updated in place, just like the DLL?
And anyway, why would an updated version of the same DLL require a fundamental change in how that DLL works? Wasn’t the entire concept of DLLs that they could be updated without affecting backwards compatibility? What good is updating a component that affects multiple programs if only one of those programs will be able to use that component?
I believe what he meant was that Windows does not load entire DLLs into RAM (which would then swap to the pagefile.sys), but just loads pointers, then loads only the DLL subroutines as needed into RAM. This is the danger: the DLL gets changed, and a completely different routine, or one that requires different parameters, is substituted for the expected routine. Obviously, if you change a DLL you may have to change ALL the DLLs, and you probably should completely restart the underlying program. If you write a program that never assumes anything, under what circumstances should it check for changes, and what does it do if it finds inconsistencies? Restart? What if the error is NOT an upgrade, but a database issue, a missing network drive, or one of a hundred other issues? Continuously restart? Quit after 3 tries? What if it’s a key part of the operating system, like disk access?
It is far, far simpler just to reboot everything than to write for all contingencies.
It’s somewhat ambiguously worded, but it’s in effect true. Windows will not write code (exe/dll) out to the pagefile, only data. If code has to be evicted from RAM, it’s just dumped, and if it’s needed later it’s simply re-read from the original file on disk.
A DLL is linked when it’s initially loaded. (That’s what the acronym means: dynamically-linked library.) If your program loaded Foo.dll version 1.2, it linked function Bar at address 377. Now Foo.dll is version 1.3, some code was added and function Bar moved to address 379, but your program has no way of knowing that, tries to call Bar, then CRASH.
The EXE doesn’t refer to the DLL’s functions by name; it refers to them by address. (At least, after the initial DLL load.) If the addresses change, the program crashes or worse.
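To make that concrete, here is a rough sketch in C of what “linking by address” amounts to, using the real Win32 LoadLibrary/GetProcAddress calls but the hypothetical Foo.dll and Bar from the posts above. Once the lookup is done, the program is just calling through a stored number; nothing ever re-checks that the code sitting at that address is still the Bar it asked for.

#include <windows.h>
#include <stdio.h>

typedef int (*BarFn)(void);   /* the signature the program expects Bar to have */

int main(void)
{
    /* Map the DLL into the process. "Foo.dll" and "Bar" are the made-up
       names from the posts above, not anything that ships with Windows. */
    HMODULE foo = LoadLibraryA("Foo.dll");
    if (foo == NULL) {
        fprintf(stderr, "Foo.dll not found\n");
        return 1;
    }

    /* The lookup by name happens exactly once, here. What comes back is
       a raw address inside the loaded image. */
    BarFn bar = (BarFn)GetProcAddress(foo, "Bar");
    if (bar == NULL) {
        fprintf(stderr, "Bar not exported by Foo.dll\n");
        FreeLibrary(foo);
        return 1;
    }

    /* Every later call goes straight through the cached address. If the
       DLL were somehow swapped underneath us and Bar moved, this pointer
       would silently point at the wrong code. */
    printf("Bar returned %d\n", bar());

    FreeLibrary(foo);
    return 0;
}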
If that’s the way you design it, then yes, you have problems.
There are alternative designs that can allow it, and yes there are lots of considerations to make it bullet-proof, but it certainly can and has been done.
Raymond Chen’s description in the blog is so immediately flawed that it’s not really worth writing, unless his audience was non-technical and he didn’t want to get into the low-level issues and was just trying to give a flavor for why it can be tricky to do.
But… if the subroutine changes function, requires extra parameters, or whatever, there is no point in having an adaptable operating system that can still find Bar if, when it does, the program will crash because Bar originally needed parameter 1 and now needs parameters 1 and 2; unless your spec says every DLL must always be backward compatible with every possible earlier version… At a certain point, a reboot is a lot simpler for keeping a running program and its subroutines in sync.
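Here is the shape of that failure in miniature, in plain C with hypothetical names and a deliberately broken call: the caller was built against a one-parameter Bar, the library now provides a two-parameter Bar, and being able to find the function again does nothing to save you.

#include <stdio.h>

/* "Version 2" of the library: Bar now needs a second parameter. */
static int Bar(int first, int second)
{
    return first + second;
}

/* The signature the old caller was compiled against ("version 1"). */
typedef int (*BarV1Fn)(int first);

int main(void)
{
    /* The old caller still holds a v1-shaped pointer to the function.
       The call passes one argument where Bar now expects two, so `second`
       is whatever garbage happens to be in the register or on the stack:
       undefined behavior, i.e. a crash or a silently wrong answer. */
    BarV1Fn bar = (BarV1Fn)Bar;
    printf("Bar(1) = %d\n", bar(1));
    return 0;
}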
We had a program years ago that would crash randomly. Turned out it used a DLL with the same name as another program’s (something silly and common like crt.dll, IIRC), so if program A ran after program B, then A crashed; if program B ran second, for some reason B did not crash on the DLL problem (path related?).
Some programs rely on other programs or standard DLLs. This overload of multiple versions of DLLs from all over the map is what Windows developers refer to as DLL Hell, I think. (It’s been forever since I looked at Windows development…)
It all depends on how you approach it and what level of changes you want to allow, etc. There are X% of cases that could benefit from a no-reboot update and Y% of cases that probably wouldn’t.
If you are changing the things you listed, then clearly you have to be careful and manage it somehow; that could be facilitated by the OS and some sort of versioning system, or it could be handled manually.
I’ve always wondered why Bill Gates was so against versioning with DLLs; it seems like it provides needed flexibility.
So, you’re saying that the internal workings of a “dynamically linked library” are static addresses? Wow, a contradiction if I ever heard of one…
Technically, static offsets. It is an efficiency gain. You need to balance the rate of change of a DLL (relatively slow) against the cost of doing an entry-point lookup by name every time you want to use a function. In some environments (rapid application development/scripting languages) dynamic function-name lookup is essential and part of the cost of the flexibility gained (self-modifying code, anyone?). For most cases, link-once (on load/first use) with static offsets is certainly the best option.
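As a rough C sketch of the two options being weighed (the same hypothetical Foo.dll and Bar as before): link-once caches the address after the first lookup, while lookup-per-call pays a name search on every invocation but always goes to whatever “Bar” currently resolves to in the loaded module.

#include <windows.h>

typedef int (*BarFn)(void);

/* Link-once: resolve "Bar" a single time, keep the raw address, reuse it.
   Cheap per call, but valid only for the image that was loaded when the
   lookup was done. */
static int call_bar_linked_once(HMODULE foo)
{
    static BarFn cached = NULL;
    if (cached == NULL)
        cached = (BarFn)GetProcAddress(foo, "Bar");
    return cached ? cached() : -1;
}

/* Lookup-per-call: a string search through the export table on every
   invocation. Slower, but nothing is ever cached. */
static int call_bar_by_name(HMODULE foo)
{
    BarFn bar = (BarFn)GetProcAddress(foo, "Bar");
    return bar ? bar() : -1;
}

int main(void)
{
    HMODULE foo = LoadLibraryA("Foo.dll");
    if (foo == NULL)
        return 1;
    call_bar_linked_once(foo);   /* one lookup, then a plain call      */
    call_bar_by_name(foo);       /* fresh lookup every time it is used */
    FreeLibrary(foo);
    return 0;
}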
You could have a signalling mechanism that allows the OS to tell apps that a particular DLL/shared library is being updated, allowing them to unload/reload the library. But the overhead would be significant and the mechanism rarely used, so people don’t bother.
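Windows has no such built-in notification, but the unload/reload half of the idea is easy enough to sketch; everything here, plugin.dll included, is a made-up name, and the “signal” is just a function the application would call when it learns the file on disk has been replaced.

#include <windows.h>

typedef int (*BarFn)(void);

static HMODULE g_plugin = NULL;   /* currently mapped copy of the DLL      */
static BarFn   g_bar    = NULL;   /* pointer re-resolved after every load  */

static int load_plugin(void)
{
    g_plugin = LoadLibraryA("plugin.dll");
    if (g_plugin == NULL)
        return 0;
    g_bar = (BarFn)GetProcAddress(g_plugin, "Bar");
    return g_bar != NULL;
}

/* Called when the (hypothetical) "library updated" notification arrives:
   drop every pointer into the old image, unmap it, then map and re-link
   the new one. Missing even one stale pointer means a crash later. */
static int reload_plugin(void)
{
    g_bar = NULL;
    if (g_plugin != NULL) {
        FreeLibrary(g_plugin);
        g_plugin = NULL;
    }
    return load_plugin();
}

int main(void)
{
    if (!load_plugin())
        return 1;
    /* ... some time later, the update notification fires ... */
    if (!reload_plugin())
        return 1;
    return g_bar();
}

The mechanism itself is trivial; the expensive part is tracking down every stale pointer into the old image before unloading it, which is exactly the bookkeeping nobody bothers with.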
If you want serious uptime while maintaining your systems, use an OpenVMS Cluster: cluster uptimes of years.
Si
“Dynamic” means linked at runtime, not at compile time (which is static linking).
Or simply have one level of indirection to get to the functions, so they can move without altering access to them.
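In miniature, within a single program and with made-up names, that indirection looks something like this: every call site reads the current pointer out of a table, so the implementation can be swapped without touching the callers.

#include <stdio.h>

typedef int (*BarFn)(int);

static int bar_v1(int x) { return x + 1; }
static int bar_v2(int x) { return x + 2; }

/* The one extra hop: callers never hold bar_v1 or bar_v2 directly,
   they always go through this table. */
static struct { BarFn bar; } dispatch = { bar_v1 };

int main(void)
{
    printf("%d\n", dispatch.bar(10));   /* dispatches to bar_v1          */
    dispatch.bar = bar_v2;              /* "update" the routine in place */
    printf("%d\n", dispatch.bar(10));   /* same call site, new code      */
    return 0;
}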
An extra level of indirection per invocation is one level too much.
Si
It looks like system reboots will be with us a while longer, but minimized.
From a subscription newsletter put out by Brian Livingston (bolding mine):
Rarely yes, generally no.
I guess you should make your own OS, with blackjack. And hookers.
But about half an hour into your design phase, I think you’d come to the realization that Microsoft really does do a pretty damned good job of it overall.
I have written my own OS, a long time ago.
You’ll notice that I never made any absolute judgements about Windows design in general, merely on this particular point that it could be done differently; it’s not a herculean task, merely a choice with some effort.
Is it a choice that’s worth the effort? Not sure.
The real hard part is making a change like that without breaking every single piece of software that runs on Windows.