Virtual Memory, RAM Disks, and Modern Operating Systems

Long-winded intro to the question may be skipped, if you wish, by scrolling down to the paragraph that begins “So now, finally, the relevant bits.”

Take a brief journey with me, if you will, to an era before virtual memory was quite so ubiquitous. I’ll refer you to the Macintosh platform, since my knowledge of PCs back then was even more dismal than what I know of them now.

People got confused about the difference between “memory” and “storage space” then, just as they tend to nowadays, but there was in fact a clear-cut difference. Storage space was a floppy disk or hard disk; it came in kilobytes or megabytes, and it was cheap and slow. Memory — usually meaning random access memory, or RAM — was held on little electronic chips that snapped into slots, and while it also came in kilobytes or megabytes, hence confusing some folks, it was expensive and fast. If your local power company’s generators went out or your household circuit breaker popped on you, the contents of memory went POOF!, and if you had unsaved work, well, sorry, remember to save more often from now on; but your storage space was right there and still had your (saved) files on it when the power came back on.

We did have RAM disks back then, if we wanted them. You could take a portion of your expensive, rare memory (RAM) and tell it to pretend to be storage space. That may seem like an odd thing to do, but remember: RAM is fast. If you had a disk-intensive process to perform and you had enough RAM to be able to afford to do so, you could copy the document to the RAM disk and fly through it so much faster than if you had to keep reading new chunks of it from your hard disk or, worse, your floppy disk. (All disks are a lot slower to read from than RAM, but floppies were like molasses in January.)

In the early days of the Mac platform, there was a model called the Mac Plus that had no hard drive but could accept up to 4 MB of RAM. A floppy disk was 800K and dedicating 800K worth of your RAM to acting as a RAM disk was wonderful: you could even set up a control panel to automatically copy your OPERATING SYSTEM to the RAM disk and REBOOT FROM IT, spitting out the floppy and freeing up the drive so you could insert other floppies with your various documents on them.

Virtual memory came later. At first it was a bit of a gimmick on the Mac platform, something that wasn’t implemented well enough to be worth it, but the idea was to do the opposite of a RAM disk: to tell some of your cheap, slow storage space to pretend to be RAM. Because it was, in fact, so slow, your computer would read the bits it needed to work with into real RAM, and to make room for them it would write out some pieces you weren’t actively using at the moment to the hard disk and then use that part of RAM for the new bits. Usually it would do this when you switched between two open, running applications. Using virtual memory allowed you to have a lot more things open at one time. (On the Mac, it also made your system dog-slow; it just didn’t work very well. We heard that it was far better implemented on the Windows platform, but PC users had performance problems too.)

OK, let’s leave the past behind and switch focus to the modern operating system, and modern specs. It is my understanding that all of the modern operating systems (Mac OS X, Windows, and various flavors of Unix) do virtual memory quite well and rely on it so intrinsically that you can’t really turn it off (you can shrink or move the swap file, but the virtual addressing machinery itself is always on), nor would you generally ever find a reason to want to. They’re just a lot smarter about “paging” inactive bits of RAM out to a “swap file” on the hard disk and reading in the bits you’re going to need, and they do so in such a way that you don’t generally experience it as a slowdown. It’s not an afterthought; it’s designed in at a very low, fundamental level of how the operating system works.

So now, finally, the relevant bits. I work with someone who is under the impression that on his Windows server, a 3rd-party program that he runs to create a RAM disk results in a true RAM disk in the old-fashioned sense: a disk whose RAM is never, under any circumstances, paged out to the hard disk. We were experimenting with this to see if a disk-intensive database process on a 6+ gig array of databases would run faster. It should have (the CPU cores were not saturated, indicating that none of the threads we started were feeding them as fast as they could crunch the data). It did not.

My theory is that the modern operating system simply isn’t disposed to set aside a chunk of RAM and cease to address it with the virtual memory management scheme, because it’s doing that at such a low level that no 3rd-party app can instruct it not to. (If it were a Microsoft product that had been designed from the ground up to do so, then yeah, I’d believe it, especially if it required a reboot after designating the RAM disk.)

Anyway, I’m guessing that the virtual memory scheme manages ALL of the RAM, including the bits occupied by the components of the operating system itself, and that it does so for the contents of the RAM disk along with everything else. Six gigs and change is too large a chunk of RAM not to expect virtual memory to read out and write back bits and pieces of it to the swap file as it goes, given that it is indeed a 3rd-party program setting it up: Microsoft Windows, on some fundamental level, is going to treat it like any other program, and on a low level treat the RAM like any other RAM, even while, at a higher level, it treats it as a disk drive, an F:\ drive or whatever.

Do you think I’m most likely right about this?

it didn’t run any faster because he was starving the program (and Windows) of real RAM, thereby forcing stuff to be paged out anyway.

look, the simple answer is that if pagefile performance is in any way relevant to your workload, YOU NEED MORE RAM! Carving out a huge hunk as a RAM disk just causes other stuff to have to be dumped prematurely, and when the pagefile’s being thrashed it doesn’t matter what is being swapped in and out; your performance is going to take a nosedive.

even dumber is when someone makes a RAM disk and puts the pagefile on that :rolleyes:

Windows 9x had decent memory management so long as everything was 32-bit; if you had 16-bit code running about, things could go pear-shaped. Classic Mac OS had no memory management worth discussing.

ETA:

I’m not sure if a RAM Disk can be “nonpaged,” so I’m not 100% sure.

It’s not clear why you talked about Macs and virtual memory. But, FYI, the first real virtual memory/paging computer was the British Atlas computer, which rolled out in 1962. Alan Turing had earlier worked out an ancestor of the concept. So it predates Macs by decades. In fact, one of the oddities is that it wasn’t introduced into the personal computer universe earlier (and when paging did first arrive on personal computers, the implementations were, um, awkward). Even 1980s personal computer CPUs were vastly more powerful than the early systems that had it.

I’m with jz78817 on everything he said.

Do you know what third-party program he’s using? It might be helpful if we could have a look at what the program says it does, rather than what your colleague thinks it does.

If all else fails, you can direct him here: http://www.downloadmoreram.com/ :wink:

I’m not an expert on systems programming, but I do recall that it is possible to “lock” pages of RAM so that they cannot be swapped out. Look at Task Manager, Performance tab: there’s a “nonpaged” kernel memory figure, which is what this is, IIRC. Obviously, the core (kernel) should never be swapped out. Of course, Windows loads a ton of EXEs and DLLs, and things like printer drivers should swap out when they are not active. Simple rule: the less that’s locked in RAM, the better.
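For what it’s worth, here’s roughly what that locking looks like from user mode. This is just a minimal Win32 sketch in C (the 16 MB size is purely illustrative), not necessarily what any given RAM disk driver actually does:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = 16 * 1024 * 1024;   /* 16 MB -- purely illustrative */

        /* commit some ordinary, pageable memory */
        void *buf = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (buf == NULL) {
            printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }

        /* VirtualLock fails unless the region fits inside the process's
           minimum working set, so grow the working set first */
        SetProcessWorkingSetSize(GetCurrentProcess(),
                                 size + (4u << 20), size + (8u << 20));

        if (VirtualLock(buf, size))
            printf("locked: these pages now stay in physical RAM\n");
        else
            printf("VirtualLock failed: %lu\n", GetLastError());

        VirtualUnlock(buf, size);
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }

Note that this only pins the pages for as long as the process is alive; a kernel-mode RAM disk driver would allocate from the nonpaged pool instead, which a user-mode test like this can’t show.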

Having said that, it’s possible this RAMdisk does in fact lock itself into RAM - maybe the author can give a clue. However, why should it? If you only use a segment of the disk - let’s say it has 2 databases on it and you are only using one - why not swap the other one out and reclaim that RAM for the programs? Why not swap out the empty space?

Plus, consider the case of a relatively small database: I’m not sure how the disk caching on Windows works nowadays, but even with a regular disk, if there are a few files being used regularly, they will end up in the memory cache and disk access latency will not be an issue. Fancier disks keep the most recently accessed data in a RAM cache too.

(MS SQL Server and Exchange, for example, have a habit of grabbing all available memory on a server and storing frequently accessed pages from the database in RAM.)

So, I’m no OS/memory optimization guru (at least not since MS-DOS days), but as I understand it you’ve got an operating system that is designed to use more RAM than it physically has, using a swapfile on the disk as virtual RAM (and is generally acknowledged to do a decent job of choosing what and when to swap). And you’re adding a program that takes some of the RAM to pretend to be disk space? Isn’t that kind of, well, counterproductive? I mean, you’ll end up with data in physical RAM that’s pretending to be disk space that the OS is using as a swapfile to keep data which won’t fit in physical RAM, right? Which is at best just adding overhead and at worst allowing god-knows-what kind of problems.

Maybe Windows 95 didn’t do a great job of managing physical RAM and physical disk space, and could benefit from extra software, but I thought the consensus was that any modern Windows does a good enough job that adding extra software won’t help.

Can we get some details here? How much RAM does the server have? What OS is it running (2003/2008, Standard or Enterprise) and is it 32 or 64-bit? How is the disk structured?

Ideally you’d have a lot of RAM in your server, an OS that can address all of it, and a database program that can load as much of the DB into RAM as possible. In addition, you want fast disk that you can read from and write to quickly. If there are portions of the database you want to read frequently, you can put them on solid state disks. If you need to be able to write a lot of data quickly, create a RAID-5/6/10 array with lots of spindles.

Help us understand the real problem and the environment around it.

it’s more like the amount of physical memory is independent of the amount of virtual memory. The OS and programs work within virtual memory, and it’s abstracted away by memory management which maps virtual memory into and out of physical memory as needed. The pagefile is just a “backing store” for memory pages which need to stay in virtual memory but have to be taken out of physical RAM for the time being.
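if you want to see that abstraction directly, here’s a minimal C sketch (Win32; the sizes are arbitrary) of the difference between reserving virtual address space and actually committing memory with backing store:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* reserve 1 GB of *address space*: nothing physical (RAM or
           pagefile) is charged yet -- it's purely virtual at this point */
        void *p = VirtualAlloc(NULL, (SIZE_T)1 << 30, MEM_RESERVE, PAGE_NOACCESS);
        if (p == NULL)
            return 1;
        printf("reserved 1 GB at %p\n", p);

        /* committing a page is what gives it backing store: a spot in
           physical RAM and, potentially, in the pagefile */
        void *q = VirtualAlloc(p, 4096, MEM_COMMIT, PAGE_READWRITE);
        printf("committed one page at %p\n", q);

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }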

I suspect that it’s possible to make a real un-swappable disk image in RAM. I also suspect that there are close to zero real-world reasons to make one.

I wouldn’t rely on a non-swappable RAM drive to make a decent database server any faster. Database software is generally already optimized to make efficient use of RAM and minimize disk access. It’s not like DB engineers have never heard of virtual disks or IO caching, and besides, intensive DB queries might take up significant chunks of RAM just as a “working area” with some fall-back mechanisms that use disk space when they run out of RAM. Going around that by reserving lots of RAM for a disk image is likely going to be counter-productive; the DB software can’t use as much RAM itself, so it has to access the virtual disk more (maybe orders of magnitude more) than it would use the real disk if it had more memory.

Keep in mind that even though RAM-disk access is faster than “real disk access,” direct, real, random access to a large chunk of RAM is faster still, and much easier to optimize than going through a RAM copy of a file system.

ETA: also, modern operating systems tend to use most if not all of the “free” RAM for IO caching, meaning that most of your often-accessed data will already be in RAM no matter what you do. Database servers are typical examples of programs that tend to grab lots of RAM themselves and implement more caching themselves, because DB disk access is hard for an OS to predict.

SQL Server does this. I assume there’s a standard system call for Windows processes to lock pages, if so desired, so they can’t be swapped out… but beware of the law of unintended consequences. I do remember running XP on 16 MB of RAM; “painful” comes to mind. The program using the RAM disk data may not be locked, as pointed out above.

Short answer: Yes

But here’s a few points:

  1. The ability to lock pages in place (that is, prevent swapping-- something required to create a true RAM disk) is disabled by default in modern versions of Windows. An administrator, or a process with administrative permissions, can turn the feature on; see the sketch after this list. (Meaning: this mysterious tool your colleague has might have set that permission when installed, or perhaps it didn’t and it doesn’t actually create a RAM disk.) It’s also possible that, on a 64-bit OS, Windows won’t respect this setting even if it’s turned on… the doc is a bit vague. (“Locking pages in memory is not required on 64-bit operating systems”)

  2. I highly, highly doubt you could do anything manually to increase or tune SQL Server performance, short of simply adding more physical RAM-- the engineers who design SQL Server know the Windows kernel and memory management like the backs of their hands, and if you try to “manually” increase performance by creating RAM disks, you’re just going to be ruining their already-great automatic optimizations. It would be like hiring a great pastry chef, then “helping” him with your cake by spooning in some sugar when he wasn’t looking.

  3. SQL Server aside, I’d be surprised if you could increase even normal application performance with a RAM Disk in a modern Windows kernel. It’s really, really good at managing memory-- seriously. (You might be able to squeeze a percentage point out of Linux, but even then I doubt it.)

  4. The extremely long rant about Classic Mac memory management is completely and utterly irrelevant. Even if Classic Mac’s virtual memory implementation wasn’t awful (and it was awful-- but to be fair it was running on CPUs that lacked the support for it), that OS has been stone dead since 2001. Bringing it up here just confuses the bejeezus out of everybody reading your post. Modern Macs have memory management on par with Linux, if not quite up to Windows level.
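Re: point 1, here is roughly what “turning the feature on” amounts to, as a hedged C sketch. It assumes an administrator has already granted the account the “Lock Pages in Memory” user right (SeLockMemoryPrivilege) under Local Security Policy; that privilege is what AWE-style page locking checks for:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE tok;
        TOKEN_PRIVILEGES tp;

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &tok))
            return 1;

        /* SE_LOCK_MEMORY_NAME is the "Lock Pages in Memory" user right */
        LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid);
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

        AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
        /* AdjustTokenPrivileges "succeeds" even when the account was never
           granted the privilege, so GetLastError has to be checked */
        if (GetLastError() == ERROR_NOT_ALL_ASSIGNED)
            printf("account hasn't been granted Lock Pages in Memory\n");
        else
            printf("SeLockMemoryPrivilege enabled for this process\n");

        CloseHandle(tok);
        return 0;
    }

If the third-party RAM disk tool never does something like this (or installs a kernel driver that allocates nonpaged memory instead), then whatever it creates is pageable like everything else.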

Isn’t there a nitpicky exception for 32-bit Windows on a machine with over 4 GB? I thought that was a way to actually use that additional memory.

There’s PAE, Physical Address Extension, which (theoretically) allows you to use more than 3.5(ish) GB of RAM on a 32-bit OS. Microsoft disables that, also, since many/most 3rd-party 32-bit drivers will crash when handed physical addresses above 4 GB.

I think you can still force 32-bit Windows to boot with it on (the /PAE switch in boot.ini on XP/2003, or “bcdedit /set pae ForceEnable” on Vista and later). No guarantees that your system remains stable, though.

(Interesting aside: since this thread started with a Classic Mac infodump, Apple actually did their own “PAE”-like hack when switching Classic Mac from 24-bit pointers to 32-bit pointers, starting with System 7 in the early ’90s. For a long time, there was a control panel setting you could use to disable 32-bit addressing in case you had applications that weren’t compatible… of course, that limits your usable RAM to 8 MB.)

32-bit Windows will always enable PAE if the processor supports hardware NX (no-execute). Only on server versions will that also enable use of more than 4 GB of RAM.
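You can check what your own boot ended up with; here’s a minimal C sketch using IsProcessorFeaturePresent (the two flags report the running kernel’s state):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* both flags reflect what the running kernel actually enabled */
        printf("PAE enabled: %s\n",
               IsProcessorFeaturePresent(PF_PAE_ENABLED) ? "yes" : "no");
        printf("NX/DEP enabled: %s\n",
               IsProcessorFeaturePresent(PF_NX_ENABLED) ? "yes" : "no");
        return 0;
    }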

To be fair, that was only with the earlier 68k line. Once they moved to PowerPC (which fully supported real virtual memory), Apple was in too much of a mess to build in OS support.

Yes; I should have mentioned this. NX requires PAE, so if NX is enabled on your 32-bit system, PAE is also enabled… however, it will not increase your total address space, which is the bit that can go wonky. Thanks for the correction.

Specifically: XP SP2 and up, and Windows Server 2003 SP1 and up have NX enabled by default, if the NX bit is supported by the CPU.

I happened to be one of the few freaks who actually liked Mac Classic, despite its warts. And its virtual memory support did get better, although it was always complete junk compared to Windows 2000. When Apple turned Mac into NeXT, I moved to Windows.