I have a Dell Inspiron 4100 laptop. I was preparing photos (scanning, editing, captioning) when the procedure suddenly broke down: I got a message that memory was low. This warning message came before I deleted extra copies of photos from the scan list in My Pictures. (After I did so, the out-of-memory message did not appear.) Do photos really take that much of a bite out of virtual memory?
It’s not so much the number of photos on disk as the system trying to manipulate lists of them and/or display all the thumbnails.
In Windows parlance, “virtual memory” is the page file on your system. The low-memory warning usually appears when you have many programs open simultaneously, using up all of your physical memory as well as the page file.
Windows can dynamically enlarge the page file temporarily so that you can get your work done, but because accessing the page file is incredibly slow compared to physical memory, you don’t want to lean on it any more than necessary.
Virtual memory and page files are different things. Virtual memory is the technique of having programs make memory reads/writes against a virtual range of addresses that Windows assigns them, instead of directly addressing specific physical memory locations the way they did under older operating systems.
Page files interface with virtual memory when Windows decides that some of the data in the virtual memory allocated to a program should go on the hard drive instead of in RAM.
From a technical standpoint that may be true, but in general usage, particularly with Windows, the terms have more or less become interchangeable, probably because when you go to the “virtual memory” settings under My Computer, the only options there are for adjusting the size and location of the “paging file”.
Virtual memory is a technique used by most operating systems today. Let’s say your computer has 1 GB of RAM. What happens if you try to load more than 1 GB of data into it? Well, if you didn’t have virtual memory, your computer would come crashing to a halt because it ran out of memory. What Windows and most other operating systems do these days is take some chunk of disk space (let’s say another 1 GB) and use that as a “swap file” or “page file”.
Let me use an oversimplified example to show how this works. Let’s say your memory has 10 pieces in it, numbered 1 to 10. Now, in the old days, if a program wanted piece 5, it asked for piece 5, and that was that. These days it is more complicated. Now, let’s say you have 2 programs running. Program A asks for 4 pieces of memory and program B asks for 3 pieces of memory. Windows says OK, pieces 1, 2, 3, and 4 belong to program A, and 5, 6, and 7 belong to program B. But here’s the thing. Windows doesn’t tell program B that he’s got 5, 6, and 7. Program B thinks he has pieces 1, 2, and 3. When program B asks for piece 2, Windows really gives him piece 6. Each program’s memory has been “virtualized”. Each program asks for piece 1 or 2 or whatever, but it has no way of knowing if it’s actually getting piece 1 or 2. Of course, it doesn’t matter. Program B gets 3 total pieces of memory, and he doesn’t care if he’s really getting piece 2 or piece 6. As long as he gets a piece when he asks for it, he can’t tell where it really came from.
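If it helps to see that bookkeeping spelled out, here is a minimal sketch of the idea in Python (the table and the names are made up for illustration; the real mapping lives inside the OS and the CPU, not in a little dictionary):

    # A toy "page table" per program: the piece number the program asks for
    # maps to the piece of physical memory it really gets.
    page_tables = {
        "A": {1: 1, 2: 2, 3: 3, 4: 4},   # program A's pieces happen to line up
        "B": {1: 5, 2: 6, 3: 7},         # program B thinks it has 1-3, really has 5-7
    }

    def read_piece(program, virtual_piece):
        """Translate a program's piece number into the real physical piece."""
        return page_tables[program][virtual_piece]

    # Program B asks for its piece 2 and, without ever knowing it, gets piece 6.
    print(read_piece("B", 2))   # prints 6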
Now here’s another problem. Let’s say we start up program C, and he wants 4 pieces of memory. We only have 3 left (8, 9, and 10). In the old days, your computer would crash (out of memory error). These days, Windows and most other operating systems will take a chunk of disk space and will pretend that it is memory. So, let’s say Windows takes space for 10 more pieces of memory on the disk. So now, Windows has 20 “virtual” pieces of memory. But here’s the thing: only 10 of them actually fit into RAM at any given time. So, Windows says OK, program C is using pieces 8, 9, 10, and 11, but 11 is over on the disk.
Now, what happens when program C actually tries to access what it thinks is piece 4? Windows goes ok, that’s really piece 11, and oops, it’s over on the disk. This is called a “page fault”. Windows immediately stops everything, and he picks one of the pieces that he thinks hasn’t been used much lately (let’s say he picks piece 4) and he shoves that onto the disk. Then he takes piece 11, and shoves it into the “real” piece 4. So, your physical memory has pieces 1, 2, 3, 11, 5, 6, 7, 8, 9, and 10, and the disk has piece 4.
Now what if program A asks for piece 4 again? Windows doesn’t necessarily shove it back into the 4th slot. It may say that oh, 9 hasn’t been used in a while, I’d rather shove it there. So your physical memory ends up being 1, 2, 3, 11, 5, 6, 7, 8, 4, and 10, and the disk has piece 9 on it.
Each program is still asking for piece 1 through whatever, but the real location of that piece in physical RAM could be anywhere. Hence the entire memory is “virtualized”.
Now one thing to keep in mind is that swapping things to and from the disk is SLOW, so Windows tries very hard to keep the most commonly used pieces in RAM.
Now, in our example here, we have room for 10 pieces in RAM and 10 more on disk. Now let’s say program A keeps asking for more and more pieces. Well, we’ve used up 11 so far and we’ve got 20 total, so the first 9 times it does this, no problem. The next time it asks, though, we’ve got a problem. All 10 pieces in RAM are used up and so are all 10 pieces on the disk. This is when Windows goes aw shit, we ran out of swap space, and it pops up that annoying message about adjusting the size of your swap file. What it’s doing is figuring oh, maybe if I increase the disk file size from 10 to 15 I’ll be OK for a while. Then I’ll have 25 total virtual chunks to play with.
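If you want to see that whole dance written out, here is a toy simulation of it in Python. It is only a sketch of the idea above (the slot counts, the least-recently-used rule, and the names are all made up for illustration; the real paging code in Windows is far more sophisticated):

    RAM_SLOTS = 10          # physical pieces available
    swap_slots = 10         # pieces' worth of space in the "swap file" on disk

    ram = {}                # physical slot -> (program, piece number)
    disk = {}               # swap slot -> (program, piece number)
    last_used = {}          # physical slot -> when it was last touched
    clock = 0               # crude "recently used" counter

    def allocate(program, piece):
        """Hand out one more piece, spilling to disk if RAM is already full."""
        global swap_slots
        if len(ram) < RAM_SLOTS:
            slot = next(s for s in range(1, RAM_SLOTS + 1) if s not in ram)
            ram[slot] = (program, piece)
        elif len(disk) < swap_slots:
            slot = next(s for s in range(1, swap_slots + 1) if s not in disk)
            disk[slot] = (program, piece)
        else:
            # RAM and swap are both full: grow the swap file and try again.
            # This is roughly the moment Windows pops up that annoying message.
            swap_slots += 5
            allocate(program, piece)

    def touch(program, piece):
        """Access a piece; if it's out on disk, that's a page fault."""
        global clock
        clock += 1
        for slot, owner in ram.items():
            if owner == (program, piece):
                last_used[slot] = clock          # already in RAM, nothing to do
                return
        # Page fault: find the piece on disk, pick the least recently used
        # piece in RAM as the victim, and swap the two.
        disk_slot = next(s for s, owner in disk.items() if owner == (program, piece))
        victim = min(ram, key=lambda s: last_used.get(s, 0))
        ram[victim], disk[disk_slot] = disk[disk_slot], ram[victim]
        last_used[victim] = clock

    # Programs A, B and C ask for 4, 3 and 4 pieces; the last one lands on disk.
    for prog, count in [("A", 4), ("B", 3), ("C", 4)]:
        for piece in range(1, count + 1):
            allocate(prog, piece)

    touch("C", 4)   # C's piece 4 was on disk: page fault, some other piece gets evicted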
In reality you’ve got a lot more than 25 total chunks of memory to play with, but I hope that illustrates the basic idea. If you bring up Task Manager (Ctrl+Alt+Del) you can see how much physical memory you are using, and how much of your swap file you are using as well. The system I am typing this on has 1 GB of RAM and about a 1.5 GB swap file, and it’s using about 0.5 GB of that swap file.
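If you would rather read those same numbers from a script than from Task Manager, the third-party psutil package reports them. A quick sketch (this assumes you have installed psutil, e.g. with pip; the gigabyte conversion is just for display):

    import psutil

    ram = psutil.virtual_memory()     # physical memory, despite the name
    swap = psutil.swap_memory()       # the page/swap file

    gb = 2 ** 30
    print(f"RAM:  {ram.used / gb:.2f} GB used of {ram.total / gb:.2f} GB ({ram.percent}%)")
    print(f"Swap: {swap.used / gb:.2f} GB used of {swap.total / gb:.2f} GB ({swap.percent}%)")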
So, the way that you “get back” virtual memory is you free up memory somewhere. If you keep opening up more and more pictures, each of those pictures takes up some chunk of memory, and eventually you are going to run out. You can close out your pictures, but some programs actually keep your pictures in memory even when you close them, so closing the picture on the screen doesn’t always work. It depends on what program you are using and how it manages its memory. “Good” programs clean up after themselves, so when you close out your picture on the screen the memory it was using gets freed up.
Deleting files off of your disk doesn’t do anything for freeing up chunks of RAM.
Excellent post engineer_comp_geek (and others), thanks!
There have been a lot of RAM tools over the years. Are any of them worth the download?
It depends on how they work. My guess is that they push “stale” memory into the page file to give the impression of “freeing up” memory for use.
The more aggressively you set the program, the earlier it will declare a memory segment “stale” and shift it over.
What this means in practice is that while you may have more memory for a new program run right afterwards, all of your other programs will dramatically slow down as Windows literally hiccoughs for a second, finds the memory in the page file instead of in physical memory, and spends time putting it BACK into physical memory (since it’s no longer stale).
Give and take. I’d rather spend $20 and buy another gigabyte of DDR2 RAM.
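For what it’s worth, one documented way a tool can produce that effect on Windows is to ask the OS to trim a process’s working set, which shoves its less-used pages out toward the page file. A minimal sketch of that single call through Python’s ctypes (this assumes working-set trimming is the mechanism a given tool uses; many of them do other things as well, and this only runs on Windows):

    import ctypes

    kernel32 = ctypes.windll.kernel32

    # Passing (SIZE_T)-1 for both the minimum and maximum working-set sizes
    # tells Windows to trim this process's working set as far as it can.
    # The "freed" RAM isn't really free; those pages have just been pushed
    # toward the page file, which is exactly the trade-off described above.
    process = kernel32.GetCurrentProcess()
    kernel32.SetProcessWorkingSetSize(process,
                                      ctypes.c_size_t(-1),
                                      ctypes.c_size_t(-1))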
The standard explanation seems to be “The Thing King and the Paging Game”, which is old and doesn’t reflect current OS design, but, hell, I seriously doubt dougie_monty will ever need to optimize page faults.
Many of the “memory optimizers” try to reduce memory fragmentation by forcing the allocation of large chunks of contiguous memory and then releasing them again.
When the computer has finished starting and is ready to load applications, all of the memory is available in one big lump, called the free list. As programs start, that big lump gets broken up, with each program getting a small bit of the lump, and the big single lump in the free list gets smaller. At some point, an application will need to return some memory. This returned bit of memory will not be added back into the big lump, but will be added as a new small item on the free list. When a later request for memory comes along, the small free item may not be big enough to satisfy the request, so the big free item is broken down a bit more.

Eventually, with requests and returns being made all the time, the free list is a long list of lots of small chunks of memory. Some of these items are next to each other (contiguous), and so can be merged back into bigger bits of memory that can satisfy new requests. This free list optimisation is an expensive operation, and the OS will put off doing it until it really needs to - after a large memory request has been made that cannot be satisfied. This slows down the starting of applications that want lots of memory when it has been fragmented. All memory optimisers do is use background cycles to request (and then free) big chunks of memory to trigger free list optimisation in the background, so that it does not happen when you don’t want it to happen.
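Here is a toy free list in Python that shows exactly that fragmentation and the expensive merge step, using a first-fit strategy and made-up sizes (a sketch of the idea only, not how any real allocator lays things out):

    # Each free-list entry is (start, size) in made-up "units" of memory.
    free_list = [(0, 100)]           # just after boot: one big 100-unit lump

    def allocate(size):
        """First fit: carve the request out of the first free chunk big enough."""
        for i, (start, chunk) in enumerate(free_list):
            if chunk >= size:
                if chunk == size:
                    free_list.pop(i)
                else:
                    free_list[i] = (start + size, chunk - size)
                return start
        return None                  # no single chunk is big enough: "out of memory"

    def free(start, size):
        """Return a block; it comes back as its own small free-list entry."""
        free_list.append((start, size))

    def coalesce():
        """The expensive cleanup: sort the list and merge neighbouring chunks."""
        free_list.sort()
        merged = [free_list[0]]
        for start, size in free_list[1:]:
            last_start, last_size = merged[-1]
            if last_start + last_size == start:
                merged[-1] = (last_start, last_size + size)   # contiguous: merge
            else:
                merged.append((start, size))
        free_list[:] = merged

    # Fragment the memory: hand out ten 10-unit blocks, then free every other one.
    blocks = [allocate(10) for _ in range(10)]
    for b in blocks[::2]:
        free(b, 10)
    print(free_list)      # five separate 10-unit holes
    print(allocate(20))   # None: the request fails even though 50 units are free

    # Free the rest and run the merge; the big lump comes back.
    for b in blocks[1::2]:
        free(b, 10)
    coalesce()
    print(free_list)      # [(0, 100)]
    print(allocate(20))   # 0: now it succeeds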
OS designers work very hard to avoid these fragmentation issues (first-fit vs. best-fit allocation strategies, merging on return, background memory optimisation), so I doubt that memory optimisers add much benefit to XP or Vista, and they can slow the machine down. Fragmentation was a real issue for Windows 3.1/95/98/ME, where there were areas of memory (under the 1 MB limit) that were critical for OS/application operation and at severe risk of fragmentation. Even then, the cost of running an optimiser (in terms of system reliability and performance) was often greater than the benefit returned (more apps running, fewer out-of-system-memory/out-of-handles errors).
Si