I’ve been running 96 MB of 100 MHz SDRAM with an Athlon 900 for a few months now and have just bought 256 MB of 133 MHz SDRAM, so my question is:
What is quicker: 352 MB running at 100 MHz, or 256 MB running at 133 MHz?
I’m running Windows XP if that matters. It’s all working with 352 MB at the moment, so there are no motherboard issues.
I have very limited experience with XP, but I would guess that it behaves like the earlier incarnations of the Windows family. They mainly use memory for applications, and as long as you don’t run more programs than fit in RAM, any extra memory is wasted. Therefore I’d venture to say that on a Windows PC that isn’t overloaded with concurrent applications, less but faster memory is probably quicker.
Running Linux, on the other hand, I’d probably take more memory, even at a speed penalty. Linux is very good at using ‘unused’ memory as cache, and every MB you throw at a Linux machine will improve performance.
Does your motherboard support asynchronous DRAM clocking (I forget the precise term)? If not, you’re still going to have to run your RAM at 100 MHz. (The motherboard doesn’t “know” the speed of the RAM; you have to tell it. And the rated RAM speed is a maximum; you can run it slower if you want to.) The reason for this is that the Athlon 900 only supports a 100 MHz (double-pumped) bus. If it does support async DRAM, I’ve found that 100 MHz RAM overclocks pretty nicely anyway, so you may still be able to use your 100 MHz RAM at 133 MHz.
Either way, you may or may not notice a performance difference; it will depend on what you’re doing. If there is a difference it will not be dramatic.
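To put rough numbers on the clock-speed side of this, here is a back-of-the-envelope sketch (not a benchmark, and it ignores latency and real-world efficiency): single-channel SDR SDRAM on a 64-bit bus transfers once per clock, so peak theoretical bandwidth is just the clock times 8 bytes.

```python
# Back-of-the-envelope sketch only: peak theoretical bandwidth of
# single-channel SDR SDRAM on a 64-bit (8-byte) memory bus.
BUS_WIDTH_BYTES = 8  # 64-bit bus

def peak_bandwidth_mb_s(clock_mhz: float) -> float:
    """SDR SDRAM transfers once per clock, so MHz * 8 bytes = MB/s peak."""
    return clock_mhz * BUS_WIDTH_BYTES

for mhz in (66, 100, 133):
    print(f"PC{mhz}: {peak_bandwidth_mb_s(mhz):.0f} MB/s peak")
```

So the 133 MHz option buys roughly a third more peak bandwidth on paper; actual application-level gains are much smaller, which matches the benchmark figures mentioned later in the thread.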
256 MB of RAM operating at 133 MHz would be the faster option for most people. Extra RAM is only good up to a point. If you routinely exceed your available RAM by loading many applications, or a couple of very demanding ones, then adding more will help. However, I see some people under the delusion that adding silly amounts of RAM, like over a gigabyte, is going to make their system faster. Usually it just costs more money and does nothing, because it is very rare for the system to demand that much RAM.
I say that 256 MB is the better option because, while it is not a huge amount by today’s standards, it should be sufficient unless you run huge, graphics-oriented applications or massive ones like large databases. If 256 MB covers your workload, you will benefit from the extra clock speed.
However, if your system often demands the extra 96 MB, then you might be better off keeping all 352 MB at the slower clock speed.
Of course, the best option is just to add more 133 MHz RAM if you really need it.
I say the following with no intent to flame or cause discord. I run XP Pro at home and operate a Linux server in a commercial context.
However, the Windows NT, Windows 2000, and Windows XP line of Microsoft operating systems do in fact have a highly evolved and efficient caching algorithm. Those operating systems are written by professionals, and Windows NT in particular was engineered by David Cutler, a hired gun Microsoft brought in from outside to ensure maximum quality.
Your assertion that Windows doesn’t use spare RAM effectively as cache would have been valid if made about Windows 3.11, Windows 95, Windows 98/98 SE, or Windows ME.
For the record though, even Windows 3.1 did use extra RAM to cache information.
Getting back to the original question, I would suggest that you open up Performance Monitor and use it to find the peak committed memory. Theoretically, if it at no point touches 256 MB, you’d be better off pulling those last 96 MB of RAM and taking the clock-speed bump. Generally, though, I’d stick with more RAM. Benchmarks comparing 66 MHz and 100 MHz RAM showed only about a 15% difference in measurable performance, and the gap from 100 MHz to 133 MHz is smaller than that.
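The rule of thumb above can be sketched as a toy decision function (hypothetical names, and a deliberate simplification; in practice the peak-commit figure comes from Performance Monitor):

```python
def pick_config(peak_commit_mb, options):
    """Toy heuristic for the RAM question in this thread.

    options is a list of (size_mb, clock_mhz) tuples. Among the
    configurations big enough to hold the observed peak committed
    memory, prefer the highest clock; if none fit, paging will
    dominate, so prefer the largest size instead.
    """
    fits = [opt for opt in options if opt[0] >= peak_commit_mb]
    if fits:
        return max(fits, key=lambda opt: opt[1])  # clock wins when RAM suffices
    return max(options, key=lambda opt: opt[0])   # capacity wins when paging

# The OP's two choices:
choices = [(352, 100), (256, 133)]
print(pick_config(200, choices))  # light load -> (256, 133)
print(pick_config(300, choices))  # needs the extra 96 MB -> (352, 100)
```

This leaves out the file cache entirely, which is the point the NT-line posters were arguing about: on NT/2000/XP the “spare” RAM above peak commit still does useful work as cache, which is one more reason to lean toward keeping the extra memory.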