Simple program for testing computer speed

I have an old Scrabble program that analyzes plays using Monte Carlo analysis. I’ve tested it on three different computers, and the results are strange: it runs fastest on the slowest computer and slowest on the fastest one!

Since this is a rather quirky program, I thought I should find something explicitly written to test processor speed. Are there any free, quick programs on the Internet that do something simple like calculating pi to a million digits and report exactly how long it took, so I can retest my computers and see which one is fastest? Thanks.
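If you’d rather roll your own than hunt for a download, a timing test along exactly those lines is easy to sketch in Python, assuming Python is installed on each machine. This computes pi as one big integer via Machin’s arctangent formula and reports the elapsed time; it’s a toy benchmark (it mostly exercises big-integer arithmetic), not a definitive speed measure.

```python
import time

def arctan_inv(x, unity):
    # arctan(1/x) scaled by `unity`, via the alternating Taylor series,
    # using Python's arbitrary-precision integers throughout.
    total = term = unity // x
    xsq = x * x
    n, sign = 1, 1
    while term:
        term //= xsq
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def machin_pi(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    # Carry 10 guard digits to absorb truncation in the integer divisions.
    unity = 10 ** (digits + 10)
    pi = 16 * arctan_inv(5, unity) - 4 * arctan_inv(239, unity)
    return pi // 10 ** 10  # pi scaled by 10**digits, as one big integer

if __name__ == "__main__":
    digits = 10_000  # bump this up to stress a faster machine
    start = time.perf_counter()
    result = machin_pi(digits)
    elapsed = time.perf_counter() - start
    print(f"first 20 digits: {str(result)[:20]}")
    print(f"{digits} digits of pi in {elapsed:.3f} seconds")
```

Run the same script with the same digit count on each machine and compare the reported times; just remember that it measures only integer-arithmetic throughput, nothing about graphics, disk, or memory bandwidth.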

I’m just guessing, but a possible explanation is that the old program runs faster on the old computer because it was optimized for that hardware.

I’m not sure how you would be able to get meaningful results from a test if the hardware is significantly different between these three machines. You would have to consider a great many variables (graphics, memory, etc…) in addition to just processor speed.

What are the CPUs for each computer you tested? RAM? Motherboard? Graphics RAM?

Gotta eliminate the unknown variables before one can offer an opinion.

It seems like the program mentioned in the OP is a simple numerical analysis program, so its execution speed shouldn’t be affected by things such as graphics hardware.

The only thing I can think of that could cause that is that the program was compiled by a compiler that takes advantage of some quirks of the old computer. If you have the source, try recompiling it for the new computers.

Opus1, I suggest going to distributed.net and downloading their client, then running it in benchmark mode.
This won’t give an absolute measure of computing power, but it is nonetheless an interesting metric.
Study the results pages at http://n0cgi.distributed.net/speed/
They may not give you anything definitive, but if you look closely enough they’ll teach some interesting computer science lessons.
One lesson is that a computer which performs one raw computational task quickly may perform another computational task more slowly than a computer it beat in the first metric…

Benchmarking is a fool’s pursuit. No single number can possibly give you any valuable knowledge about anything more complex than a toy. General-purpose computers are too complex, and software can be optimized in too many different ways, for a program, or suite of programs, to be valuable for testing something as amorphous as ‘speed’.

What speed? The CPU’s clock rate? Worthless: Different opcodes take up different amounts of time, and the top speed of the bus (what the CPU uses to communicate with the world) probably relegates most cycles to useless waiting. Bus speed? Only marginally less naive: Your video or sound cards may or may not work as fast as the bus, and if the output peripherals can’t keep up with the main processor, that speed is wasted.

If hardware speed isn’t worth talking about, what about actual processing speed? Processing what? The digits of pi or e or sqrt(2)? Well, are you going to use it as a pocket calculator or a real computer? Graphics calculations? Sound generation? Text processing? All of those only measure one thing, and so are worthless as measuring sticks.

The only ‘benchmark’ worth mentioning is real usage. That’s all.

In the '90s I used PC Magazine’s various iterations of their benchmark programme and was very pleased with their accuracy. They now have a whole suite of programmes, described on this page.

By way of history, I was running a bank of about 15 PCs with various characteristics; each was running essentially the same programme over essentially the same data-sets … differences between the runs were normally just small tweaks to the parameterization.

I won’t say I lived and died by the benchmarking, but I was able to determine that execution speed was virtually linear with what PCMag called “CPU-mark” and virtually unresponsive to things like disk access times, which gave me confidence when buying the cheaper hard drives!

There was one great time when the magazine was testing some new machines and it became absolutely clear that the CPU-Mark score was greatly affected by the speed of the L2 cache … I forget the details, but it was something along the lines of Hewlett-Packard putting 15-nanosecond memory in the cache as opposed to the industry-standard 25 and getting a hell of a big bang for the buck as far as this particular score was concerned - which, as I said, was the only one that mattered to me.

So, it came time to buy another couple of machines and I got on the 'phone to one of my friendly salesmen … OK, I said, so I want this CPU … but how fast is the memory in your L2 cache? I had to explain to him that yes, I really wanted to know before writing a cheque, and after a few 'phone calls back and forth trying to get this information he just exploded: “Look, it’s fast, OK???”

The order went to the other friendly salesman, who called his supplier and got the specs on just what he was selling me.

Sorry, I can’t resist telling that story.

I just realized I can expand a bit on Carcosa’s answer - possibly even usefully, if your ‘old’ Scrabble programme is really old! Is it compiled with a 16-bit compiler? And is it running under DOS? At my old shop we kept DOS until the advent of Pentium 32-bit chips finally made it a sensible proposition to move to Windows-NT … so there was a three-month period in which, basically, we moved all our number-crunching software from 16-bit compilations running under 16-bit OSs with 16-bit CPUs to a 32-32-32 configuration.

If I remember correctly, one of the more dramatic illustrations of the desirability of change was the fact that our 16-bit programme running under DOS on a Pentium-200 ran far slower than the same programme under the same OS on a 486-133 CPU. Again trying hard to remember (this hasn’t exactly been front-of-mind stuff for a few years, you understand), I think it’s called stalling … and I had been expecting to see some effect, but not nearly as much as was actually the case.

When we recompiled as a 32-bit programme and ran it under NT-4.0, the Pentium-200’s speed advantage was even more dramatic in the proper direction. At this point I honestly can’t remember if we ran it as a 16-bit programme under NT, let alone what the results might have been.

NT/2000/XP have a DOS emulator, not real DOS. So things are generally slower. If you ran it on 95/98/ME it should run fast.

I never ran 95/98/ME (or any other Windows version other than NT 4.0 and now W2K) so I have no direct experience with compilation/OS/CPU issues with those OSs. I believe, however, that W-95 and W-98 circumvented the processor stalling issue by thunking: 16-bit programme code was ‘thunked’ into 32-bit instructions prior to being sent to the processor - which avoided the stall but, of course, incurred a small amount of overhead on the conversion.

Windows 9x is still built on top of DOS, even if it’s called DOS 7.0.

That generally happens when a 16-bit processor needs to fetch data located at an odd byte address instead of on a word boundary. This doesn’t happen with 32-bit processors fetching at word boundaries, AFAIK.

Yeah, the NT family (NT, 2K, XP) do thunking also, jiHymas. It’s slower because it’s emulated. The 9x/ME set are, as Urban Ranger said, running with DOS, so they can run at full speed. I’d assume this is the problem with the program, rather than the 16-bit issue.

I think CPU speed index programs would be a waste of time if you already know the more recent PCs have considerably faster processors. The real issue is why the program runs slower on a faster hardware platform. A few possibilities (WAGs, actually) I can think of -

If it’s DOS-based: IIRC, there are some quirks in the way some very simple old DOS programs are constructed, in that they rely (in some fashion) on internal software timers based on calculation cycles for part of their execution. Modern PCs are so much faster than anything contemplated when those programs were written that it confuses the hell out of the program and slows it considerably (if it runs at all).

Some older DOS programs are designed to use memory extenders (i.e. expanded/extended memory), and the way DOS boxes and DOS programs work under various iterations of Windows may leave the program unable to use or see available memory, so it may just work (slowly) within a restricted memory space.

If the program makes a direct check of the CPU to see whether it has a math co-processor onboard, modern CPUs may confuse the program and cause it to fall back to a non-coprocessor calculation method, which is slower.
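On the software-timer possibility: the classic real-world case is the delay-calibration bug in Borland Pascal’s CRT unit (the infamous “Runtime error 200”), where startup code counts busy-loop iterations during one ~55 ms BIOS timer tick into a 16-bit value, and a fast enough CPU overflows it. Here’s a rough sketch of the failure mode in Python, with made-up iteration speeds purely for illustration:

```python
# Illustrative sketch of how a DOS-era delay-loop calibration breaks on a
# fast CPU: the iteration count is kept in a 16-bit register, so a fast
# machine's count wraps around and looks *smaller* than a slow machine's.

def calibrate(iterations_per_ms):
    """Iterations completed during one ~55 ms BIOS timer tick."""
    true_count = iterations_per_ms * 55
    stored_count = true_count & 0xFFFF  # what a 16-bit register retains
    return true_count, stored_count

slow_true, slow_stored = calibrate(500)    # hypothetical 486-class speed
fast_true, fast_stored = calibrate(5000)   # hypothetical Pentium-class speed

print(slow_true, slow_stored)   # fits in 16 bits, calibration is sane
print(fast_true, fast_stored)   # wrapped: the fast CPU "measures" slower
```

The faster machine ends up with a smaller calibration constant than the slow one, so every delay computed from it is wildly wrong; in the real Borland bug, a subsequent division by the wrapped value could even trap with a runtime error.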

It may only be the Pentium Pro that had the problem I’m thinking about … my shop was fully converted to 32-bit by the time the Pro’s successors came around.

I was able to find an article, The Intel Pentium II (‘Klamath’) CPU, which noted:

Unfortunately, articles explaining why DOS runs slowly on a Pentium Pro are now rather thin on the ground!