CPU speed maxing out?

I just got a new computer and am underwhelmed by its performance. This led me to thinking about the different computers I’ve had.

1st was a 386 with (IIRC) 16MHz.
2nd a 486 with 60MHz.
3rd a Pentium at 90MHz.
Then I went through 133MHz, 350MHz, 766MHz and 1.4 GHz, and now 3.0 GHz (Intel, socket 775, with HT and 2MB cache).

All previous upgrades showed considerably better performance. Double the CPU speed and the performance doubles… more or less. This latest upgrade gave me about a 20% boost (and I doubled the RAM too). Since I’m running the same software, that’s not what’s hogging the speed.

Now, for a number of years, we came to expect that the speed doubled at certain intervals, and there was some kind of formula saying that you’d get double the memory for the same money every 18 months. But lately, Intel and AMD have been spending more and more time downplaying CPU speed and bringing up other things which seem less tangible.
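The formula being half-remembered here is Moore’s law, popularly paraphrased as a doubling roughly every 18 months. A quick sketch of how that compounding works (the 18-month period is the popular paraphrase, not an exact figure):

```python
# Moore's law, as popularly paraphrased: capacity per dollar doubles
# roughly every 18 months. Here's how quickly that compounds.

def growth_factor(months, doubling_period=18):
    """How much capacity-per-dollar grows after `months`."""
    return 2 ** (months / doubling_period)

for years in (1.5, 3, 6, 9):
    print(f"{years:>4} years -> {growth_factor(years * 12):.0f}x")
# 1.5 years -> 2x, 3 -> 4x, 6 -> 16x, 9 -> 64x
```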

So this leads to my idle speculation: Have the makers reached some kind of physical limit and are more or less unable to double the performance, even when they’re doubling the CPU frequency?

I believe that, at the moment, you would do better with a 2x-faster hard disk than a 2x-faster CPU, though I’m not sure about that. Hard drives are in need of some updates, and have not been advancing (in terms of speed) as well as the other integral parts of the home computer.

Not all applications are bound by CPU speed. More than likely you would get performance improvements by investing in more memory, or faster I/O, or a better graphics processor. When things were bound by CPU speed, getting faster chips would show an obvious improvement. These days, the CPU isn’t the limiting factor.

Operating systems also eat much more memory, so there’s more overhead and stuff going on that eats memory. Double the memory on that box, increase your swap space, and see what you get.

Well, that’s the thing - I run XP Pro (which I did on the 1.4GHz) and I got a new motherboard, more RAM, more graphics memory (lots more), a new SATA hard drive. Everything was upgraded, the same way it was when I went to the 1.4 machine in 2001, but there is nowhere near the boost I got back then (and at that point, I went from W2K to XP).

Hard drives and RAM are the bottleneck. They’ve gotten faster over the last ten or so years, but nowhere near as fast as CPUs have gotten faster. The return on system speed you’d get by increasing the speed of your CPU really drops off once you start getting significantly faster than memory.
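A toy back-of-the-envelope model makes the diminishing-returns point concrete (the 40/60 split below is an invented illustration, not a measured figure):

```python
# Toy model of the memory/disk bottleneck: total runtime is compute time
# plus time spent stalled waiting on RAM and disk. A faster CPU only
# shrinks the compute part, so overall gain is capped well below the
# clock-speed ratio.

def total_runtime(compute_s, stall_s, cpu_speedup=1.0):
    return compute_s / cpu_speedup + stall_s

# Invented example: 40 s of real computing, 60 s of memory/disk stalls.
before = total_runtime(40, 60)                 # 100 s
after = total_runtime(40, 60, cpu_speedup=2)   # 80 s
print(f"2x the clock -> only {before / after:.2f}x overall")  # 1.25x
```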

On a related tangent, wouldn’t it be better to max out the RAM and disable the swap file? I mean, I can put up to 4 GB of RAM in my PC (right now I’m sitting at 1). That would get rid of the bottleneck with the hard drive. Or do you NEED to have swap files? I always wanted to build a PC where the OS and installed programs loaded on startup from an image and ran completely in RAM. I understand that in a power failure without a battery backup you would be screwed, but think of the speed.

The OS itself is a hindrance. Also, we are bumping up into the laws of physics limiting just how small features can be produced on silicon and how much power can be used before the thing melts.

Intel and AMD are moving to multiple-core CPUs - two (or more) separate processors on a single chip. Rather than pushing to get a Pentium to run at 6 GHz, it’s a lot easier to grab two 3 GHz chips and link them.

Maximum PC magazine just released their “Dream Machine” for 2005 - their annual no-holds-barred, money’s-no-object computer project. This year, they took a motherboard that holds two dual-core CPUs running at 2.2 GHz each. (Apparently, 2.2 GHz is the top of the dual-core line right now.) Does that equal 8.8 GHz? Nope, and they were a bit disappointed. There’s really no operating system available now that can exploit this potential power.

That sounds almost exactly like what one of my college professors said to me. He was quite certain we would never reach CPU speeds of much more than maybe 40 MHz.

Depends on what you’re doing with your computer… if you’re just doing word processing and other various office activities, then I wouldn’t expect much. If you’re a gamer, especially playing newer games, you should definitely notice; or if you do video editing or some such.

No doubt clock speed will continue to rise, but nowhere near as fast as it did in the past; the rise has slowed down a lot in recent years. Intel released its 3.06 GHz Pentium 4 back in Nov 2002; nearly 3 years later, the fastest-clocked P4 released is only 3.8 GHz.

As others have said, Intel and AMD are depending on multi-core processors from here on out; expect clock speed to creep up only slowly, barring revolutionary new technology. Right now AMD has a big advantage over Intel in the dual-core world: the K8 core was built from the ground up to be dual core, with an integrated crossbar that lets the CPUs communicate with each other directly. Intel, meanwhile, made a big mistake in how far they thought they could push the P4’s clock speed (they expected to reach 10 GHz), and Intel’s dual-core P4s are something of a hack, needing to go through the motherboard chipset to communicate with each other.

AMD also has a big power-consumption advantage: AMD dual-core systems use only slightly more electricity at full load than a comparable Intel system does while idling.

My understanding from doing some light reading around this topic is that nowadays one of the major limits is that software companies just don’t have the resources or inclination to optimise their applications properly. The hardware companies are locked into a vicious arms race with one another, bringing out better and more sophisticated hardware every other week, whereas the applications just aren’t making use of these capabilities.

As an example, symmetric multi-processing is a decade-old technology in the consumer market, but hardly any applications are built to be able to use it. Hence 4 x 2.2 GHz cores run like a 5 GHz processor rather than an 8.8 GHz one. Ditto with all the funky graphics hardware and so on.
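That “4 x 2.2 GHz feels like 5 GHz” observation is essentially Amdahl’s law: if only a fraction p of the work can be spread across cores, the overall speedup is 1 / ((1 − p) + p/n). Working backwards, the quoted numbers correspond to roughly 75% of the work being parallelizable:

```python
# Amdahl's law: with fraction p of the work parallelizable over n cores,
# overall speedup = 1 / ((1 - p) + p / n).

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# If ~75% of the work is parallel, four cores give about 2.3x, not 4x:
print(f"{amdahl_speedup(0.75, 4):.2f}x")  # 2.29x -> 2.2 GHz * 2.29 ~ 5 GHz

# Even with an absurd number of cores, the serial 25% caps you at 4x:
print(f"{amdahl_speedup(0.75, 10**9):.2f}x")  # 4.00x
```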

So depending on what apps you are running, they may be wasting a lot of the horsepower available, or alternatively something might not be configured right.

And don’t forget that Word and similar applications are a bit light and fluffy by modern standards - I can run Office 2003 fine on a 700MHz laptop. Maybe what you are seeing is that the difference between fast and very fast isn’t as noticeable as the difference between slow and fast? :smiley:

This is certainly true.

I upgraded last year from a PIII 733MHz to a PIV 3.0GHz with 1 GB RAM. The basic stuff like word processing and web browsing doesn’t run much faster than before. But if I’m doing batch processing of large digital images in Photoshop, or encoding video for a DVD, the new computer is miles faster than the old one.

What precisely are you using the computer for that doesn’t seem much faster? And how are you measuring this 20%?

Look Here for an explanation of the MHz/GHz myth.

Well, yeah, the clock-frequency myth aside, I’m not comparing AMD and Intel, but I do expect, when I go from a P4 at 1.4 GHz to a P4 at 3.0 GHz, to see a considerable boost in performance. Which I didn’t.
The 20% is a guesstimate, based on time for booting XP Pro, starting Civ III, doing tasks in Photoshop and Illustrator, watching video, and transferring video from AVI to MPEG. Stuff I did on my old computer. The only thing that’s considerably faster is WinRAR (which I complained about a while back).
Moving from the PIII, 766 to the P4 1.4 was like going from a Volvo station wagon to a sports car. It rocked.
This move didn’t.

I agree with ultrafilter… sounds like hard drive or RAM access time are your speed bottlenecks, not the processor. (Something like getting a rocket-propelled race car with rusty axles, except not quite as dangerous. :wink: )

It’s a little more complicated than you’re making it out to be:

First, you have to worry about what effect optimizing for the latest and greatest processor will have on your performance on older machines. If you’re doing any kind of mass market app development, most of your customers are going to be on mid-range or lower machines. You want to keep them satisfied so that they’ll keep giving you money.

Second, you want your developers investing their time in making features work rather than doing a bunch of optimizing, because that’s what your customers really care about. No one’s impressed by an application that crashes very quickly.

So the best approach is to rely on the compiler to do optimization for you. The problem here is two-fold: a) the compiler has to err on the side of being correct rather than being fast; and b) not all code adapts well to parallelized architectures (which is really what we’re talking about here).
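A small illustration of point (b) - why some loops can be spread across cores and some can’t, no matter how clever the compiler is. Both functions below are made up for this thread:

```python
# Why "not all code adapts well to parallelized architectures":
# the first loop's iterations are independent of each other, so they
# could be split across cores; the second has a loop-carried dependency -
# each iteration needs the previous one's result - so it has to run
# serially regardless of how many cores are available.

def independent(xs):
    # Each output depends only on its own input -> parallel-friendly.
    return [x * x for x in xs]

def running_product(xs):
    # Each step multiplies in the previous result -> inherently serial.
    out, acc = [], 1
    for x in xs:
        acc *= x
        out.append(acc)
    return out

print(independent([1, 2, 3, 4]))      # [1, 4, 9, 16]
print(running_product([1, 2, 3, 4]))  # [1, 2, 6, 24]
```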

The very newest technologies matter a lot for scientific computing, but the code has to be written specifically to take advantage of that, and then you lose the ability to easily port it to a different architecture. Again, if you’re developing apps for a wide range of machines, that’s just not a reasonable limitation to have to work with.

The fastest CPU isn’t going to make anything YOU do any faster. It will process MORE instructions, meaning MORE productivity. Whether you’re talking about AMD, Intel, or the 7-core Sony Cell, the gigahertz doesn’t mean it’s faster.

It DOES MORE. Get it?

You don’t see the smooth transitions and fast scrolling in IE between an 800 MHz CPU and a 3.8 GHz CPU because THAT particular instruction set has already peaked (for the most part, of course).

EXAMPLE:

Apple tried the Cell technology; although it was extremely fast at processing certain tasks, namely 3D graphics, it wasn’t efficient enough for other tasks, such as rendering non-vector graphics like hi-def digital video. The same goes for AMD. Tried and tested, but Apple settled on Intel because of the CPU’s multi-tasking capabilities.

Not that AMD or Cell isn’t capable, just not as efficient. AMD has made its mark in the gaming world as THE best CPU for gaming. Same goes for Sony’s Cell technology (as it should, seeing it can handle a theoretical 2 teraflops as opposed to the WHOPPING 1 teraflop that the Xbox 360 boasts!)

They’re all chasing a number to market. The higher the number, the better. They learned that from the auto industry’s habit of boosting horsepower numbers.

SO think of it that way. I bought a 200 hp Mustang in 1989, then a 300 hp Mustang in 1998, then a 420 hp Saleen in 2004, each getting more and more powerful as it gained in HP.

But they still do the speed limit, right?

Do you even know what a hertz measures here?

I *do* get what you’re saying, and I didn’t go out and buy a CPU based on GHz; a lot of thought and reading went into it. But to use your analogy:
I had a two-ton camper in 1989 and a small crappy Isuzu truck with 90 hp. Getting that camper up to the speed limit took forever.
In 1995 I bought a Ford Ranger with 200 hp. Now it only took 20 seconds to reach the speed limit. In 1999, I got a Ford F250, and now we were competing with sedans at the red light, hitting the speed limit in 10 seconds flat.
My latest truck boasts a whopping 600 hp. I expected it to pull the camper to 55 in 5 seconds flat (based on previous experience), but it takes 8 seconds. It’s faster than the F250, but not as fast as I had come to expect.