True in general, but not in all cases. I work with CG & A/V applications.
Applications that are optimised for SSE2 will do better on the P4.
Applications that are floating-point intensive will give the edge to the Athlon over the P4.
Just head over to CGTalk.com's hardware forum or 3Dluvr.com's Techbits. In general comparisons between the top-of-the-line Athlon XP and P4, each comes out on top depending on what type of instructions are being measured.
Not to mention pipeline length; Intel is achieving these higher clock speeds at least in part by shortening the individual stages of the pipeline but increasing their number. Since each instruction only progresses one stage per clock cycle (assuming no cunning optimisation tricks like data forwarding are used), the actual time taken to process an individual instruction can increase, depending on how the increase in pipeline length compares with the increase in clock speed. This is partly why early P4s were outperformed by nominally slower Athlons, which have a shorter pipeline.
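To put rough numbers on that (the stage counts and clock speeds below are made up purely for illustration, not real Athlon/P4 figures), here is the arithmetic in Python:

    # Hypothetical figures only: per-instruction latency vs. pipeline depth.
    # A deeper pipeline allows a faster clock, but an instruction still has
    # to pass through every stage, so its end-to-end latency can go *up*.

    def instruction_latency_ns(stages, clock_ghz):
        """Cycles to traverse the pipeline divided by cycles per nanosecond."""
        return stages / clock_ghz

    short_pipe = instruction_latency_ns(stages=10, clock_ghz=1.4)   # ~7.1 ns
    long_pipe  = instruction_latency_ns(stages=20, clock_ghz=2.0)   # 10.0 ns

    print(f"short pipeline: {short_pipe:.1f} ns per instruction")
    print(f"long pipeline:  {long_pipe:.1f} ns per instruction")

The deeper pipeline wins on raw clock speed, yet each individual instruction spends more wall-clock time in flight.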
dead badger, we are not concerned all that much with how much time it takes to process a single instruction, but rather with how many instructions can be processed in a second (OK, we kinda care about both).
The longer the pipeline, the more instructions are being processed at the same time … so we're not losing anything, but we're not winning either.
For code that does not generate many hazards in the pipeline, it seems that the longer the pipeline, the more efficient it is, as it's processing more instructions at the same time.
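A toy model of an ideal, hazard-free pipeline shows the point: after the initial fill, one instruction retires per cycle no matter how deep the pipeline is (the depths below are arbitrary, just for illustration):

    # Minimal sketch of an ideal, hazard-free pipeline: after the initial
    # fill, one instruction completes per cycle regardless of depth.

    def cycles_to_run(n_instructions, stages):
        """Fill the pipeline once, then retire one instruction per cycle."""
        return stages + (n_instructions - 1)

    for stages in (5, 10, 20):
        total = cycles_to_run(n_instructions=1_000_000, stages=stages)
        print(f"{stages:2d} stages: {total} cycles "
              f"({1_000_000 / total:.6f} instructions per cycle)")

In this idealised case throughput is essentially one per cycle whatever the depth; the catch, as mentioned below, is what happens when the pipeline has to be flushed.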
Moore's Law started in 1958 and will come to its end in 2020. But it had a good run, right? There's also another law called the Law of Accelerating Returns. It basically says that innovations in technology are going to occur faster and faster until you literally have things like computer speeds doubling every second. Where it goes from there we can only speculate. Computer speeds aren't slowing down anytime soon. Sure, the processors we use now are going to reach their limit, but they will be replaced by things like quantum and DNA computers. And boy, they are going to be fast…
Currently the fastest-clocked chip available in the PC market is Intel's 3.06 GHz P4. This chip has already been overclocked to over 4 GHz on a stable machine ( http://www.tomshardware.com/cpu/20021216/index.html ), which indicates to me that even the current chip architecture has room to grow.
That being said, clock speed is not the be-all and end-all, as clearly evidenced by AMD's lower-clocked chips, which are able to keep up with Intel's faster offerings in terms of processing power. In an effort to make their chips more efficient, Intel developed Hyper-Threading ( http://www.tomshardware.com/cpu/20021114/index.html ), a new logic for their chips that allows a single chip to act as two virtual chips.
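If you are curious whether your own box exposes that "two virtual chips" behaviour, a quick present-day check with the third-party psutil package looks something like the sketch below; it is purely illustrative, and the exact counts depend on the CPU:

    # On a Hyper-Threading system the OS reports more logical CPUs than
    # there are physical cores. Requires the third-party `psutil` package.
    import psutil

    logical  = psutil.cpu_count(logical=True)
    physical = psutil.cpu_count(logical=False)

    print(f"logical CPUs:  {logical}")
    print(f"physical CPUs: {physical}")
    if logical and physical and logical > physical:
        print("SMT/Hyper-Threading appears to be enabled.")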
It is clear that as transistor size shrinks, clock speeds will be able to increase, but the whole equation is a delicate balance of voltage, clock speed, transistor size, heat dissipation and command logic. In all likelihood, a revolution of sorts will occur that makes the current way we think about processor speed obsolete (in much the same way that the Pentium obsoleted the 486).
It won't be long before we are using 64-bit operating systems, and not long after that before we are using 128-bit operating systems. CPUs will be scaled to match, with faster, larger-bandwidth pipes, more RAM running at faster clock speeds through fatter pipes, and better peripherals.
Things will only get faster in terms of processing power, but as for us users, what will it look like? Most likely it will look the same. We take for granted these beasts of machines that we harness to do our labor. 10 years ago my 386 made me wait just about as long as my current machine does. 10 years from now, I’ll still be wishing for a faster machine.
It is already clear that processor speed makes only a minor difference in overall computer speed. At this point, disk access and memory access are the limiting factors. Faster memory chips, faster bus speeds and multiple-disk access are the only ways of significantly speeding things up. Back in the '60s, the ILLIAC IV, a research computer at the University of Illinois, had a disk stack with multiple heads. If the bytes are spread across many platters, each with several heads, I imagine you could speed things up enormously. Whether such a thing could be put into a PC is another question. Or there could be a totally different way of storing data (holographic, maybe?). I am using an 850 MHz computer and I don't see what difference a faster one would make to me. It will still take up to a minute to submit this reply.
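The striping idea is easy to sketch: spread the bytes round-robin across N independent heads/disks so each one only streams 1/N of the file. The 10 MB/s transfer rate and the 8 MB file below are invented numbers, just to show the scaling:

    # Rough sketch of striping: spread a file's bytes round-robin across N
    # independent heads/disks so each one transfers only 1/N of the data.

    def stripe(data, n_disks):
        """Return n_disks byte strings, byte i going to disk i % n_disks."""
        return [data[i::n_disks] for i in range(n_disks)]

    def transfer_time_s(bytes_per_disk, mb_per_s=10):
        """Time for one disk to stream its share at a (made-up) 10 MB/s."""
        return bytes_per_disk / (mb_per_s * 1_000_000)

    data = bytes(8_000_000)           # an 8 MB file of zeros, for illustration
    for n in (1, 4, 8):
        stripes = stripe(data, n)
        print(f"{n} disk(s): {transfer_time_s(len(stripes[0])):.2f} s "
              "if all heads read in parallel")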
I’m sure you’re referring only to those tasks which don’t saturate the current processors. It would sure make a lot of difference if you had to render a radiosity-enabled scene in 3dsmax.
Anyway, your example of reply submission is flawed. That depends on network/site congestion plus your bandwidth; the processing required is trivial enough that the computer itself can be neglected.
But whenever a branch prediction fails, you must flush and then wait to refill that long pipeline. That can waste more clock cycles than you gain in efficiency from the longer pipeline. I guess that is what you are referring to by "hazards."
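As a rough illustration (all the branch frequencies, misprediction rates and penalties below are invented), here is how a flush penalty that grows with pipeline depth eats into the ideal one-instruction-per-cycle throughput:

    # Hypothetical numbers: how a flush on every mispredicted branch eats
    # into the one-per-cycle ideal, and why the cost grows with depth.

    def effective_cpi(branch_freq, mispredict_rate, flush_penalty_cycles):
        """Base CPI of 1 plus the average stall cycles added per instruction."""
        return 1 + branch_freq * mispredict_rate * flush_penalty_cycles

    shorter = effective_cpi(branch_freq=0.2, mispredict_rate=0.1,
                            flush_penalty_cycles=10)   # shorter pipeline
    deeper  = effective_cpi(branch_freq=0.2, mispredict_rate=0.1,
                            flush_penalty_cycles=20)   # deeper pipeline

    print(f"shorter pipeline: {shorter:.2f} cycles per instruction")
    print(f"deeper pipeline:  {deeper:.2f} cycles per instruction")

With these made-up figures the deeper pipeline loses twice as many cycles per mispredicted branch, so it needs a correspondingly higher clock just to break even.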
The point I was trying to make is that, when we reached about 500 MHz+, it was clear that there wasn't TOO much need for more speed for the common rabble. Still, the common rabble continued to upgrade their processors. When we reached about 1 GHz+, even the GAMERS didn't need much more speed. Now, only the people who do stuff that requires practically inexhaustible CPU speed need anything faster. While editing that Photoshop image might take you a couple of seconds, just loading and saving the image would take far, far longer due to the hard-drive bottleneck.
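A quick back-of-the-envelope comparison (every number below is invented) shows why the disk dominates in that kind of workload:

    # Back-of-the-envelope only, with invented numbers: loading a large
    # image from disk can dwarf the time the CPU spends filtering it.

    image_mb          = 50      # uncompressed image size (assumed)
    disk_mb_per_s     = 20      # sustained hard-drive transfer rate (assumed)
    cpu_ops_per_pixel = 50      # cost of the filter (assumed)
    pixels            = 3000 * 2000
    cpu_ops_per_s     = 1e9     # a ~1 GHz machine doing one op per cycle (rough)

    load_time   = image_mb / disk_mb_per_s
    filter_time = pixels * cpu_ops_per_pixel / cpu_ops_per_s

    print(f"load from disk: {load_time:.1f} s")
    print(f"CPU filter:     {filter_time:.1f} s")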