Increases in actual computer processor speed vs. Moore's Law

Moore's Law says we roughly double our processing speed every 2 years. So why is it that 5 years ago I bought a top-of-the-line iMac with a 3.4 GHz processor, and now the top-of-the-line iMac only has a 3.5 GHz processor? Shouldn't the number be somewhere around 7 GHz after 4 or 5 years? We've been stuck in the 3-point-somethings for many years now, even though I know computers are obviously still getting faster. How does this math work?

Moore's Law said nothing about processing speed; it was about the number of transistor components per square inch of integrated circuit.

It's getting very hard to build processors that can handle more than about three and a half billion clock cycles per second. Most of the processor improvements past that level are in parallelization: chips that maintain that kind of clock speed in each of several cores, instead of pushing the raw clock rate higher.

The "law" is still basically correct - overall processing power really is still rising at a steady rate; it's just coming from more cores and more instructions per clock rather than from a higher clock speed.
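To put rough numbers on that (all of these figures are made up for illustration, not the specs of any particular chips): total throughput is roughly clock × cores × instructions per clock, so it can keep roughly doubling while the clock barely moves.

```python
# Rough illustration with hypothetical numbers: throughput keeps climbing even
# though the clock is stuck around 3.4-3.5 GHz.

def peak_throughput_gips(clock_ghz, cores, instructions_per_clock):
    """Crude peak throughput in billions of instructions per second."""
    return clock_ghz * cores * instructions_per_clock

older = peak_throughput_gips(3.4, cores=4, instructions_per_clock=2)  # hypothetical older chip
newer = peak_throughput_gips(3.5, cores=4, instructions_per_clock=4)  # hypothetical newer chip

print(f"older: {older:.0f} GIPS, newer: {newer:.0f} GIPS, ratio: {newer / older:.1f}x")
```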

I joined this thread just to make that very point (but obviously I was too slow).

But this brings up an interesting question that I don’t know the answer to - is Moore’s comment (that the number of transistors we can build per square inch tends to double every year - later changed to every two years) still in effect?

For those who might have missed the recent story, IBM Research builds functional 7nm processor.

Giddy-up.

Roughly, yes.

Another sink for these extra transistors (other than additional cores) is bigger and more complex memory caches. Another rule of thumb I've heard is that if n is the current year, and n > 2000, there will be about (n - 2000)/4 levels of cache on-chip.
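Just the arithmetic of that rule of thumb, for what it's worth:

```python
# Rule of thumb from above: roughly (n - 2000) / 4 levels of on-chip cache in year n.
for year in (2004, 2008, 2012, 2016):
    print(year, round((year - 2000) / 4))   # 1, 2, 3, 4 levels
```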

The end of clock speed scaling is primarily due to practical limits on dissipating the heat. Dynamic power in CMOS scales with capacitance × voltage² × frequency, and since you have to raise the voltage to hit higher frequencies, power climbs much faster than linearly with clock speed. Also, as we approach the end of Moore's Law, the transistors are getting leaky, leading to even more power dissipation. Multiple cores can buy back speed, but only if most of the computation is parallelizable. Otherwise, you fall victim to Amdahl's Law, which says that you are ultimately limited by the part of the computation that has to run serially.
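To make Amdahl's Law concrete (a quick sketch; the 90% parallel fraction is just an assumed example):

```python
# Amdahl's Law: if a fraction p of the work parallelizes across n cores,
# overall speedup = 1 / ((1 - p) + p / n).

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assume 90% of the computation parallelizes; 10% is stuck running serially
for n in (2, 4, 8, 64, 1024):
    print(f"{n:>4} cores -> {amdahl_speedup(p, n):5.2f}x")
# 1.82x, 3.08x, 4.71x, 8.77x, 9.91x -- no core count ever beats 1 / (1 - p) = 10x.
```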

Yep. It was stated back in the days when memory chips were the state of the art ICs, not microprocessors.

Just to make it clear… the numbers you're referring to (3.4 GHz) are processor clock speeds, and they refer to how many times per second the processor can perform a "cycle." In other words, as chrisk says, 3.5 billion cycles per second.

So what's a cycle? It's one tick of the processor's clock - the basic beat at which instructions get executed. That seems straightforward, but the catch is how many cycles it takes the processor to carry out a given operation, and how many instructions it can handle per cycle.

Back in the old 286 days, it took around twenty cycles to multiply two numbers; by the 486 days, processors could multiply two numbers in a single cycle - so the 486 would be about 20 times faster at that operation even running at the same clock speed. This trend continues, with modern processors doing more and more per cycle.
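Putting numbers on that (the cycle counts come from the example above; the 25 MHz clock is just an assumed common figure):

```python
# Operations per second = clock rate / cycles per operation.
clock_hz = 25e6        # assume both chips run at 25 MHz for the comparison
cycles_per_mul_286 = 20
cycles_per_mul_486 = 1

muls_286 = clock_hz / cycles_per_mul_286   # 1.25 million multiplies per second
muls_486 = clock_hz / cycles_per_mul_486   # 25 million multiplies per second
print(muls_486 / muls_286)                 # 20.0 -- same clock, 20x the multiply throughput
```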

This is just one oversimplified example of how there’s more to computer “speed” than processor clock speed.

We’ve gone from the Cray-1 supercomputer costing about 10 million dollars in 1977 to smartphones you can get for free that kick the Cray’s ass. It’s hard to imagine what computing will look like in another 50 years.

I guess about a decade ago I heard that we were rapidly approaching a wall in processing speed not because of the design of the processors, but because of their size.
We were rapidly approaching a point where our desktop computers would not be able to run any faster because of the time it takes for data to move from the memory to the processor and back; once we hit that wall, making the processors faster would be pointless, since they couldn't run any faster than the information could travel.

By "rapidly approaching" I mean we should have reached that wall by now, so I suspect that's what you are observing. They could build a faster processor for a PC, but since it wouldn't perform any better, why bother?
People who want to build even faster PCs are instead putting their R&D dollars into changing the architecture in a way that will reduce the distance data has to travel, and the rest of the PC industry is enjoying the fact that processors from a few years ago aren’t going obsolete, but are still getting cheaper every year.

No, that's a bad assertion.
Light travels at about 1 ns/foot. Electrical signals on a PCB travel at maybe 70% of that speed (call it 1.4 ns/foot). So even if the distance between the CPU and cache (the only memory that really matters for speed) were a full 1", propagation delay alone would still allow a theoretical memory clock speed of around 8 GHz. And the actual distance is going to be much less than 1".
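The back-of-the-envelope version of that calculation (the 70% propagation speed is just an assumed ballpark):

```python
# If propagation delay were the only limit, what clock rate would a 1" CPU-to-cache
# distance allow?
NS_PER_FOOT_IN_VACUUM = 1.0      # light covers roughly one foot per nanosecond
pcb_fraction = 0.7               # assume board/package traces run at ~70% of c

ns_per_inch = (NS_PER_FOOT_IN_VACUUM / pcb_fraction) / 12.0
delay_ns = ns_per_inch * 1.0     # one inch of travel
print(f"{delay_ns:.3f} ns -> roughly {1.0 / delay_ns:.0f} GHz")
# ~0.12 ns, i.e. on the order of 8 GHz -- nearly an order of magnitude above the
# ~1 GHz memory clocks mentioned below, and the real distance is far shorter.
```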

Note that current memory clock speeds are on the order of 1 GHz.

But even if this were the limiting factor, you could just make the memory buses wider…

External memory matters too, but not as much, and transfer speeds to it have been improved through high-speed I/O and chipsets that handle memory transfers.

As far as size goes, I've been on more than one project that bumped up against the maximum chip size (limited by manufacturing issues), and that of course is not a problem for clock speed, thanks to careful floorplanning and routing. You don't send a lot of signals from one side of the die to the other.

The next logical thing to do, instead of trying to make a particular processing subunit faster, is to install a vast number of them in parallel. After all, while 4-5 GHz may be an engineering limit right now, even if we could lift that limit and hit 100+ GHz with a diamond substrate or something, there would still be a limit - signals can propagate through logic gates at the speed of light at best (half that or less in real circuits), the memory problems above would make a 100 GHz chip very difficult to keep fed with fresh data, and so on.
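A quick sense of scale for that 100 GHz thought experiment (the half-of-c figure is just the assumption from above):

```python
# At 100 GHz, one clock cycle is 10 picoseconds. How far can a signal travel in that time?
clock_hz = 100e9
cycle_s = 1.0 / clock_hz                  # 1e-11 s = 10 ps
c_m_per_s = 3.0e8
signal_fraction = 0.5                     # assume signals move at roughly half of c
reach_mm = c_m_per_s * signal_fraction * cycle_s * 1000.0
print(f"{cycle_s * 1e12:.0f} ps per cycle, about {reach_mm:.1f} mm of travel")
# ~1.5 mm per cycle -- less than the width of a typical die, never mind the trip out to memory.
```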

That’s what is being done now, but more parallelism is possible. You can stack wafers on top of each other, for one thing. Of course, you can only do so much stacking because you need to get enough amps of power in and heat out.

Anyway, the current direction of computer chip improvements has created a performance crisis, but it isn't a crisis of chip speed - the problem is programming. Massively parallel chips with hundreds of separate processors are entirely doable; you can buy them right now. The trouble is that it's difficult to write the software. New languages exist that are supposed to fix this, but the problem is a chicken-and-egg one: you need a critical mass of programmers using a parallelism-friendly language before enough money gets spent making the compilers, language references, and example codebases not suck. But you can't get a critical mass of programmers to use a language that sucks right now…

That’s the real problem. We have computer chips capable of incredible things, and a lack of software that can really utilize their full power.
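To be fair, the easy cases really are easy. Here's a minimal sketch (plain Python standard library) of farming an independent, CPU-bound loop out to every core; the genuinely hard software is the kind where the pieces depend on each other's results.

```python
# Minimal sketch: spread independent, CPU-bound work across all available cores.
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Stand-in for some CPU-bound work on one independent chunk of data."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 16               # 16 independent pieces of work
    with ProcessPoolExecutor() as pool:     # defaults to one worker process per core
        results = list(pool.map(crunch, chunks))
    print(sum(results))
```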

Getting leaky? I wish - in my world they already are leaky. Twenty years ago there was a test technique that worked by measuring quiescent current, the current drawn after you turn off the clock. ASICs in those days drew microamps, and defects would often cause spikes in that current and thus become detectable. Quiescent current today is orders of magnitude higher. And leaky transistors are faster, so you have to balance power, speed, and voltage. This is often set dynamically, with each chip getting a voltage just high enough to hit its performance target.

In addition to new languages, it has generated a great deal of renewed interest in old languages like Erlang, and pure functional programming languages like Haskell where operation sequencing isn’t important (so you can tell a mob of cores to just go at it).

Languages and toolkits that implement these concepts have been around for a while, since they already apply to general distributed computing. Applying the same techniques to programs running on a single machine is what's new.

Languages aren’t the stumbling block - algorithms are.
Not all problems have an obvious way to parallelize their solution.

Depends on the problem in question. Most things that slow down a modern machine do have an obvious parallel solution.

Realistically, the biggest limitation on computing speed these days is network bandwidth.

The number of users who are limited by actual processing speed is very small, IMHO.

Program load times, for one. Modern software is composed of many separate modules. It's possible to load them in parallel, but it's rarely done because it can lead to many nasty bugs.

Video games - many modern games have serious performance issues. Almost all modern game environments are subdivided into many separate software objects - different portions of the terrain, separate enemies, physics objects bouncing around - but most implementations end up bottlenecked on a single thread in many places. It's obviously possible to parallelize to a greater degree; it's just not done as often as it could be.

Oh, please.
Who loads programs these days?
I have machines that have been up for months, which have my standard suite of programs - email, internet, calendar, contacts, WP, etc, etc. loaded all the time.

Even Photoshop loads in 10 seconds, and I could keep that up and running if I wanted to, as well.