Was Moore's Law *EVER* true?

Recent news articles have lamented that Moore’s Law seems to be slowing down: computers are no longer doubling in speed every 18 months. (Yes, I know the original Moore’s Law was actually about the number of transistors doubling rather than the speed doubling; but that’s not what most people understand it to mean.)
However, was Moore’s Law ever true in the sense that computers doubled in speed every 18 months? Sure, they got faster, but twice as fast every 18 months? I don’t think so. (And set aside the fact that new versions of software are slower than the old ones; even running the same version of the same software on two computers 18 months apart doesn’t show a 2x difference.)
A lot of computer performance actually depends on the speed of your hard drive, and that doesn’t double nearly as often, so the typical user running a web browser and Microsoft Office probably wouldn’t notice if the processor speed doubled, because the disk would be the bottleneck. But even ignoring that: did the processor alone double in speed every 18 months?

(Disclaimer: yeah, I know that measuring how fast a computer is isn’t as easy as it sounds. I know you can’t just look at the gigahertz.)

I think it’s generally applied to raw processing power. Using a completely compute-bound problem (like Whetstone or a raw FLOPS measurement), and assuming all cores of the processor are used fully, Moore’s Law comes pretty close.
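For instance, here’s a rough sketch of how one might get a raw FLOP/s number for a machine (just an illustration, assuming Python 3 with NumPy installed; the matrix size and repeat count are arbitrary):

```python
import time
import numpy as np

def approx_gflops(n=2048, repeats=5):
    """Time dense matrix multiplies (a purely compute-bound job)
    and report an approximate GFLOP/s figure for this machine."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    # A matmul of two n x n matrices costs roughly 2*n^3 floating-point ops.
    flops_per_run = 2 * n**3
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - start)
    return flops_per_run / best / 1e9

if __name__ == "__main__":
    # NumPy's BLAS backend will typically spread this across all cores.
    print(f"~{approx_gflops():.1f} GFLOP/s")
```

Run the same thing on two machines a few years apart and you get at least a crude apples-to-apples comparison of raw compute.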

And returning to the original definition of number of transistors, take a look at this chart from Wikipedia. It does seem to be amazingly accurate.
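For what it’s worth, the doubling claim is easy to sanity-check with simple exponential arithmetic. A back-of-the-envelope sketch (the starting figures are illustrative, not exact):

```python
def projected_transistors(start_count, start_year, year, doubling_years=2.0):
    """Project a transistor count assuming it doubles every `doubling_years`."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Illustration only: starting from roughly a million transistors around 1990,
# a two-year doubling period predicts on the order of a billion by 2010,
# which is roughly where mainstream CPUs actually landed.
print(f"{projected_transistors(1e6, 1990, 2010):.2e}")  # ~1.0e+09
```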

Honestly, for most of the 1990s, which were my formative years in computing, Moore’s Law largely held.
If you want evidence, try finding some raw compute benchmarks over the years.
You’ll see the 386, then the 486, then the Pentium era, the Pentium II, the Pentium III, then the Pentium IV (as far as Intel goes, that was a bad time…)

I don’t know much about non-i386 architectures, but it seems like the Pentium IV era (introduced November 2000) was when Intel stopped being able to deliver on the promise of Moore’s Law.

Nowadays, instead of making TALLER buildings with more raw CPU speed, Intel is being forced to just build wider buildings… your FPUs and ALUs won’t be faster, but by God, we’ll give you 32 of each and call it an upgrade!

This is important. Moore’s law does not state that computers get twice as fast every 18 months. Rather, it states that the number of transistors we can put on a chip grows exponentially. See Wikipedia for much more information.

Letter of the law, yes; spirit of the law, not so much post-P-IV.

And, really, the problem with scaling apps sideways is that it makes things MUCH harder for developers to optimize.
There are some jobs that are easy to distribute across 12 processors (cores, FPUs or ALUs as the case may be), some that are hard as hell to distribute across 12 processors, and some that simply do not scale past 1 processor regardless of what you do.
Thus, as far as getting your “stuff” done, a 36-core CPU is frequently no faster than a 2-core CPU, and only marginally faster than a 1-core CPU.
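That’s essentially Amdahl’s law: the serial portion of a job caps the speedup no matter how many cores you throw at it. A quick sketch of the arithmetic (the parallel fractions here are made-up examples):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a job parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Made-up workloads: 95%, 50%, and 0% parallelizable.
for p in (0.95, 0.50, 0.0):
    print(f"p={p:.2f}: 2 cores -> {amdahl_speedup(p, 2):.2f}x, "
          f"36 cores -> {amdahl_speedup(p, 36):.2f}x")
```

Even the 95%-parallel job tops out around 13x on 36 cores, the 50% job barely clears 2x, and the serial job gets nothing at all.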

The biggest advantage of that second core, by the way, is that sometimes you can put 99% of the first core onto your actual computing problem and offload system and background work onto the second core.
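As a hypothetical sketch of that split (using Python’s multiprocessing; the names are made up, and in reality the OS scheduler decides which core actually runs what):

```python
import multiprocessing as mp
import time

def background_chores(stop):
    """Stand-in for system/background work (logging, housekeeping, etc.)."""
    while not stop.is_set():
        time.sleep(0.1)  # pretend to do periodic maintenance

def main_compute():
    """Stand-in for the actual computing problem, kept on the main process."""
    return sum(i * i for i in range(10_000_000))

if __name__ == "__main__":
    stop = mp.Event()
    # The OS is free to schedule this worker on the second core,
    # leaving the first core almost entirely to the compute job.
    worker = mp.Process(target=background_chores, args=(stop,))
    worker.start()
    result = main_compute()
    stop.set()
    worker.join()
    print(result)
```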