Is Moore's Law slowing down?

I bought an i7 930 not too long after they were out. Since it was already a mid-range filler, there really wasn’t too much of a ‘best thing available’ premium anyway.

I can’t find the receipt for when exactly I got it, but I know it’s been over a year. I was thinking this is about as long as I have gone without drooling over some better chip I could potentially buy. But there really doesn’t seem to have been much advancement in the past year. The available chip in the same price range now seems to be the 950, which is certainly not as much of an improvement over my 930 as Moore’s Law would suggest.
I know the Gulftown 6-cores have jumped to the very high end of processor prices, but the mid-high end seems to have stagnated.
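Just to put rough numbers on what I mean, here’s a quick back-of-the-envelope, treating Moore’s Law loosely as a doubling every two years. The clock speeds and the elapsed time below are guesses from memory, not looked-up figures:

```python
# Back-of-the-envelope: idealized doubling-every-two-years vs. the i7 930 -> 950 bump.
# Clock speeds and elapsed time are rough guesses, not looked-up figures.

def moore_factor(months, doubling_period_months=24):
    """Expected growth factor if capability doubled every two years."""
    return 2 ** (months / doubling_period_months)

elapsed_months = 14        # assumption: roughly how long I've had the 930
clock_930 = 2.80           # GHz, assumed
clock_950 = 3.06           # GHz, assumed

print(f"Doubling-every-two-years expectation: x{moore_factor(elapsed_months):.2f}")
print(f"Observed 930 -> 950 clock bump:       x{clock_950 / clock_930:.2f}")
```

Roughly a 1.5x expectation against a roughly 1.1x clock bump, if those guesses are anywhere near right.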

The global recession can’t be doing any good for research budgets, but are we finally coming to the practical edge of diminishing returns on what a single CPU can do, or is it mostly market demand?

I remember an EE professor back in ’96 saying that processor speed was practically at the wall then because of… cue hour-long talk of electron speed, circuit length, and resistance… And about ten times since then, the wall has supposedly been about to be reached.

Moore’s Law isn’t a straight line, but a jerky one. Just when things look like they have reached their limits, there is a breakthrough.

Maybe it’s like punctuated equilibrium (Gould).

That doesn’t mean there will always be a breakthrough, but it’s dangerous to extrapolate short-term trends into long-term ones.

Not yet. And it seems relatively likely that by the time we’re approaching that point, photon-based CPUs or other witchcraft will come in to keep the curve going.

And, really, the idea of multi-core design is probably more extensible than how we’re currently doing it. A modern multi-core CPU is still trying to fit the general form factor of a regular CPU, slotted on a traditional motherboard. It’s possible that they’ll start slotting multiple CPUs onto a single motherboard, or putting dual motherboards in a PC case. So long as you can keep the machine cool, and figure out how to split tasks out to different hardware components, you can continue to ramp up the total CPU power of a machine until PC cases have to grow to an unwieldy size.
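As a toy illustration of the “split tasks out to different hardware” part, here’s a minimal sketch using nothing but Python’s standard multiprocessing module; the workload is a made-up stand-in, not anything tied to the hardware being discussed:

```python
# Minimal sketch: spreading independent work units across all available cores.
# Standard library only; the work_unit function is a toy CPU-bound placeholder.
from multiprocessing import Pool, cpu_count

def work_unit(n):
    """A stand-in for some CPU-bound task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 32                       # 32 independent chunks of work
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(work_unit, jobs)     # farm the chunks out to the cores
    print(f"{cpu_count()} cores, {len(results)} chunks done")
```

The catch, as posts further down get into, is that only nicely independent work splits up this cleanly.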

Remember, Moore’s Law is not about processor speed. It is about device count on a chip. It was one of Moore’s colleagues who noted, at the time Moore made the observation about density increase, that this would have the effect of improving processor speed in a related manner.

We can see what the man himself thought.

Dual socket motherboards are pretty common, even for desktops. Beyond that, you typically migrate to more expensive processor chips that support the more complex cache coherency needed. Blade server systems pack a lot of cores into a small volume, and indeed, heat is the issue that holds back packing density. There are some pretty amazing high-density server offerings. SeaMicro offer six Atom chipsets on a small card, and pack 768 cores with 1.5TB of memory into a 10U-high cabinet.

I think people are confusing Moore’s “Law” with any improvement. The original was a simple “The number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years.” This is either dead or close to it.
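Written out, the original statement is just exponential growth in device count. A rough sketch of the projection (the 1971 Intel 4004 baseline of about 2,300 transistors is the usual reference point, and the curve here is idealized, not fitted to real chips):

```python
# Moore's Law as stated: transistor count doubles roughly every two years.
# Baseline: the Intel 4004 (1971), usually quoted at about 2,300 transistors.
# This is an idealized projection, not a fit to actual product data.

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2010):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

The 2010 figure lands around a billion and a half, which is at least the right order of magnitude for big chips today.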

Finding new technologies or workarounds to the barriers continues. In the long run I’m sure we’ll see much more innovation from Moore’s Law failing than from trying to keep it going.

Nobody really knows how long Moore’s Law will last. Just within the Wikipedia article on it, there are estimates that it will cease to hold anywhere from less than 10 years from now to about 600 years from now.

The fact that you don’t want a faster computer only shows that you’ve reached the point where your computer does what you personally want it to do well enough.

Yeah, I think rather than reaching the end of Moore’s Law, we’re reaching the point captured by the maxim “Moore giveth, Gates taketh away.” Software has reached a point where the demand for cramming in new abstractions and features is slackening; it’s “good enough” for the public, and they’d rather stick with something their old hardware can handle than go update to a new chip just to run a marginally better product.

We saw this with Windows Vista, which needed fairly new hardware to run well. The public took a look, and largely decided to stick with XP.

Obviously hardware still eventually gets outdated, but I find I’ve been pretty happy waiting 4-5 years between updating my hardware (and then usually only a few components), while I used to find that every 2-3 years I’d find my old system was already running into programs that needed more processing power to run well.

The hard facts nowadays are that the things you are waiting for, if in fact you are actually waiting for anything, are not inside the box on your desk.

When we get significant increases in bandwidth across a more intelligently implemented internet, and cut out the phone companies’ proprietary outlook, there will be an actual need for R&D on the other obvious bottlenecks: heat, and two-dimensional design. Both are big walls.

Tris

Yep. Especially since it was coined when there was no such thing as a “microprocessor”. Back then, it was memory, memory, memory that dominated the IC industry. It’s still a big part of it, but everyone knows that “memory is free”.

I don’t think Moore was talking about memory. Admittedly before my time, but my understanding is that most memory in 1965 was magnetic cores, which aren’t ICs (looking at Wikipedia, they were apparently built by hand by seamstresses threading the wires through!).

You’re right. He wasn’t talking about the then non-existent microprocessors either, though, just integrated circuits in general.

:smack: I should know that. I was thinking more of the early Intel years (1968) and the time when the term “Moore’s Law” was actually coined (1970). When Moore first phrased his future law, he was still at Fairchild, and since he’d be looking backwards, memory products wouldn’t be the players they were 10 years hence.

Recent NYT ARTICLE about hitting the limits of powering all the transistors on chips.

I don’t think anyone is seriously predicting that Moore’s law will continue for 600 years. The paper you referenced is about cosmological limits on the total amount of computation possible in the universe, establishing a ridiculously large upper limit on Moore’s law. What drives Moore’s law is the economic value of making smaller transistors. Enormous investment in R&D has always paid for itself because the dollar value of the ICs has continued to go up even faster. By projecting out those trends, you find that the fuel behind Moore’s law can’t last more than a few more decades, because the semiconductor industry will exceed the extrapolated Gross World Product. Economics will likely bring Moore’s law to an end before the physical limits, but those are very real and will bite us within a decade or two.
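To make the economic argument concrete, here’s a toy extrapolation; every figure in it (starting revenue, gross world product, growth rates) is an illustrative assumption chosen only to show the shape of the argument, not a sourced number:

```python
# Toy extrapolation: an industry growing faster than the world economy must
# eventually exceed it. All starting values and growth rates below are
# illustrative assumptions; the crossover year depends entirely on them.

industry = 0.3           # assumed semiconductor revenue, $ trillions
gwp = 60.0               # assumed gross world product, $ trillions
industry_growth = 1.15   # assumed 15% per year
gwp_growth = 1.03        # assumed 3% per year

year = 2010
while industry < gwp:
    industry *= industry_growth
    gwp *= gwp_growth
    year += 1

print(f"Under these assumptions, the industry would 'overtake' the world economy around {year}")
```

The exact year is meaningless; the point is that an exponential with a higher growth rate must hit the ceiling within decades, not centuries.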

While Moore’s law (more transistors per chip) continues unabated at present, a physical limit has radically changed the way new microprocessors are designed. A few years ago, each generation of microprocessor went at a higher speed. The first microprocessor had a clock speed of 740 kHz; modern chips have speeds several thousand times higher. That is no longer happening. Manufacturers have been forced to just add more transistors without running them at higher speed, not because the transistors won’t go at higher speed, but because they dissipate too much energy if you switch them faster. Thus, the high end is going toward a larger and larger number of cores, rather than higher speed. We’re only beginning to learn how to effectively use multiple computers, despite 50 years of work on parallel computers.
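The power wall described here follows from the usual dynamic-power relation for CMOS, roughly P ≈ C·V²·f, combined with the fact that higher clock rates generally need higher voltage. A sketch with made-up component values:

```python
# Why clock scaling hit a wall: dynamic CMOS power scales roughly as C * V^2 * f,
# and pushing frequency up usually means pushing voltage up too.
# The capacitance, voltage, and frequency values below are made up for illustration.

def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts ** 2 * f_hertz

baseline       = dynamic_power(1e-9, 1.0, 3.0e9)   # a toy "chip" at 3 GHz
double_f       = dynamic_power(1e-9, 1.0, 6.0e9)   # double the clock: double the power
double_f_and_v = dynamic_power(1e-9, 1.3, 6.0e9)   # plus the voltage bump it likely needs

print(f"baseline:            {baseline:.1f} W")
print(f"2x clock:            {double_f:.1f} W")
print(f"2x clock, higher V:  {double_f_and_v:.1f} W")
```

Doubling the transistor count at the same clock is a far gentler power increase than doubling the clock, which is exactly the trade the manufacturers have made.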

The exciting thing is that there is still plenty of room for improvement and many advances yet to come, but it is certainly inevitable that Moore’s law will come to an end. Our expectation of exponential improvement in electronics will become a distant memory as the industry eventually becomes like the steel or concrete business, with continuous but slow refinement. Take heart in the fact that as the number of transistors per chip reaches a plateau, we’ve still got several decades of progress figuring out what to do with all those transistors.

Simplicio beat me to it in commenting that Moore’s prediction came at a time when it was not all about semiconductor memory. To put it in context, the very first semiconductor memory chip (as far as I know) came out in the same year (1965). It had 32 bytes of read-only memory. Each of the 256 bits was set in the factory, individually by a technician breaking links on the chip under a microscope.

I still remember how excited we were 10 years (mid 1970’s) later to get a read-only memory board for our PDP 11/45 (hot stuff at the time). I think it was about a thousand bytes, occupying several square feet of circuit board and consisting of several thousand glass diodes, each soldered individually in the right pattern to encode the boot loader so we no longer had to read it from a paper tape, after manually keying in the paper tape loader using a set of toggle switches!

I salvaged one of those in our museum collection. Also have the 11/45.

There is a whole set of competing issues that make raw individual processor speed difficult. Memory is still a limiting issue, but everyone keeps thinking just in terms of memory capacity. The speed of memory is also a limiting factor for raw processor speed, hence the need for a hierarchy of caches. The speed of level-one cache is one of the critical determinants of processor clock rate. If you can’t feed the CPU with operands, and get the results out again fast enough, the CPU can’t do anything useful. The speed of cache is limited by the time it takes the tags to perform their work, and this is limited not just by switching speed, but by signal propagation time. This is one reason why you don’t see L1 caches getting much bigger.
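To put a rough number on the propagation-time point: the distance a signal can cover in one clock cycle shrinks linearly with clock rate. The effective on-chip signal speed assumed below (a tenth of the speed of light) is only a placeholder, since real wires are RC-limited, but it shows the scaling:

```python
# How far a signal can travel in one clock cycle, assuming an effective on-chip
# propagation speed of ~0.1c. Real wires are RC-limited, so this is only a
# placeholder; the point is that the distance budget shrinks with clock rate.

C_LIGHT = 3.0e8                  # m/s
signal_speed = 0.1 * C_LIGHT     # assumed effective on-chip signal speed

for clock_ghz in (1.0, 3.0, 5.0):
    period_s = 1.0 / (clock_ghz * 1e9)
    budget_mm = signal_speed * period_s * 1e3
    print(f"{clock_ghz:.0f} GHz: ~{budget_mm:.0f} mm of wire reachable per cycle")
```

And the tag comparison logic has to fit into that same cycle, so the wire-length budget across a bigger cache gets eaten very quickly.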

The relationship of processor speed to device count was made by one of Moore’s colleagues (whose name eludes me right now) and was really based upon the observation that (certainly in the ’60s and ’70s) you could make a range of computers with much the same technology, and that the more devices you had, the faster you could make it compute. Back then many computers were microcoded, and very simple machines could be built that were still able to execute the same instruction sets as their much larger, more complex, and faster brethren. The x86 is similar in this respect, although it isn’t microcoded anymore. However, the ability to keep adding smart features to the implementation to make things go faster eventually reaches a limit. Eventually you get speed by duplicating the entire processor. That said, the increases in performance wrung out of the x86 in the latest i7 incarnation left me impressed. But it really does seem that ideas for making a single core go much faster with new smarts are running out.

I’ve heard some people talking about processors with 1000s of cores. Are they talking about dumbing down the individual cores, or would they indeed mean more transistors in the same space?

The feature size is the metric that is related to Moore’s law. This is usually expressed in nanometres. We see 65nm, 40nm, and incipient 24nm processes at the moment. This fixes the number of devices that can be placed on a chip. (The density of devices is also related to the regularity of the system being laid out - very regular geometries like cache memory are denser than arbitrary logic - but the principle is there.) Quite often you see an existing design simply shrunk to a new process. It gets cheaper, because it uses less area - or you can add extra cache to fill up the area - might use less power, and tends to clock faster. (Also the circuits leak more, which makes the gains less useful than before, especially for getting power and heat down.) This is one half of the Intel tick/tock product cycle, the other being a new design. So the answer is that contemporary designs that hope to reach thousands of processors will use relatively simple cores. In context, note that a large chip can contain of the order of one billion transistors. You could drop a thousand 486 cores on a chip with that density. In reality you wouldn’t do that, but you get the idea.
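The arithmetic behind those last two claims, using the commonly quoted ~1.2 million transistors for a 486 and rounding everything else:

```python
# Two bits of arithmetic from the post above. The 486 figure (~1.2 million
# transistors) is the commonly quoted number; everything else is rounded.

def density_gain(old_nm, new_nm):
    """Ideal density improvement from a shrink: area scales with feature size squared."""
    return (old_nm / new_nm) ** 2

print(f"65nm -> 40nm: ~{density_gain(65, 40):.1f}x more devices per area")
print(f"40nm -> 24nm: ~{density_gain(40, 24):.1f}x more devices per area")

transistors_486 = 1.2e6
budget = 1.0e9          # "of the order of one billion transistors" on a large chip
print(f"486 cores fitting in that budget: ~{budget / transistors_486:.0f}")   # roughly a thousand
```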

At the moment, you’re right, but I think there are several technological issues waiting to up the bar yet again. 3D is just now breaking into TVs and is probably not too far from computer devices, especially when we consider how wearing glasses can provide more virtual screen real estate for small mobile devices. And the promise of AI, computer speech, and voice recognition remains mostly untapped. So… it’s only a matter of time until a typo in Microsoft Word pops up a 3D-animated paperclip to say “It looks like you’ve made a typo here, tell me what you meant to write” and can understand your spoken plain-English response.

(I’ll be one of the first people to turn that “feature” off, but you know it’s coming.)

If anyone is interested in this subject, the IEEE Spectrum has a special report honoring the 50 year anniversary of Moore’s Law:
http://spectrum.ieee.org/static/special-report-50-years-of-moores-law#

I’ve certainly found that my last few purchases were decidedly pedestrian: my four-year-old laptop is still more than powerful enough for anything I actually care to do with it.

Windows and Office do seem determined to bloat me into a new one eventually, but even then, I’ll be upgrading because they left me no choice, not because I actually wanted more.