In 1999, I bought a 500 MHz Dell. In 2003, I bought a 1.9 GHz HP notebook. Now, I’m browsing through the new HP notebooks, and even the more expensive models still have ~2.0-2.5 GHz processors; they just have two of them.
What has caused the shift from increasing raw processor speed to multi-processor systems? Have we hit the wall for raw CPU speed? Or are the gains from multi-processing just more cost-efficient? Or perhaps the newer processors are making gains in other areas (architecture, etc.)?
Personally, I think that the biggest factor is that demand is plateauing. Games drive the demand (all of the other CPU-intensive tasks are too niche to support the industry), and there’s really not all that much difference between a game you can play on a 1 GHz machine and one you can play on a 2 GHz machine. About all that’s left is doing more things at once, and for that, multiple processors work just fine.
CPU fabrication is running hard up against the physical limits of the existing materials with respect to structure size, heat dissipation, electrical isolation, and so on. Although structure sizes are still decreasing (albeit more slowly than in the past), it’s far easier to go parallel than to keep shrinking at this point.
I cannot buy the “demand is plateauing” argument. I am playing Call of Duty 4, and it is all my 2.4 GHz machine can do to keep up. It is a very demanding game.
On the other hand, as people migrate to game platforms (X-Box and all that) for games, the demand for ever-faster computers ought to slacken.
There just doesn’t seem to be a broad consumer market for huge CPU speeds at the cost of convenience. I guess you could say that certain physical limitations play a role in this. Mobility, power saving, and heat dissipation are what companies want to focus on right now, and multiple processors just make more sense from an efficiency standpoint.
My PC has a quad-core, and, to be perfectly honest, 75% of it is completely wasted. It would have been a better investment to get a better video card. Modern laptops and tablets can’t go too high with their components due to battery technology still being stuck in the 60s and size considerations.
Fringe effects are the cause of speed growth slowing.
The main parameter of a semiconductor process is feature width. This is basically the width of the smallest line that you can make. We will call this L. This goes down year by year. However, this decrease is slowing as it becomes more and more expensive to get to the next process shrink.
Speed up for processors comes mostly from making the transistors smaller.
At a first approximation the current drive of a transistor is proportional to its width. This current has to be used to charge and discharge capacitors.
drive = d*L
d is some constant that does not change all that much with changing feature size.
The capacitors that are charged and discharged are associated with the transistors and the interconnect wiring. To a first approximation this is proportional to the area of things.
capacitance = f*L*L
f is some constant that does not change all that much with changing feature size.
Time per clock cycle is proportional to capacitance divided by drive. So, fiddling around with the above:
T = g*capacitance/drive
g is some constant that does not change all that much with changing feature size.
Substituting in the above equations:
T = g*f*L*L/(d*L)
T = g*f*L/d
Frequency is 1/T:
frequency = (1/L)*(d/(g*f))
So frequency goes up as L goes down.
The issue with fringe effects is that the capacitance is not really f*L*L; there is also a fringe term that just has to do with the length of the edges. This term is now becoming more dominant as the area shrinks.
capacitance = f*L*L + e*L
T = g*(f*L*L + e*L)/(d*L)
T = g*f*L/d + g*e/d
So no matter how much smaller you make the transistors, the switching time cannot get much lower than g*e/d. We are starting to get to the point where g*e/d is noticeable, so shrinking the area does not help as much as it did before.
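To make the asymptote concrete, here is a quick numerical sketch of the model above (Python; the constants d, f, g, and e are made-up placeholders for illustration, not real process parameters):

# Toy model of the scaling argument above. The constants d, f, g, e are
# arbitrary placeholders, not real process data.

def clock_period(L, d=1.0, f=1.0, g=1.0, e=0.2):
    """T = g * capacitance / drive, with capacitance = f*L*L + e*L and drive = d*L."""
    capacitance = f * L * L + e * L
    drive = d * L
    return g * capacitance / drive   # algebraically: g*f*L/d + g*e/d

for L in (1.0, 0.5, 0.25, 0.125, 0.0625):
    T = clock_period(L)
    print(f"L = {L:6.4f}   T = {T:.4f}   frequency = {1 / T:.3f}")

# As L shrinks, T flattens out at the floor g*e/d (0.2 here), so the
# frequency stops climbing even though the transistors keep getting smaller.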
Gaming doesn’t make up a large enough percentage of computer sales to generate the demand necessary to overcome the physical restrictions manufacturers are hitting now.
Companies want computers that can better handle multiple programs and switch between them readily. Multiprocessing is what they want.
I think astro has it right. I looked at Moore’s Law and some other related stuff, but I couldn’t find what I was looking for. Hopefully someone involved in that aspect of the industry can enlighten us all.
Faster chips will get hotter; in laptops especially, this is problematic, since their ability to cool themselves is VERY limited. Two chips running at lower speeds will run cooler than one chip at twice the speed. There are dopers better equipped than I to discuss CPU architecture, but you are on the right track.
Desktop machines are very quickly shifting to higher-powered dual-cores and quad-cores as the standard off-the-shelf PC.
So in some ways it is continuing to grow, but by parallel processing rather than raw single-chip speed. In many ways, as software grows more complex and more needs to be running on a given box at one time, this is actually a good thing. Separate cores can dedicate more time to a single task without sharing, allowing you to do four things on a 2.4 GHz quad-core just as fast as your old single core did one.
A single processor which is twice as fast will in general be better for running multiple programs than two processors. Say you are running three programs that each take one hour to run on one of the slow processors and half an hour to run on the fast processor. A dual-core system will take 2 hours to run the three programs. It is very difficult to write programs such that, part way through, they can be switched to the other processor. A single processor will take 1 1/2 hours to run the three programs.
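A quick sketch of that arithmetic (Python, assuming as above that each job is single-threaded and stays on whichever core it started on):

# Tally of gazpacho's example: three jobs, each needing one hour of work
# on a slow core, under the assumption that a job cannot hop between cores.

jobs = [1.0, 1.0, 1.0]                 # hours of work per job on a slow core

# Two slow cores: two jobs run in parallel for 1 hour, the third runs after.
dual_slow_cores = 1.0 + 1.0            # 2 hours until everything finishes

# One core at twice the speed: 1.5 hours of total work, done back to back.
one_fast_core = sum(t / 2 for t in jobs)   # 1.5 hours

print(f"two slow cores: {dual_slow_cores} h, one fast core: {one_fast_core} h")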
Anyway, there’s more than one way to make individual processor cores go faster: better pipelining, speculative execution, and reducing the clock cycles an instruction takes come to mind. A 2.4 GHz core today is faster than a 2.4 GHz core of a few years ago.
As for plateauing, the mass-market industry is embarking on a transition from 32 bit to 64 bit and once that gets going, you ain’t seen nothing yet!
Sure, there are games that require that, but how much difference is there between Call of Duty 4 and Call of Duty 3? I’m sure that 4 has better graphics, more detailed physics, etc. than 3, but is it enough of a difference to justify the added cost for most gamers?
Quoth gazpacho:
From what I’ve seen of my dual-processor Mac’s performance stats, it does it all the time, and with programs that were written with no regard whatsoever for multiple processors: The OS just handles all of the switching. Maybe this is an architecture difference between Macs and PCs, or something.
What’s been alluded to in this thread but not stated outright is that the major constraint on CPU design is what the final product will cost. After all, it doesn’t matter that you’ve got the fastest, most energy-efficient chip out there if no one can afford to buy it. Up until about 2006, it was very cost efficient to make faster chips, and that’s what was done. Now making a single chip faster is becoming much more expensive, and as a result there’s not as much progress on it. However, it is fairly cheap to get extra speed by putting a bunch of cores together, and so that’s the direction that architectures are heading.
The OS can automatically parallelize different processes, and sometimes multiple threads within the same process. But if a program was written to be single-threaded, it’s going to be run single-threaded, and that’s that.
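A rough illustration of that distinction (Python; the workload is an arbitrary placeholder): a single-threaded loop occupies one core no matter how many exist, while launching separate processes lets the OS spread them across cores.

# The loop itself is single-threaded and will sit on one core; launching it
# as separate processes lets the OS schedule each copy on its own core.

import multiprocessing as mp

def busy_work(n=10_000_000):
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Single-threaded: four runs back to back on one core, the others mostly idle.
    serial = [busy_work() for _ in range(4)]

    # Four separate processes: free to land on four different cores.
    with mp.Pool(processes=4) as pool:
        parallel = pool.map(busy_work, [10_000_000] * 4)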
The 64-bit change won’t be anything compared to the effects of widespread parallelism. I don’t know how much it matters for the end user, but scientific computing is currently undergoing a revolution due to parallelization, and that’s just getting started.
But a dual core will enable your computer to remain responsive while a compute-intensive process chugs along in the background. This works better than pre-emptive multi-tasking on a single core.
I don’t buy it either. No one in this thread has mentioned video processing. It’s really frustrating to wait hours for transcoding. I think CPUs could increase in speed by an order of magnitude and it would not even scratch the surface of what we need for video.
Another factor making raw CPU speed less directly relevant to current performance is that over the last few years most of the graphics-processing heavy lifting has been offloaded to video cards with very powerful GPU subsystems that are virtual PCs unto themselves, often with more raw processing power than the motherboard CPU. Once graphics was offloaded, the overall processing demands on CPUs eased considerably.
As a result, we’re seeing a lot of non-graphics computation being given to the GPU, to the point where AMD is considering building a high-end GPU into their CPUs.
I’m not sure how to reconcile this to my experience… I have a program which I wrote which, on a typical run, can take days to complete. It might have some small amount of multithreading added by the compiler, but there’s definitely none in the source code, since I know I didn’t put any in there. When I run a single copy of it on my dual-processor machine, and check top, it takes up approximately 100% of one processor, with the other one running various dribs and drabs of other programs and staying mostly idle. But when I run three copies of it simultaneously, each copy uses about 65%. All three finish at close to the same time, and the time taken is consistent with assuming full utilization of both processors. The only explanation I can think of is that at any given moment, each processor is running one copy with one being left out, and that the processors hand off the programs on a timescale shorter than top averages over so that each of the three processes spends about the same amount of time waiting.
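For what it’s worth, the numbers line up with that explanation (a back-of-the-envelope check, in Python):

# 2 CPUs shared evenly among 3 equally hungry processes gives each process
# about 2/3 of a CPU, close to the ~65% that top reports.

cpus, processes = 2, 3
print(f"average share per process: {cpus / processes:.0%}")   # ~67%

# Total work is 3 runs' worth of CPU time spread over 2 CPUs, so all three
# copies finish together at roughly 1.5x the single-copy runtime.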
Keep in mind that some of the increase in CPU speed a few years ago came from chip makers producing chips that came out of the box in what was essentially overclocked mode. Hence the necessity of big heatsinks and fans for everyone. This disguised the falloff in true performance gains for a while.