Why do CPUs generate so much heat?

And do they generate more heat while “busy” than “idle”?

Computers run on electricity. As current runs through all the transistors and wiring, the internal resistance (measured in ohms) causes these components to heat up. The tiny transistors in the CPU can only withstand a certain amount of heat before they begin to fry, so computers usually have a heatsink to draw the heat away, and a fan mounted on top to transfer it to the air and blow it out the exhaust port.

In theory, a processor should consume no power when idle. Of course, in reality they do, but it is only a small percentage of the full load power usage. Modern CPUs shut down portions of the processor that are not in use, and modern chipsets can actually disconnect the FSB when the CPU is not needed.

Depends on your definition of ‘idle’ I think. If you got up from your computer and left, with the OS still on, it’s technically ‘idle’ but it’s still doing processing. There’s a lot of background stuff going on you don’t really realize. Even when there’s nothing going on in the OS itself, the processor is still going through its “Fetch, Decode, Execute” cycle.

An ideal switching transistor would consume no power. When it is turned ON, the voltage on the device would be zero so there would be no power used by the device. When it is turned OFF the current would be zero and again no power used by the device.

Transistors are not ideal in that when they are ON there is a small residual collector-to-emitter voltage, and since there is current present there is a small amount of power used by the device. When they are OFF there is essentially no power used because there is essentially no current. This assumes that switching from ON to OFF and vice versa is instantaneous, i.e., zero rise and fall times.

However, the rise and fall times for voltage and current are not really zero, but rather take some definite amount of time. Let’s examine what happens when the transistor is turned ON. The collector current starts to rise and the collector-emitter voltage starts to fall. But they both do this at a definite rate, and so during the transition from full OFF to full ON both voltage and current are present in the collector-emitter circuit. This results in a pulse of power used by the device during the transition time. The same thing happens when the transistor goes from ON to OFF. The power pulse is a function only of the rise and fall times of the device. Now, if you switch at, say, 100 MHz, you get 200 million of these little power pulses per second. However, if you switch at 1000 MHz (1 GHz) you have 10 times that many power pulses in one second, and the energy that has to be dissipated by the device goes up by a factor of 10.

And so, the faster your central processor is clocked, the more energy has to be dissipated and thus the need for heat sinks and fans on high speed processors.
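To put some rough numbers on that argument, here’s a quick sketch using the standard dynamic-power approximation P ≈ α·C·V²·f. The capacitance, voltage, and activity-factor values below are invented purely for illustration, not taken from any real chip:

```python
# Dynamic (switching) power approximation: P = alpha * C * V^2 * f
#   alpha = activity factor (fraction of nodes toggling per cycle)
#   C     = total switched capacitance, V = supply voltage, f = clock rate
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    return alpha * c_farads * v_volts ** 2 * f_hertz

# Hypothetical chip: 10 nF of switched capacitance, 1.5 V supply, 10% activity
p_100mhz = dynamic_power(0.1, 10e-9, 1.5, 100e6)  # clocked at 100 MHz
p_1ghz   = dynamic_power(0.1, 10e-9, 1.5, 1e9)    # clocked at 1 GHz

print(p_100mhz)  # 0.225 W
print(p_1ghz)    # 2.25 W -- ten times the clock, ten times the heat
```

Since power scales linearly with f, every clock-speed bump buys a proportional increase in heat that the heatsink and fan have to get rid of.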

Great information, thanks. I remember my first laptop didn’t even have a fan (if it did, I sure never heard it running). Now my current laptop has a CPU fan that runs every minute for a couple seconds (as long as I keep the input vent clean, otherwise it overheats and shuts down). With the processor speed increasing every few months, how are they going to cool the “next generation” processors? I don’t think the fan I have could take much more abuse. :)

The Intel BTX case standard rearranges the inside of the computer for better case and videocard cooling. Heat-pipe coolers also allow heat from the CPU to be dissipated over a larger area. There is also a resurgence of watercooling as a mainstream cooling technology, with some laptops already sporting a liquid cooling system.

Damn, I lost a post last night when the boards went down.

Well, you’re analyzing a processor from an EE point of view, which oversimplifies the situation quite a bit. A processor is made of a lot of transistors and semiconductor parts, yes, but it’s more than the sum of those parts. I’m assuming when you say ‘idle’, you mean the computer is still on, in an OS of sorts, and just sitting there. Even while this is going on, the computer is still fetching instructions from memory, decoding them, and executing them. It’s possible these instructions are no-op instructions that do absolutely nothing, but they’re still being run, and still use power. The processor goes through this “fetch, decode, execute” cycle continuously as long as it is turned on. Only when a processor is shut down altogether is it not consuming power.

Actually, modern operating systems turn off the CPU when there’s nothing to do, by issuing a HLT instruction instead of running a no-op loop. HLT stops the CPU’s clock until an interrupt occurs.

Of course, interrupts are still happening thousands of times per second, and the hard drive and fans are still moving, so it doesn’t save quite as much power as turning off the computer.

<hijack>
It depends on whether the computer is idling or hibernating, doesn’t it? Well… maybe not. I don’t know that much about OSes; I just happen to be an EE student who likes learning CS quite a bit. So I’m gonna try and illustrate how I think it works. The processor halts itself and just sits there, no instructions, nothing. Now I guess the OS would notify a particular something on the mobo that sends an interrupt to the processor telling it to get up off its ass and start cracking, right? Is that even close? Just curious, sorry for getting so off topic!
</hijack>

How I believe this is done is by shrinking the size of the transistors. Not too long ago, 0.18 micron technology was used to build CPUs; now it’s 0.13 micron. Smaller transistors use less voltage, switch faster, and also generate less heat. So with new CPUs, you are getting more speed for less energy, causing less heat. Not bad, eh? There are of course problems the smaller you get. The most obvious is manufacturing tolerances and consistency; there’s not much room for error at 0.13 microns. Also, I think interference between components starts becoming quite a problem, and if you get small enough, the “laws” generally applied to electronics no longer apply (just like when you get near the speed of light, the ‘normal’ rules of physics no longer apply).

In order to use ordinary lumped-constant (i.e., resistors, capacitors, transistors, etc.) circuit theory, the circuit elements have to be much larger than the wavelength of the signal.

Ignore my previous post. I’m not sure what I was thinking of, but it doesn’t make sense, even to me. In actual fact, the elements have to be small compared to a wavelength. It must be late.

When a gate (at least 2 transistors in CMOS logic) switches from outputting a 1 to a 0, some amount of charge moves from high voltage to low voltage. This dissipates heat. The same happens when it switches from 0 to 1. This is where most of the heat comes from in a running microprocessor.

When device processes shrink in size, the amount of charge stored for an individual signal goes down, lowering the amount of heat generated by an individual gate. Along with the process shrink, the voltage is usually lowered as well, which further reduces the heat generated.
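A rough way to see why the shrink and the voltage drop compound: the energy dumped per switching event is about ½CV². The gate capacitances and voltages below are made-up round numbers for illustration, not measured values for any real process:

```python
# Energy dissipated per 0->1 or 1->0 transition of one gate: E = 0.5 * C * V^2
def energy_per_switch(c_farads, v_volts):
    return 0.5 * c_farads * v_volts ** 2

old = energy_per_switch(2e-15, 1.8)  # hypothetical 0.18-micron gate: 2 fF at 1.8 V
new = energy_per_switch(1e-15, 1.3)  # hypothetical 0.13-micron gate: 1 fF at 1.3 V

print(old / new)  # roughly 3.8 -- each switch burns almost 4x less energy
```

Because the voltage term is squared, even a modest supply-voltage reduction pays off disproportionately.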

If you really want to lower the power of a processor at idle, you try to minimize the amount of switching. At idle you don’t really need much switching, so chip designers can realize large power savings if they have the time to work at it.

The problem of increasing CPU power usage exists because die shrinks cannot keep pace with increasing transistor densities. For example, the Intel P4-HT 3 GHz (800 MHz FSB) processor dissipates 104 W of heat under maximum load. The Intel Prescott 3.6 GHz processor, which has been die-shrunk from 0.13 micron to 0.09 micron, dissipates about 130 W of heat. This necessitates new power supply, motherboard, and case designs, which the BTX spec will accommodate. Eventually, air cooling simply won’t suffice.

If you draw a simple regression curve of energy output per m², as of right now, I believe that processors are putting out as much heat per unit area as a hotplate. Continuing the trend predicts that the energy output will reach that of the surface of the sun by 2030 and the interior by 2050, IIRC.

Obviously, this cannot continue for much longer, so there had better be some very amazing breakthroughs in processor or cooling technology.

There are actually two main sources of active power consumption in CMOS circuits: switching currents and drive currents.

CMOS output circuits dissipate quite a bit of power via their internal resistance as they drive into the input capacitance of other CMOS gates. CMOS gates are primarily AC loads, rather than DC loads. As with switching currents, the power dissipation will depend upon how often the node is changing between the 0 and 1 states.

What do you mean by switching currents, WereOtter? The current when the NMOS and PMOS transistors are both not fully off?

On another slight hijack, does anyone think that heating issues might cause processor manufacturers to finally break away from the binary processors that we’ve been using? It seems that at one point it made sense to use binary processors since it made it a lot less likely for errors to occur. However, it would not be difficult for us now to make a base-4 processor that would be more efficient and have more potential than its binary counterpart.

Some pretty good answers so far. Just adding my two bits.

The shrinking feature size does reduce power usage, but then the CPU makers can fit more transistors on a chip, which increases power usage. The two curves don’t always cancel out.
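To put rough numbers on those two competing curves, multiply the per-gate saving by the growth in gate count and clock speed. All three factors below are invented for illustration:

```python
# Per-gate switching energy falls with a shrink, but transistor count
# and clock speed rise. Very roughly:
#   total power ~ (energy per gate) * (number of gates) * (clock rate)
shrink_energy_factor = 0.5  # hypothetical: each gate burns half as much energy
transistor_growth    = 3.0  # hypothetical: the die now carries 3x the transistors
clock_growth         = 1.5  # hypothetical: and it is clocked 1.5x faster

net_power_factor = shrink_energy_factor * transistor_growth * clock_growth
print(net_power_factor)  # 2.25 -- total heat more than doubles despite the shrink
```

So a shrink only lowers total power if the designers resist spending all of the savings on more transistors and higher clocks, which historically they have not.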

In the 386 and early 486 days, the total power consumption was going down. Now it is increasing quite fast. One major source of increased power consumption (besides clock speed) is that the on-chip caches are growing. These memory circuits consume power (mostly leakage) even when the data is not being used at the moment.

A third consideration is that the extreme overclocking concept has gone from a hobbyist’s passion to the CPU makers’ SOP. The current chips come already pushed near their limits. (So for a hobbyist to add more overclocking takes even more cooling than in the early Pentium days.) You could sell a CPU that needed only passive cooling; it just wouldn’t run at 2 GHz.

Since Average Consumer thinks they need a 3GHz computer to read their email, they end up paying too much for a system that eats a lot of juice. No one ever went broke …