1 billion times per sec?

How does the processor in a computer flash off and on that fast? What is it that does the work, and how?

And I’d like that in one sentence please. :)

Hey, when I hit a groove, that’s how fast I start flashing too.
I have no clue as to the specifics, but light travels at 186,000 miles per second. It travels around the world once every seventh of a second. One can only imagine just how many times per second an electron can circle around a nucleus. Well, I suppose we don’t have to imagine. We know…it’s just that I don’t know. So I’m not going to tell you. But it’s a lot. Big big number and stuff.

I think it has something to do with electricity.

The processor isn’t flashing. It’s just executing instructions. It does that by means of electrical current flowing through it in certain places and being manipulated. I really don’t know much more than that.

Enderw24: “One can only imagine just how many times per second an electron can circle around a nucleus.”

1,289,618,013 billion (classically speaking). Sorry. I know that’s not helpful, since electrons do not move at the speed of light, and just spiraling around nuclei doesn’t get anything done. But it gives you an idea of how fast stuff happens on the atomic level.

Processors don’t flash on and off. They go through cycles, called clock cycles, executing instructions that flow in via a pipeline. The clock frequency is controlled by a very high-tech oscillator. As technology improves, they can make faster oscillators and processors capable of tighter tolerances, timewise.

There is no way in heck I can explain how a CPU works in the space allowed here. Basically, assume you can make certain logical functions out of semiconductor materials, e.g. you can make a logical NOT with a single transistor (if you put in a signal it conducts, pulling the output down, which is the inverse of the input), and you can make an OR with a couple of diodes. Adders and multipliers get a bit more complicated, but it’s the same basic idea.
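Here’s a toy sketch of that composition idea in Python (real hardware is described in languages like Verilog, and these “gates” are just functions, but the principle is the same: a couple of primitives combine into everything else):

[code]
# Toy model: treat a transistor-level NOT and a diode OR as the primitives,
# then compose everything else out of them.
def NOT(a):           # one transistor: input high -> output pulled low
    return 1 - a

def OR(a, b):         # a couple of diodes feeding a common output
    return 1 if (a or b) else 0

def AND(a, b):        # De Morgan: a AND b == NOT(NOT a OR NOT b)
    return NOT(OR(NOT(a), NOT(b)))

def XOR(a, b):        # exclusive or, built from the gates above
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b): # adds two bits: returns (sum bit, carry bit)
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
[/code]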

Early CPUs basically just had a few registers (places to store data) an ALU (arithmetic logic unit, the main thinking part, where all of those logic functions like AND, OR, NOT, EXCLUSIVE OR, etc are actually done), and some control logic to make it all work. The control logic was just a state machine, and the processor would go through various states: instruction fetch (where you fetch a number from memory that tells you what to do next), instruction decode (go and get all the values your ALU needs for this instruction), execute (let the ALU do its stuff), and write back (store the answer somewhere).
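To make that state machine concrete, here’s a toy Python sketch of a made-up three-instruction machine (nothing like a real instruction set, just enough to show the fetch/decode/execute/write-back loop):

[code]
# Hypothetical machine with two registers and three instructions:
# LOAD reg, value / ADD dest, src / HALT.
memory = [
    ("LOAD", 0, 2),   # put 2 in register 0
    ("LOAD", 1, 3),   # put 3 in register 1
    ("ADD",  0, 1),   # register 0 = register 0 + register 1
    ("HALT",),
]
registers = [0, 0]
pc = 0                               # program counter

while True:
    instr = memory[pc]               # FETCH: read the next instruction
    op, args = instr[0], instr[1:]   # DECODE: figure out what it means
    if op == "HALT":
        break
    if op == "LOAD":                 # EXECUTE and WRITE BACK
        registers[args[0]] = args[1]
    elif op == "ADD":
        registers[args[0]] += registers[args[1]]
    pc += 1

print(registers)                     # [5, 3]
[/code]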

Modern CPUs do the same thing, except they have specialized bits of hardware to do each part. One bit of hardware fetches the instruction, another executes it, so that while the first instruction is being executed a second one is already being fetched (this is called pipelining, and it makes things go a lot faster).
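The payoff is easy to see with back-of-the-envelope numbers (invented for illustration):

[code]
# Say each of the 4 stages (fetch, decode, execute, write back) takes
# one clock cycle, and we run 1000 instructions.
stages, n = 4, 1000

unpipelined = n * stages       # each instruction waits for the one before it
pipelined = stages + (n - 1)   # fill the pipe once, then one result per cycle

print(unpipelined, "cycles without pipelining")  # 4000
print(pipelined, "cycles with pipelining")       # 1003
[/code]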

Just to get an idea of the complexity of a modern processor, a Pentium has an instruction prefetch queue, two execution pipelines (so it can execute two instructions simultaneously), a third pipeline just to handle floating point math instructions, and caches all over the place to help speed things up. The basic concept is the same as on the old 8088, but the details of how it does it have gotten extremely complex over the years.

Registers (little places to hold data) are used to control the flow of stuff through the CPU, and registers work by latching data on clock cycle transitions. The clock pulse goes from low to high, and the data is latched into a register. Then, each stage goes off and does its stuff. There is a thing called “propagation delay,” which is basically the time from when you change the inputs to a logic circuit to when the outputs are guaranteed to have the right value (transistors don’t switch instantly, so you need to give them all time to do their thing). How fast a CPU can run is determined by its slowest logic stage, so for a 1 GHz CPU, the slowest part has to finish doing what it needs to do in 1 nanosecond. That’s when the next clock pulse comes in.

That’s not a lot of time.
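To put numbers on it (the stage delays below are invented, purely for illustration), the clock period has to be at least as long as the propagation delay of the slowest stage:

[code]
# Hypothetical propagation delays for each logic stage, in nanoseconds.
stage_delays_ns = {"fetch": 0.6, "decode": 0.4, "execute": 1.0, "writeback": 0.5}

slowest = max(stage_delays_ns.values())  # the clock period can't be shorter
max_clock_ghz = 1.0 / slowest            # 1 / period in ns = frequency in GHz

print(f"slowest stage: {slowest} ns -> max clock about {max_clock_ghz:.1f} GHz")
[/code]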

I’m obviously glossing over the details. Patterson and Hennessy wrote an excellent book on computer architecture if you want to know more about it.

Electrical signals travel at close to the speed of light: 300,000 km/sec.

That’s the same as 300 km per 1/1000 second.

That’s the same as 300 meters each millionth of a second.

That’s the same as 30 cm (about one foot) each billionth of a second.

Given how big the typical chip is, and how far the circuits are from each other, a billionth of a second sounds like quite a long time now, doesn’t it?
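You can run the numbers yourself:

[code]
c = 300_000_000.0  # speed of light in meters per second (rounded, as above)

for clock_hz in (1e6, 1e9, 100e9):    # 1 MHz, 1 GHz, 100 GHz
    per_tick_cm = c / clock_hz * 100  # how far light gets in one clock tick
    print(f"{clock_hz:.0e} Hz clock: light travels {per_tick_cm:,.1f} cm per tick")

# At 1 GHz that's about 30 cm per cycle; at 100 GHz it's only 3 mm,
# and suddenly the chip itself starts to look big.
[/code]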

Achernar, when you say 1,289,618,013 billion, do you mean 1,289,618,013,000,000,000? That’s just a fantastically big number. What is that, 1.3 quintillion?

Yep. 1.3 × 10[sup]18[/sup], or 1.3 quintillion. But like I said, don’t expect any 1.3 billion gigahertz processors any time soon.

I wonder if this great explanation is being eyeballed word for word this morning by a teacher somewhere in this great nation of ours…Naaah, that would be too cynical. ;)

Not a bad overview. If anyone is interested in something slightly more detailed, without going to Patterson & Hennessy, Cecil’s little helper Karl wrote a brilliant piece a while back:
How do computers work?

Thanks, all, for the replies; I will have a look at some of the web page/book references given.
And just for clarity: when I say “flash on and off,” in essence I meant the sending of the ones and zeros that make up the info being sent. How does it send ones and zeros (or +/−5 volts) so fast and non-randomly? Light does not have a choice.

Electricity travels near the speed of light (to discuss that, see this recent thread). The main limiting factors are the electrical loading effects placed on the digital signal by the medium through which it rides. You seem to be aware that TTL data is transmitted through semiconductors & circuit board traces as a 0-5 volt switching square wave (or -5 to zero, depending on the design). It is not difficult at all to design oscillators that can produce clock pulses in the GHz range; what is difficult is manufacturing semiconductors that can “cleanly” switch at these speeds.

Example of square waves

The second graph represents a voltage level transition from zero to 5 V. In order for a transistor to switch from a 1 to a zero, it must be able to handle a voltage transition from 5 V to zero as quickly as possible. The time it takes for a MOSFET to deplete its channel of electrons (thereby switching from ON to OFF or vice versa) can be anywhere from tens to hundreds of nanoseconds. I have no idea what the latest & greatest recovery time is these days for MOSFETs, but as these times get shorter, hardware will be able to clock data through at higher & higher speeds.

Another problem, and not as easy to deal with, is the reactive properties of conductors. The tiniest bit of stray capacitance or inductance will distort a square wave by acting as a filter. Too much capacitance suppresses high frequencies, causing a nice sharp square wave to look like the “undercompensated” wave in the page I linked to. Too much inductance will create spikes & electrical ringing and result in the “overcompensated” wave. If those distortions get too extreme, the semiconductors won’t know what to make of them. What makes stray reactance so hard to deal with becomes clear when we consider that capacitance exists wherever two conductive materials come close to each other without actually touching. So, for example, there is a certain amount of stray capacitance between the circuit board and the metal chassis of a computer. Now, look at the formula for capacitive reactance (X[sub]c[/sub]):

X[sub]c[/sub] = (2πfC)[sup]-1[/sup]

Where f is the signal frequency and C is capacitance in farads. As you can see, as frequency increases, X[sub]c[/sub] decreases, presenting the signal with a low-“resistance” shunt path to ground. Stray capacitance is very difficult to design out of RF circuitry without going 100% optical from end to end, but rest assured that as soon as engineers can figure out a cost-effective way to do it, we’ll all be rushing out to buy 100 GHz processors whether we need them or not.
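To see why that bites harder as clocks climb, plug some plausible (made-up) stray values into the formula:

[code]
import math

C = 10e-12  # 10 picofarads of stray capacitance (an invented figure)

for f in (1e6, 100e6, 1e9, 10e9):     # 1 MHz up to 10 GHz
    Xc = 1.0 / (2 * math.pi * f * C)  # capacitive reactance in ohms
    print(f"{f:.0e} Hz: Xc = {Xc:,.1f} ohms")

# At 1 MHz the stray path looks like ~16 kilohms (harmless); at 10 GHz
# it's under 2 ohms -- practically a short circuit to ground.
[/code]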

Ah, but according to Moore’s “Law” it should happen in about…oh…45 years.

Moore’s law refers to transistor counts (and hence overall computing power), not clock speed. For example, I’ve been told that one of AMD’s 1 GHz chips has the equivalent computing power of a 1.2 GHz Pentium 4.

NITPICK:off

[QUOTE]
*Originally posted by Attrayant*
It is not difficult at all to design oscillators that can produce clock pulses in the GHz range; what is difficult is manufacturing semiconductors that can “cleanly” switch at these speeds.
[/QUOTE]

This is my question! I believe you that it is not difficult to design oscillators that can pulse that fast, but how? And not just cycling at that speed, but sending info intelligently, i.e.
00011111 11000110 as opposed to 01010101 01010101 (on/off, up/down, however you look at it).

The oscillator is not the source of the data (the “intelligence”). It merely produces a regular clock pulse such as 1 0 1 0 1 0 1 0 1 0… Any data received by the semiconductors is shifted through their registers (“processed”) at the speed of the clock.

Imagine a light bulb being powered by 60 Hz power. It flashes at a rate of 120 times per second (ignoring the rate at which the filament’s glow fades & our persistence of vision for the moment). Now let’s use that light to send Morse code to your friend. By flipping the light switch on and off (at a “human” rate much slower than 120 times per second) you are inserting data (intelligence) onto that medium. The light bulb clocks your data into the system at 120 “flashes” per second, and the data moves from you to your friend at the speed of light.
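In the same spirit, here’s a toy Python sketch: the clock just shifts whatever bits the data source supplies, one per tick. The pattern is the intelligence; the clock is only the metronome:

[code]
# The clock doesn't decide what the bits are -- the data source does.
# Each pass through the loop stands for one clock edge, which shifts
# the next data bit into a register.
data = [0, 0, 0, 1, 1, 1, 1, 1]  # the "message" (the intelligence)
shift_register = [0] * 8

for tick, bit in enumerate(data):
    shift_register = [bit] + shift_register[:-1]
    print(f"tick {tick}: register = {shift_register}")
[/code]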

Maybe somebody else can come up with a better analogy, or maybe I still don’t understand your question.

I’ll take a stab at it…

Ever see a metronome? That’s basically what the clock in a CPU does…it measures off cycles. Everything moves ‘forward’ one step on each ‘tick’ of the clock.

Try this…

Think of a simple mechanical adding machine that adds 1+1 each time it goes. You spin the gear once (i.e. one revolution) and it adds ‘1’ to the final tally. Spin the gear 60 times in one second and you get 60 Hz (cycles per second). Spin the gear 1 billion times per second and you’re at 1 GHz, or 1 billion cycles per second (of course, I don’t think a mechanical device could be designed that could tolerate such speeds).

The ability to cycle faster and faster is limited by various factors in a chip’s design including (but not limited to) power consumption and heat output (the faster it goes the more power it needs and the hotter it gets).

Notice, this says nothing about how much work is getting done. My simple adding machine has a clock rate equal to a high-end CPU, but it does nowhere near the same amount of work. To measure that you need a stat called Operations per Second (sometimes you see it as FLOPS, or Floating Point Operations per Second). Even that doesn’t tell the whole story of a CPU’s power, since there are many other design considerations that affect its total measure as a workhorse, but I’ll stop here for now.

Of course, that nicely points out a marketing scam being perpetrated on consumers by both Intel and AMD (the two big rival CPU manufacturers today). Intel would have you believe that the core clock frequency is the measure of a CPU’s power. The Pentium 4 can manage higher clock frequencies than any AMD CPU on the market. However, as already pointed out, due to various factors an AMD CPU with a slower clock can equal a Pentium 4 clocked higher. AMD, knowing that consumers won’t educate themselves, is now hiding its clock speeds in upcoming CPU designs and is instead tagging each CPU with a number that conveniently equals the ‘expected’ (and somewhat arbitrary) performance of the CPU as measured against an Intel chip.