CPU processor speeds

It seems that Intel is making chips faster and faster each year. I remember the days of $2,000.00 for a 120 MHz CPU. Soon you'll be able to buy a 3 GHz CPU. My question is, will this seemingly exponential rate of growth continue, or will it slow down soon?

I mean, at this current rate we could have a 10 GHz+ processor by the time the Beijing Olympics come around.
Also, what the hell would it be like to use a 10 GHz computer?
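
For what it's worth, here's a quick back-of-envelope sketch of that extrapolation. The 3 GHz starting point, the roughly five years until the 2008 Olympics, and both doubling periods are assumptions for illustration only, not anything from a roadmap:

```python
# Naive extrapolation: assume clock speed keeps doubling at a fixed pace.
# All inputs here are assumed figures, not official projections.

def projected_clock_ghz(start_ghz, years, doubling_months):
    """Projected clock speed if it doubles every `doubling_months` months."""
    doublings = (years * 12) / doubling_months
    return start_ghz * 2 ** doublings

for months in (18, 24):
    ghz_2008 = projected_clock_ghz(3.0, 5.0, months)  # roughly 2003 -> 2008
    print(f"doubling every {months} months -> ~{ghz_2008:.0f} GHz by 2008")

# doubling every 18 months -> ~30 GHz by 2008
# doubling every 24 months -> ~17 GHz by 2008
```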

I'm not sure how much faster they can go. I know that the transistor technology in today's CPUs can only get so fast. I'm not sure exactly how fast that is, but it's probably why you are starting to see several multiprocessor machines. As for a 10 GHz computer, it's hard to say what it would be like. You have to take into account the rest of the components of the computer. Right now the hard drive is by far the slowest part of the machine; no matter how fast the processor is, it still has to wait on the hard drive to load the data. Maybe by the time they have 10 GHz machines they will have a new kind of storage. By today's standards, though, you most likely wouldn't notice any difference in load times compared with a 2 GHz or even 1 GHz chip right now. Then again, it's very hard to predict where technology will go… so who knows?
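
To put a rough number on that hard-drive point, here's a little sketch; the 8 ms seek time is an assumed, typical-for-a-desktop-drive figure, not a measured one:

```python
# Back-of-envelope sketch with assumed numbers: how many CPU cycles tick by
# while the processor sits waiting on a single hard-drive seek.

SEEK_TIME_MS = 8.0  # assumed average seek time for a desktop drive

for clock_ghz in (1, 2, 10):
    cycles_wasted = clock_ghz * 1e9 * (SEEK_TIME_MS / 1000)
    print(f"{clock_ghz} GHz CPU: ~{cycles_wasted / 1e6:.0f} million cycles per seek")

# 1 GHz CPU: ~8 million cycles per seek
# 2 GHz CPU: ~16 million cycles per seek
# 10 GHz CPU: ~80 million cycles per seek
```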

There’s a term for this phenomenon: Moore’s Law. The version I’ve heard is that the number of transistors per square inch on a chip will double every 18 months, rather than the two years noted in the CNET article, so maybe the pace has already slowed down.

(I'm not a CPU designer, but my understanding is that by cramming more transistors into a given area, you reduce the distance between them, and thus the time an electrical signal takes to travel from point to point, which speeds up the processor.)
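
To put a rough number on that signal-travel idea, here's a quick sketch; the assumption that an on-chip signal moves at about half the speed of light is an illustrative guess, and real designs break long paths into shorter pipelined hops anyway:

```python
# Illustrative only: how far a signal could travel in one clock period,
# assuming it propagates at roughly half the speed of light. The
# propagation speed is an assumption, not a measured value.

C = 3.0e8              # speed of light, m/s
SIGNAL_SPEED = C / 2   # assumed on-chip propagation speed

for clock_ghz in (1, 3, 10):
    period_s = 1.0 / (clock_ghz * 1e9)
    reach_cm = SIGNAL_SPEED * period_s * 100
    print(f"{clock_ghz} GHz: one clock period = {period_s * 1e12:.0f} ps, "
          f"signal reach ~{reach_cm:.1f} cm")

# 1 GHz: one clock period = 1000 ps, signal reach ~15.0 cm
# 3 GHz: one clock period = 333 ps, signal reach ~5.0 cm
# 10 GHz: one clock period = 100 ps, signal reach ~1.5 cm
```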

Anyway, how much longer Moore’s Law will hold is a subject of much discussion in the tech community, and it’s certainly not a question that I’m prepared to answer. Googling on the term turns up about 63,000 results, if you wanted to dive deeper into the subject.

One thing to note is that the "doubling every 18 months" of Moore's Law is not very smooth. We just went through an incredibly fast run-up in speed over the last five years. In all probability we will be in for a slowdown in growth over the next few years. Part of that slowdown is that certain technical limits are being reached, but part of it will be caused by the fact that processor speed has outpaced users' needs (for the first time ever). This greatly reduces the financial incentive to go all out for more speed.

The growth rate will pick up again after that, but that requires a highly unpredictable technological advance. It will come, but who knows what or when.

Part of the rapid increase in speed was, and is, due to AMD coming out with a chip that competes head to head with Intel's (it was faster than Intel's best for a while).

Before this, AMD and others (Cyrix, for one) made lower-end chips, kind of cheap knock-offs of Intel's lower end. Actually, they usually took a given series quite a bit further than Intel did and often made the fastest chip of that series (Intel having already moved on to the next-generation processor).

Example: Intel's fastest 486 chip was the 66 MHz 486 DX2; AMD's was (basically) a 133 MHz 486 DX4, but by the time AMD came out with it, Intel was well into the Pentium game.

When the Athlon came out, everything changed, and both companies pushed to come out with the fastest chip. This pushing outstripped the software's ability to use the processing power. As noted above, the hard drive was, and is, a bottleneck, but the Internet has been an even bigger one; it is several orders of magnitude slower than the slowest hard drive, even with a high-speed (broadband) connection.

It would be a hard sell to continue at this pace. With the price of RAM dropping, perhaps a hard drive made of memory chips will be a practical reality soon, but then again we are moving more and more toward having everything online, which is another bottleneck I don't see breaking anytime soon.

It will likely slow down somewhat, mostly because exponentially increasing manufacturing costs are competing with the geometrically increasing number of chips per wafer, and that will reduce affordability.

Intel already has a roadmap defined to 10 GHz with current technology.

Not much different. Software bloat is always ready to suck up your CPU cycles.

However, if you've ever tried to do any digital photography editing or, heaven forbid, digital movie editing, you'll be desperate for the 10 GHz CPU!

According to one of my college professors, we would reach physical limits which would stop us from ever getting much past 40 MHz (this was in the day of 1 MHz CPUs). Since then I’ve stopped taking all predictions like that seriously.

Yeah, I want a 4 GHz processor,
and 4 GB of RAM,
and a 1 GHz motherboard bus,
and a 20,000 RPM 100 GB hard drive.
Well, maybe two parallel 4 GHz processors
and two 20,000 RPM 100 GB hard drives.

But until I can have a T-1 pipe or bigger (at $1,000 to $2,000 a month I can't do it…), and/or the net servers and the sites themselves get faster, what's the point? I end up waiting… :smack:

Oh, heaven forbid I learn to type faster and learn to spell… :smack:

In fact, AMD and Cyrix made CPUs that were radically different in design from Intel's, and (especially AMD) targeted not the low-end market but rather the middle of the speed range. The worst of the "cheap knock-offs" were Intel's own 486SX line, which performed about like the older 386s.

And Intel stayed in the 486 "game" for a good spell after introducing the first Pentiums. I distinctly recall genuine Intel 486 DX4s (a misnomer; they were clock-tripled, not quadrupled) that ran at 100 and 120 MHz. The AMD chip you refer to was the 5x86-133, or as it was sometimes called, the 5x86-P75, a reference to AMD's assertion that it would run with a 75 MHz Pentium. I sold a boatload of those puppies on eBay not so many years ago… :slight_smile:

Intel's new 'Prescott' chips, debuting at 3.2 GHz sometime in mid-2003, will use a 0.09-micron process instead of the 0.13-micron process used in the current P4 chips and the 0.18-micron process used in the initial P4 offerings.
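
A rough sketch of what those shrinks buy, using the feature sizes above and the usual rule of thumb that transistor density scales with the inverse square of the feature size (treat the exact multipliers as illustrative, not official figures):

```python
# Rule-of-thumb sketch: density gain from a die shrink scales roughly with
# the inverse square of the feature size. Feature sizes from the post above.

for old_um, new_um in [(0.18, 0.13), (0.13, 0.09)]:
    density_gain = (old_um / new_um) ** 2
    print(f"{old_um} um -> {new_um} um: ~{density_gain:.1f}x the transistors "
          f"in the same area")

# 0.18 um -> 0.13 um: ~1.9x the transistors in the same area
# 0.13 um -> 0.09 um: ~2.1x the transistors in the same area
```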

It's entirely possible that at some point typical chip cooling solutions will become more expensive than the chip itself! I believe there are some problems related to electromigration once Intel adopts the 0.09-micron process. Pumping these up to 10 GHz seems unlikely.

What chip manufacturers will most probably have to improve is the efficiency of processing instructions rather than raw clock speed. Marketing-wise, this means more buzz about features X, Y, and Z and a de-emphasis on raw MHz. Intel is doing the former with Hyper-Threading on the P4 3.06, and AMD has been doing the latter with its performance-rating scheme.
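
As a toy illustration of the efficiency-versus-clock point: effective throughput is roughly clock frequency times instructions per clock (IPC). The IPC numbers in this sketch are invented for the example and don't correspond to any real Intel or AMD part:

```python
# Toy illustration: a lower-clocked chip with higher IPC can out-run a
# higher-clocked one. The IPC figures below are made up for illustration.

chips = {
    "Chip A (high clock, lower IPC)": {"ghz": 3.0, "ipc": 1.0},
    "Chip B (lower clock, higher IPC)": {"ghz": 2.0, "ipc": 1.8},
}

for name, c in chips.items():
    throughput = c["ghz"] * c["ipc"]  # billions of instructions per second
    print(f"{name}: ~{throughput:.1f} billion instructions/sec")

# Chip A (high clock, lower IPC): ~3.0 billion instructions/sec
# Chip B (lower clock, higher IPC): ~3.6 billion instructions/sec
```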

I'm willing to bet that they already have a PC-based 10 GHz processor.

Think about it: it's all about the money.

Let's make an 800 MHz chip, and sell a PC for $2,000.
Let's make a 900 MHz chip, and sell a PC for $2,000.
Let's make a 1.0 GHz chip, and sell a PC for $2,000.
Let's make a 1.2 GHz chip, and sell a PC for $2,000.
Let's make a 1.3 GHz chip, and sell a PC for $2,000.
Let's make a 1.4 GHz chip, and sell a PC for $2,000.
Let's make a 1.5 GHz chip, and sell a PC for $2,000.
Let's make a 1.6 GHz chip, and sell a PC for $2,000.
Let's make a 1.7 GHz chip, and sell a PC for $2,000.
Let's make a 1.8 GHz chip, and sell a PC for $2,000.
Let's make a 1.9 GHz chip, and sell a PC for $2,000.
Let's make a 2.0 GHz chip, and sell a PC for $2,000.
The industry would rather sell you a 5 GHz chip and make you upgrade when there's a faster one.

Not a chance.

While company A (let's call them Outel) is saying "Let's make a 1.8 GHz CPU for $2,000," company B (let's call them "AND"), which wants a bigger slice of the market, is saying "Let's make a 2.1 GHz CPU for $1,990 and undercut Outel's target market." Outel is therefore not going to sit on a 2.4 GHz processor if they have one.

Um, even if they sell a PC for $2000, you can replace a chip without buying a new monitor and case O_o

If they had a 10 GHz processor, they'd also be completely dominating the server market. No way in hell they would skip out on that.

But back to the point… there are always theoretical limits to how fast things can go. At this point, however, there aren't many uses where a 5.0 GHz processor would be more effective than a 3.0 GHz processor, the main exceptions being data analysis, servers, possibly gaming, video editing and rendering, and so on. There are a few reasons for this. One, most of the population doesn't use anything that requires more than 500 MHz, much less 3000 MHz, much less 5000 MHz. Two, the rest of the computer is already running so much slower than the processor that you generally have a lot of idle processor time while opening a file from the hard drive or even while accessing RAM.
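
A quick sketch of that second point, Amdahl's-law style. The 20% CPU-bound split and the 10-second baseline are assumptions picked just to show the shape of the effect:

```python
# Hedged sketch with an assumed workload split: if only a fraction of a
# task is CPU-bound and the rest is waiting on the disk or RAM, a faster
# clock only shrinks the CPU-bound part.

CPU_BOUND_FRACTION = 0.2   # assumed: 20% of the task is limited by the CPU
BASELINE_GHZ = 3.0

def total_time(base_time, cpu_ghz):
    cpu_part = base_time * CPU_BOUND_FRACTION * (BASELINE_GHZ / cpu_ghz)
    io_part = base_time * (1 - CPU_BOUND_FRACTION)
    return cpu_part + io_part

base = 10.0  # seconds for the whole task on the 3 GHz baseline (assumed)
for ghz in (3.0, 5.0, 10.0):
    print(f"{ghz} GHz: task takes ~{total_time(base, ghz):.1f} s")

# 3.0 GHz: task takes ~10.0 s
# 5.0 GHz: task takes ~9.2 s
# 10.0 GHz: task takes ~8.6 s
```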

Lastly, a LOT of people are looking at changing the way processors work: getting rid of the central processor entirely, dividing the processor up into several hyper-specialized sub-processors (nVidia is pushing for this), using tandem multi-processor units handling different tasks (a subtle difference), changing the materials and using light or organics, or moving away from binary entirely and going to base 4 or something. The concept of a silicon binary CPU could seem laughable in a decade. On top of all that, a lot of people are looking at so-called "clockless" processors. You could look into processor-technology research for years and still not know everything. Technologies are the biggest thing since sliced bread one day and never heard from again the next.

The best thing you could do is follow hardcore techie enthusiast websites like http://www.arstechnica.com/ and http://www.hardocp.com/ that cover the latest advances, theories, news, announcements, roadmaps, white papers, research, tools, prices, companies, etc. daily.

My cooler already costs as much as the processor; both are worth $40 right now.

The focus is increasingly shifting from MHz, or even performance, to heat generation. After all, you can always put 100 processors in a PC, but if each one draws 100 watts you will go broke on the electricity and air conditioning.
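
Just to hang a rough dollar figure on that: taking the 100 chips at 100 watts each, an assumed $0.10/kWh electricity rate, and round-the-clock operation (and ignoring the air-conditioning load entirely):

```python
# Rough cost sketch for the "100 processors at 100 W each" scenario.
# The electricity rate is an assumed figure, not from the thread.

NUM_CPUS = 100
WATTS_PER_CPU = 100
PRICE_PER_KWH = 0.10   # assumed rate, dollars per kWh

kw = NUM_CPUS * WATTS_PER_CPU / 1000   # 10 kW of CPUs
kwh_per_month = kw * 24 * 30           # running around the clock
cost_per_month = kwh_per_month * PRICE_PER_KWH

print(f"{kw:.0f} kW draw -> {kwh_per_month:.0f} kWh/month -> ~${cost_per_month:.0f}/month")
# 10 kW draw -> 7200 kWh/month -> ~$720/month
```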

I don't know when, but at some point the progress in the MHz area with silicon will have to stop. Maybe this will happen at 10 GHz, maybe at 100, but it will happen. Then we will either switch to another technology such as optics or spintronics, or we will have to increase performance purely by increasing parallelism in processing. That will require decreasing power dissipation above anything else.

Perhaps 30 years from now we will not be overclocking but underclocking our processors, in order to cut down on the electricity bill. :slight_smile:

30 years from now the term “clock speed of a processor” will be about as relevant as punch cards are today.

I think the only reason processor speeds are still going up so fast is that the two big chip manufacturers are STILL managing to dupe even the most computer-savvy buyers into vastly overestimating the effect a CPU upgrade can have on performance.

I hear quite often that a "gaming machine" with a top-of-the-line video card MUST have at least a 2.5 GHz chip, otherwise it isn't doing the card justice; better make it 3 GHz instead, just to be sure. However, benchmarking an identical machine with different processors shows roughly a 0.1 fps difference out of about 60 fps between a 1.6 GHz chip and a 3 GHz chip in Unreal Tournament 2003 at 1600x1200, and something like a 0.5 fps difference out of about 80 fps at 1280x1024. What that means in practice is that in the games where performance actually matters, the difference is imperceptible.

That's not even counting the fact that 99% of people aren't hardcore gamers. In reality, there shouldn't be much interest from the general public in faster processors until another "killer app" comes along.

Well, Shalmanese, I have an Epson scanner with 2400 x 4800 dpi HARDWARE resolution and 48-bit color depth. It easily generates 100-megabyte and bigger scans, and when you start to play with those in Photoshop… I have a 1.33 GHz Athlon, and it takes a couple of seconds to apply an effect.

Yeah, I guess you could blame it on the memory subsystem… I guess it would depend on the algorithms used; it's hard to guess what the cache miss rate is…

Shalmanese, that's mostly true, but the primary reason people want the big-GHz machines isn't because they boost the benchmark score; it's so they can run mIRC and WinAmp in the background, tab out of UT2K3 at 1600x1200 running 4x FSAA, and still keep a decent frame rate. There is also some truth to the observation that a brand-spanking-new GeForce 4 on a 700 MHz computer is about as useful as a GeForce 2 on the same machine.

The only other real consumer uses for the fast processors are, as Vasyachkin mentions, graphics and video editing, data processing, and the like, so 90% of the people who bought a top-of-the-line Dell for Christmas bought something way more powerful than they need.

OTOH, the companies know this, and they stop selling one speed of chip for every new speed they introduce… so if you buy a new computer, the price range means you might as well get a 2.5 GHz CPU now, even if you don't need it. Additionally, they upgrade the FSB speeds and RAM types, meaning you need a new board, and it's an endless loop.