How much faster can processors get?

With chip feature size quickly approaching physical limits, how much faster can processors get?

Thanks for your help,
Rob

Nobody really knows. It’s definitely getting harder and harder to increase clock speeds, but there are advances that keep pushing the limit further. We’ve moved from silicon to strained silicon, and for the last few years there has been talk about using diamond for its greater resistance to heat. I remember a Wired article from a few years back which mentioned this. It’s towards the end of the article.

But I’m no expert in chip-making so take this with a grain of salt.

Your question implies real-world performance, and that is a different question from clock speed. We have already moved to multiple cores on a single processor, with dual-core chips being common these days. Researchers are reportedly already experimenting with chips with 80 cores or so. That can represent huge amounts of parallel processing and will increase processing speed greatly over the next several years, independently of clock speed advances.

There are a number of strategies like this that could keep performance increasing at the same rate we have seen in the past, so we still don’t have a clue where the end is.

Pardon my ignorance, but can’t we continue to get increases in speed by running multiple processors in parallel? Isn’t that the model we’re heading towards?

As I stated above, multiple cores are essentially multiple processors on the same chip. However, you could run several of those in parallel as well to get hundreds of processors if you wanted. That is how modern supercomputers are built today, with regular off-the-shelf processors. The problem is the software. It is usually difficult to get software to break up tasks neatly to take advantage of many processors.

Some problems can be solved much faster in parallel. Some problems cannot.

To give you an idea of how fast computers are now, consider the statistic I posted in another thread: if you have a 2.6 GHz computer and sit about two feet away from your monitor, then your computer goes through about five clock cycles (thus performing at least five calculations/operations) in the time it takes light to travel from the monitor to your eyes.
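
As a sanity check on that figure, here’s the arithmetic in a few lines of Python (purely illustrative; the numbers come straight from the statistic above):

```python
# Sanity check on the light-travel statistic above.
C = 2.998e8             # speed of light, m/s
DISTANCE = 2 * 0.3048   # two feet, in metres
CLOCK_HZ = 2.6e9        # a 2.6 GHz clock

travel_time = DISTANCE / C        # ~2.0 nanoseconds
cycles = travel_time * CLOCK_HZ   # ~5.3 clock cycles
print(f"{travel_time * 1e9:.2f} ns of light travel = {cycles:.1f} clock cycles")
```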

Right. Input-output speed seems to me to be the real limitation. Processors work faster than the data can be displayed and interpreted now, don’t they?

If we gave out awards for Neat Bit of Ignorance Fought, you would win for that entry. :slight_smile:

OTOH, isn’t a “calculation/operation” for one clock cycle a pretty simple step, like adding two integers (not floating-point numbers)? It takes millions of these simple ops just to display a single screen.

Yeah, pretty much. Still, let’s see you do it that fast. :slight_smile:

Well, yes and no. Certainly that’s the case for ordinary desktop applications - web browsing, e-mail, and so forth.

But for server applications (web hosting, transaction processing, and so forth), it’s still quite easy to overload a single processor. And for graphics and scientific calculations, we’ll pretty much never have a computer that’s fast enough. Not to mention games.

The gains you achieve by parallel processing are not linear. In other words, two processors in parallel don’t do the job twice as fast. There are penalties that occur because the software on one processor has to wait on data or I/O that is linked to the other processor. The more processors you add in parallel, the worse it gets.

I’m sure there is a technical name for this phenomenon, but I don’t know it.

If I remember my communications electronics correctly, the technical limitation to single-processor speed will likely come from the stray capacitances and inductances that exist in all electronic circuits. The formulas for capacitive and inductive reactance tell us that, as the frequency of a signal increases, stray inductance impedes the signal more and more, while the falling reactance of stray capacitance shunts more of it away. (For those who want to argue that reactance applies only to AC circuits, I say that the pulsed DC of a data stream is effectively AC.)
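
For concreteness, the standard formulas are X_L = 2πfL and X_C = 1/(2πfC). Here’s a quick sketch of how they scale with frequency (the stray component values are made up purely for illustration):

```python
import math

# Inductive reactance X_L = 2*pi*f*L grows linearly with frequency, so stray
# series inductance impedes a signal more and more as it speeds up.
# Capacitive reactance X_C = 1/(2*pi*f*C) shrinks with frequency, so stray
# capacitance shunts away more of a fast signal.
L = 1e-9   # 1 nH of stray inductance (assumed value)
C = 1e-12  # 1 pF of stray capacitance (assumed value)

for f in (1e6, 1e9, 10e9):  # 1 MHz, 1 GHz, 10 GHz
    x_l = 2 * math.pi * f * L
    x_c = 1 / (2 * math.pi * f * C)
    print(f"{f/1e9:6.3f} GHz: X_L = {x_l:10.4f} ohm, X_C = {x_c:12.2f} ohm")
```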

I’ve seen reports that a new approach to the basic construction may allow processor density to keep increasing.

Whatever happened to processors built from microlenses, making them optical processors? Didn’t I hear about some sort of research being done on this in the '90s?

That depends on the problem you’re working on. I recently did some calculations for a research project where, if I had 5,000 processors, I really could have done the job 5,000 times faster. Basically, I was doing the same calculation 60,000 times, with slightly different parameters for each, and the results of each calculation were independent of all the others. On the other hand, there are also tasks which will run exactly as fast on 5,000 processors as on a single processor, if every step has to wait for the results of the previous step. Fortunately, most of the things folks burn a lot of clock cycles on are, in fact, the sort of thing that parallelizes reasonably well, so for practical purposes we’ll continue to see speed increases for a while yet.
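
To sketch what that kind of “embarrassingly parallel” job looks like in code (run_model and the parameter sweep here are hypothetical stand-ins, not my actual project):

```python
from multiprocessing import Pool

# Hypothetical stand-in for one of the 60,000 independent calculations:
# each call depends only on its own parameters, never on another call's result.
def run_model(params):
    a, b = params
    return sum((a * n + b) % 97 for n in range(10_000))

if __name__ == "__main__":
    # 300 x 200 = 60,000 parameter sets, one independent job each
    sweep = [(a, b) for a in range(300) for b in range(200)]
    with Pool() as pool:               # one worker process per available core
        results = pool.map(run_model, sweep)
    print(len(results), "independent results")
```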

Watch closely, now. At no time do my arms leave my sleeves…

Optical processors should be orders of magnitude faster than electrical processors, but some technological leaps still need to be made in the area of non-linear optics. As far as I know, optical chips have yet to be made.

Just a side note of interest, ever heard of the “transputer”?

A colleague of mine in the '80s was enthusiastic about this. The idea sounded good – if you need more computing power, you just plug in some more processors. The trick was in the compiler, which forced you to recompile your program to optimize it for the exact number of units you had available. Pretty neat idea, but it seems to have bitten the dust.

Most of the technical names are actually pretty intuitive. The extra processing you have to do to parcel out data and jobs to each processor is parallel overhead; the factor by which you can increase the speed of a job by parallelizing it is the speedup.
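
For what it’s worth, the classic formula relating the two is Amdahl’s law: if a fraction p of a job can run in parallel, the best possible speedup on n processors is 1 / ((1 - p) + p/n). A quick illustration:

```python
# Amdahl's law: speedup on n processors when a fraction p of the job parallelizes.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

# Even with 95% of the work parallel, the speedup flattens out fast:
for n in (2, 8, 64, 1024):
    print(f"{n:5d} processors -> {speedup(0.95, n):5.2f}x")
# ...topping out near 1 / (1 - 0.95) = 20x, no matter how many processors you add.
```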

Generally, in order to write good parallelized software, you need to write in a language that’s designed for it. In particular, functional languages are nice because they have no side effects, so you can easily run parts of them in parallel and not have to worry that one part is going to futz with the data that another part needed.
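
As a toy illustration of that point (in Python rather than a functional language; square here is just an invented side-effect-free function):

```python
from concurrent.futures import ThreadPoolExecutor

# square() is pure: its result depends only on its argument, so the calls can
# run in any order, on any thread, without touching each other's data.
def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(1_000)))

# Combining happens afterwards, in one place. Had the workers instead updated
# a shared running total, they would have needed a lock to avoid corrupting it.
print(sum(results))
```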

This, and the other entries on parallel processing here, are assuming that a significant amount of computation is the weather-simulation variety that supercomputers are for. Most computers today are handling many different threads at once; computers that handle transaction processing are handling thousands or tens of thousands of threads. (Just think of Google’s computing needs. Lots of independent queries, all very simple.) Multicore processors like Niagara are designed to handle this kind of workload, and you don’t need to worry about the sort of parallel programming you needed for the Illiac-IV, just to show my age.

We’ve had this sort of parallel processing for a long time. In the old days, when you sent out a print job your computer had to handle it, getting interrupted to send more data to the printer. Today printers come with their own memory and processors, so all that work is done in parallel. Ditto for the disk, and the CD, and everything else. PCs have lots of processors, even before multiple-core ones.