How much faster can processors get?

In fact, these strategies work best when you scale back the clock speed. Architectural features, like bigger caches, more and deeper pipelines, and branch prediction units, have improved performance at least as much as clock speed has.

We’ve got a few generations before we have to worry about physical constraints. We’re doing 65 nanometer now, and feature sizes quite a bit smaller than that are in the lab. New generations aren’t coming out quite as fast as they used to, but that is an economic issue, not a technological one. Mask costs are gigantic and accelerating. To make a new ASIC, say, at those feature sizes, you need enough functionality to cram into it, and enough volume to make up for the mask costs. That’s getting harder and harder.

I think the main reason we’re seeing dual core architectures now is design complexity. What do you do with all those transistors? You really can’t afford to design in new features to make use of them, and no one knows what new features would be worth it. We used to increase cache size, an easy way of using transistors, but that runs into diminishing returns. Stamping out multiple cores is a cheap way of using up the transistors, and allows you to simplify the cores and provide better performance without increasing the clock speed much.

I have a hard time seeing how an 80 core processor would do much for a home PC. Graphics processors are inherently parallel now. For transaction processing, maybe.

That’s a pretty naive way of doing parallel computing. A better way, if you have lots of parallel tasks, is dynamic allocation. You have a queue of jobs to submit, and a list of processors. You ship the jobs out to processors, and when one finishes you send it another. This is really useful when the tasks are not balanced, so one might take much longer than the others. It is also useful in gracefully handling a processor that goes down - all you do is send the task to another one.

It’s not great for parallel processing in the small, like vector processing, but there are a lot of jobs it works well on, and it is in use today. There are lots of details on how to optimize this, but it works fairly well.
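For the curious, here is a rough sketch of that dynamic-allocation scheme in Python, using the standard thread pool. The job names and timings are made up for illustration, and the failure handling (resubmitting a job if a worker dies) is left out:

```python
# Minimal sketch of dynamic task allocation: a fixed pool of workers pulls
# jobs from a shared queue, so a long job ties up only one worker while
# the others keep draining the queue. Job names and durations are made up.
import concurrent.futures
import time

def run_job(name, duration):
    time.sleep(duration)              # stand-in for the real work
    return f"{name} finished after {duration}s"

jobs = [("short-1", 1), ("short-2", 1), ("long", 5), ("short-3", 1)]

# Two "processors": as soon as a worker finishes a job it is handed the next one.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(run_job, name, d) for name, d in jobs]
    for done in concurrent.futures.as_completed(futures):
        print(done.result())
```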

Well, modern processors use very low voltages, and signal lengths are very short. Long wires have things like repeaters on them. Heat is a problem, though that comes more from switching power than from information theory, and I haven’t heard much about capacitance issues beyond what is normally expected. At these feature sizes everything is a problem, but the technology guys seem to fix them.

At this level you need to pay attention to design rules, so you don’t design something that can’t be manufactured. There is a whole new set of tools in this space.

Actually, adding two numbers takes a bunch of clocks. You’ve got the instruction fetch and decode, the operand fetch (very long if the operands have to come from memory), the actual add (one clock, but complicated operations like multiplies take several), and then the store. The speed comes from pipelining independent operations. If you have

Add A,B -> C
Add D,E -> F

In the first clock cycle you fetch the first instruction.
In the second you fetch the second instruction and get A and B from registers.
In the third you do the add of A and B, and fetch D and E.
In the fourth you store the result into C, and add D and E.
In the fifth you store F.

So, while one add takes four cycles, you can do two in five.

Now, if you’ve got

Add A,B -> C
Add C,D -> E

You need to wait for the first to complete before you start the second. A lot of transistors get used up looking for another instruction you can jam in between these two, to keep the pipe full.
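A toy way to see both cases is to model the four stages above (fetch, operand fetch, add, store) and make the operand fetch wait until any register it reads has been written back. This is only a sketch of the idea, not how a real scheduler works:

```python
# Toy in-order pipeline: Fetch, Operand fetch, eXecute (the add), Store.
# An instruction's operand fetch stalls until every register it reads has
# been stored by the earlier instruction that writes it (a RAW hazard).

def simulate(instrs):
    """instrs: list of (sources, destination), e.g. [(("A", "B"), "C")]."""
    written = {}                              # register -> cycle it was stored
    prev = {"F": 0, "O": 0, "X": 0, "S": 0}   # last cycle each stage was used
    last = 0
    for srcs, dest in instrs:
        f = prev["F"] + 1                                    # instruction fetch
        o = max(f + 1, prev["O"] + 1,                        # operand fetch waits for...
                *(written.get(r, 0) + 1 for r in srcs))      # ...any writer's store
        x = max(o + 1, prev["X"] + 1)                        # the add itself
        s = max(x + 1, prev["S"] + 1)                        # store the result
        written[dest] = s
        prev = {"F": f, "O": o, "X": x, "S": s}
        last = s
    return last

print(simulate([(("A", "B"), "C"), (("D", "E"), "F")]))  # independent: 5 cycles
print(simulate([(("A", "B"), "C"), (("C", "D"), "E")]))  # dependent: stalls, 7 cycles
```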

The input/output speed you’re thinking of isn’t the real issue. Even my old PDP-11/20 was faster than the TTY we had it hooked up to. We use up spare cycles with fancy graphics these days. Since the processor isn’t sitting there waiting for the I/O devices, I/O like that is rarely the bottleneck, so it isn’t really an issue.

The real problem is the I/O at the chip boundaries. The busses on a processor are much slower than the internal processor speed. The busses haven’t gotten faster at the same rate as the processor has.

We used to try to solve this problem with lots of I/O - one pin per bit of a word, say. However, synchronizing all those bits, and the crosstalk between them, made that a dead end. Today you serialize a bus and send it out on a single, very fast wire. (Actually two, of course, of opposite polarity - a differential pair.) This is called SerDes (Serializer/Deserializer); it is very fast, and it is getting used more and more.
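In software terms the idea is just shifting the word out one bit at a time and reassembling it on the far end; real SerDes adds line coding, clock recovery, and the differential signaling mentioned above, but a toy version looks like this:

```python
# Toy SerDes: instead of 32 parallel wires, send the word one bit at a
# time (MSB first) over a single fast link and rebuild it at the far end.

def serialize(word, width=32):
    """Yield the bits of `word`, most significant bit first."""
    for i in reversed(range(width)):
        yield (word >> i) & 1

def deserialize(bits):
    """Shift the incoming bits back into a word."""
    word = 0
    for bit in bits:
        word = (word << 1) | bit
    return word

value = 0xDEADBEEF
assert deserialize(serialize(value)) == value
```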

80 core processors right now? Probably not on a regular home desktop for the average user. I could probably use multiple processors, and that will only get better as more software utilizes them. Why ten? Well, running a DVD burning program pretty much eats up my processing power. If I had 10 processors, I could assign a good number of cores to maximize the speed of it, dedicate a couple of cores to running a virtual machine, a core to a personal web server on my PC, one to burning my CD at the same time I am burning my DVD, watch a movie on TV with my ATI Wonder Pro, and surf the internet while running anti-spyware, anti-virus, and a software firewall program like ZoneAlarm. (Hell, my Net-Tech teacher last semester ran 8 different anti-malware programs in the background at one time, but he was excessively paranoid.)

What I can do on my computer is limited because I only have one processor, and as far as I know, you can’t dedicate specific tasks to a certain core or dedicate a core to particular processes. This will come in later years with more advanced operating systems.

80 core processor? If I had a home server set up with a bunch of thin clients, with the aforementioned processes running on each system, I could definitely use 80 processors. Sure, it is like having a riding lawnmower equipped with a V8: more than likely you aren’t going to use the full potential, but when you have a hilly yard, it would be nice.

We’re talking demo here, not real life. :) How many people are actually going to be burning a DVD and a CD at the same time for a significant part of the day? I don’t notice much performance degradation when I burn CDs; I’d expect a lot of that would be disk bound. How much of the DVD compression is done in the drive itself, on the DSP in there? I’d assume a lot of the compression technology is proprietary, and not a part of Windows or the DVD burning utility, but I’m not an expert.

If you’ve got a web server handling any sort of traffic, you’re in the transaction processing space I mentioned. Unless you’ve sent your IP address to every spammer in the universe, I can’t see spyware or antivirus programs running in the background taking a lot of cycles.

Actually, there is one way of using 80 processors I can think of. If new games spin off AIs to different processors, you might be able to use them up, if you have really smart AIs. AIs that smart would probably make any game unwinnable. :)

I was able to do that on dual processor workstations ten years ago. (Running UNIX, of course.) For the most part I would think you would want to assign processes based on processor activity.
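For what it’s worth, you can do this on an ordinary Linux box today. A quick sketch (Linux-only, and the core numbers here are arbitrary examples):

```python
# Pin the current process to specific cores (Linux-only sched_setaffinity).
import os

print("allowed cores before:", os.sched_getaffinity(0))
os.sched_setaffinity(0, {0, 1})   # restrict this process to cores 0 and 1
print("allowed cores after: ", os.sched_getaffinity(0))
```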

Servers, no problem. The server for the thin client I’m using now probably has 64 dual core processors. I shudder at the thought of the average PC user trying to maintain a server, though.

Well, if you were ‘allegedly’ decrypting DVDs or CDs (not that I would do so… allegedly ;) ), that causes my system a bit of degradation. Of course I only have a 1 GB and a 512 MB stick of RAM, so that could be a part of it.

I do see what you are saying about the average user, though I think that server maintenance will get easier and easier, and companies will start offering simple alternatives to servers that are basically the same thing, just easier to maintain (e.g., the new Apple and Windows home servers).

Personally I’m not sure the desktop will last much longer in the average person’s home, with mobile computing becoming much better. It could be a while, but it could affect desktop PC sales (though that’s not really pertinent to the OP, as processors in general will still probably get faster and have more cores).

I suggest reading Feynman’s entire essay: “There’s Plenty of Room at the Bottom”. In fact, read all of his books/talks/essays. =)

I’m with you on the coming death of the desktop. I can see two models for mobile computing.

  1. Today’s laptops, with more disk and better networking. You carry all your stuff along with you, wherever you go.
  2. Thin clients - a type of utility computing. With a really small thin client with some disk, wherever you go you plug into a wireless network, get your session, and you are off and running. I do this at work, in a sense. We have lots of thin clients all over our campuses, and our ID badge is a smart card. If I go from my desk to a conference room, I plug in my card and I have my session right there. Someone moved from California to Japan, got to his new office, plugged in his card, and started right up. Combine the thin client with an eBook reader, and you’ve really got something. We’re not wireless at work, but we could be.

I don’t think that desktops are going anywhere. The reason is that a desktop can be much more than a regular computer. It can be a sound system, gaming system, and a big honking monitor among other things and those take up space. There isn’t any way around that especially when part of the setup is just a cushy desk specifically for work and play. I can’t stand laptops despite being on a computer for work and pleasure for 12+ hours a day.

The thin client has some merit although it is a big mystery where it will go. “Dumb computers” without hard drives or much else that connected to corporate networks cheaply and with centralized control were hot shit in the mid-1990’s but they crashed and burned in spectacular fashion. People often forget that it matters little what anyone invents or even what technology businesses push. It is how people flock to it and make it their own. That is why everyone loves the web and almost no one wants a video phone despite early effort and predictions.

I use a Citrix connection for an IT application suite when I am at work. Even at work it is remote, so I can come home and log into the same thing with no difference. That is very attractive and handy, and I would guess that companies will offer remotely managed application and storage subscriptions for individuals, accessible from anywhere. I know you can come up with some similar things that are available right now, but that is not what I am talking about. There were MP3 players before the iPod, but it took a mature technology and a popular design to make it ubiquitous. I imagine that 5 years from now, someone will offer packages and individual applications that are seamless to use from anywhere along with being virus and hardware failure proof.

OTOH, experience has shown that people will not give up their own hard drives for their cherished files of any type. That leaves desktops and laptops essentially intact but combined with a new model.

What you are saying is that the time the processor spends waiting for the I/O is occupied by a bunch of stuff that has nothing to do with solving the problem or conveying the information.

Depends on whether there is a data dependency. If you’re waiting for input, you aren’t going to get very far until it arrives. For output, though, everything is buffered. If I dump output to my terminal, say, chances are the computation is going to be finished before I get to the end of the output. Of course, if the next step after dumping the data is input, then you have to wait. You usually don’t even have to wait for memory writes to finish anymore; you can cram the next instructions in between the write and the acknowledge. That’s nothing new - when I wrote microcode you did that by hand.
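The buffering point is easy to sketch: hand output to a background writer and keep computing, so the program only stalls if it actually needs the next input. A minimal Python illustration (the “computation” here is just a placeholder):

```python
# Overlap computation with output: a background thread drains the output
# queue while the main thread keeps working.
import queue
import sys
import threading

out_q = queue.Queue()

def writer():
    while True:
        line = out_q.get()
        if line is None:                  # sentinel: no more output coming
            break
        sys.stdout.write(line + "\n")     # the slow part happens here

t = threading.Thread(target=writer)
t.start()

for i in range(5):
    result = i * i                        # stand-in for the real computation
    out_q.put(f"result {i}: {result}")    # hand off the output, keep going

out_q.put(None)
t.join()
```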

A desktop is not a monitor. I can see people using big monitors with their laptops or thin clients. I’m typing on a thin client with a big LCD screen now. Ditto sound system. I’ll agree that those gamers needing video boards and processing power not easily achievable on a laptop will still buy them. But as more and more people come to depend on their computing environment when they are not at home, the number of desktops will continue to decrease. I believe laptop sales have already outpaced desktop sales. I also like a big keyboard and big screen, but there is nothing like being connected when I travel.

The two things that will determine whether thin clients get market share (I don’t think they’ll ever replace desktops) are whether people will trust their data to a third party, and whether net access becomes so ubiquitous that you can hook up from practically anywhere. That is going to take a while to happen. The big advantage is for the mass of people who don’t want to deal with all the administrative tasks that you need to do on a PC these days. I’d get one for my father-in-law in a second, so I wouldn’t have to be on the phone with him week after week figuring out what setting he screwed up this time. When I upgraded thin clients, there was no impact on me. None. The impact of a thin client going down is lower than a cell phone breaking. But it will be a while before the infrastructure is in place to support them, I agree.

I bought my first computer in the early 80’s.
I have a vague idea the CPU speed was 16 hertz.
Does that sound right?

I can remember having debates like this:

“How could anyone possibly need more than 64K of RAM?”

“Why would you ever need a hard drive bigger than 5 megabytes?”

“Who could possibly use more than an 8 MHz computer, other than big corporations, the government, or the military?”

Citing today’s applications as examples of why we don’t need more processor power is completely beside the point. We don’t need more processing power for today’s applications, because today’s applications were designed for the processing power we have. If computers were 1000X faster, you can bet that we would have new applications to take advantage of it - applications we can’t even imagine today, just as 20 years ago people couldn’t imagine that we’d use computers to haul thousands of songs around in our pockets.

And anyway, there are already lots of applications that are limited by current computer speeds. Have you ever tried to edit a 2 hour home video on your computer in DVD resolution? It’s bleemin’ slow, even with the fastest of today’s machines.

Here’s an example of an application we don’t have today, but could certainly have with faster computers - real-time, photorealistic ray traced animation. We’re almost there now. Heavily parallel processors would be great for modeling the real world in games - if I can assign a separate processor to each character in my game, I can write software that will let those people be a lot smarter and behave in more complex ways. And I can assign processors to just handle simulated physics, and really step up the complexity of what’s happening in my simulated world.

Here’s another example: One of the drawbacks to video telephony is that people don’t like to be seen in person when they aren’t dressed well, or if they don’t have their makeup on, or if other people in the room don’t want to be seen. Plus there are bandwidth concerns for shipping high res full motion video in real time. Now imagine super powerful processors that can use a camera to read your facial details, store it as a mesh, ship only the mesh to the other side, and then artificially generate ‘you’ in the other computer by taking the mesh, skinning it with a favorite photograph of yourself, and having it be animated in a photorealistic way that’s hard to tell from real life. Now you’re always looking your best. You don’t age. You can locate your avatar in a location of your choosing.

I’ve already seen this in research labs. A woman in a bathrobe with her hair up is sitting at a desk talking, and what people see on the other end of the line is the same woman, only now she’s in a business suit talking from behind a conference table. Currently, this takes mucho processing power that desktop computers can’t handle. When they can, you’ll see things like this - and more.

When that happens, we’ll be giggling over the old days when we used to think there was no need for computers faster than 5 GHz. But then we’ll be saying, “Who could possibly need more than 2 THz?” And so it goes.

I realized after I posted that my point of view was more relevant to real-time processing. I tend to forget about the other 95% of the computing world.

One example I remember had 8 processors in parallel (15 years ago, that was a big thing) for a real-time simulation. There were a lot of data dependencies that kept the gain way below 8x. Also, the system as a whole was very sensitive to the assignment of tasks to the various processors. The people working on it did a lot of trial and error to find the right mix, and we left it alone after that.

Another system I’m more familiar with is a real-time control system with dual processors. The two pieces of software on the two processors are so tightly coupled that the speedup is less than 1.5x.

However, for non-real-time number crunching, there are more opportunities to take advantage of parallel data paths. But the software has to be tailored to exploit them. These are things I thought about when AMD and Intel started the marketing hype on the dual-cores.
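That pattern is basically Amdahl’s law: if only a fraction p of the work can run in parallel, n processors buy you at most 1 / ((1 - p) + p/n). The fractions below are made up, but they show why 8 processors can land far below 8x and a tightly coupled pair below 1.5x:

```python
# Amdahl's law: speedup on n processors when only a fraction p of the
# work is parallelizable.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(round(speedup(0.70, 8), 2))   # ~2.58x on 8 processors if 70% is parallel
print(round(speedup(0.30, 2), 2))   # ~1.18x on 2 processors if only 30% is parallel
```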

Yup, that’s about right. And it probably had a button to bring it down to 4 so it would run older apps.

Only if you used to routinely race people using an abacus - and lose.

The first home computers had 1 to 4 MHz processors. The first IBM PC had a 4.77 MHz 8088 with an 8-bit bus, and topped out at 640K of RAM. The PC/AT, the ‘high performance’ later version, had a 16-bit 80286 processor running at 6 and later 8 MHz, and a 20MB hard drive. At the time, that seemed positively huge. Now you’d be lucky to fit four MP3s on it. If the processor was fast enough to play them, which it wasn’t.

It would have been 16 MHz, or 16 million hertz. A 16 hertz 286 would imply that the computer would take at least 0.0625 seconds just to add 2 numbers, and the real number would probably be closer to 0.2 seconds.
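A quick check of that arithmetic:

```python
clock_hz = 16                 # the hypothetical "16 hertz" machine
print(1 / clock_hz)           # 0.0625 s per clock cycle
print(3 / clock_hz)           # 0.1875 s if an add takes a few cycles, i.e. ~0.2 s
```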