This question is specifically about home computers (PCs and Macs).
Computers keep getting outdated so quickly these days. One of the ways they do so is that CPUs keep getting faster, but eventually, won’t they reach a speed where they won’t need to get any faster? I mean, at what point will a computer be so fast that things like gaming, Internet connections, and so forth will run so smoothly that increasing the CPU speed will make little to no difference?
I think the problem with that idea is that software designers are always finding new ways to push CPUs harder.
Remember that distributed-computing projects like Seti@home use the CPU time of thousands of computers. There will always be projects that simply need more power. More CPU power just means that people can try things that weren’t possible before.
I suppose if games ever reach their apex and stop being the driving force behind faster home PCs, then other applications will step in. Artificial intelligence, perhaps… assuming anyone ever figures out the software side of AI.
“Well, sure, the Frinkiac-7 looks impressive - Don’t touch it! - But I predict that within 100 years computers will be twice as powerful, 10,000 times larger, and so expensive that only the five richest kings in Europe will own them.”
This page gives some good theory on processor speed limitations.
One of my college professors told me computers would hit a limit at about 40 MHz (this was in the days of 1 MHz CPUs). Since then I’ve ignored anyone who says we’ll hit a limit any time soon. The “silicon wall” mentioned in QED’s link sounds suspiciously familiar.
For what most people use a computer for (e-mail, the Internet, office-type stuff), a 60 MHz Pentium is fast enough. Most of those people have also already tossed their 60 MHz Pentium into the trash and have at least a 400 MHz machine by now. The Internet connection speed is the limiting factor for most of this type of thing.
For people that like 3-D games, there will likely never be a limit on computer power. The more horsepower you can throw at a rendering algorithm, the fancier you can make the game. Maybe by the time Tomb Raider 17 comes out the rendering will be of the quality of the Final Fantasy movie, done in real time.
If you told a person 20 years ago that they could have the equivalent of a Cray supercomputer sitting on their desk, they would have told you that there is no way anyone could need anything faster. Now we almost have that processing power, and there are all kinds of things we can imagine doing if we had more. In 20 more years I think the situation will be similar: we’ll be using computers equivalent to the supercomputers of today, and we’ll be whining that they’re not fast enough and we want the latest upgrade.
But not if you choose to run a certain popular OS.
I don’t know of any OS that’s happy running on a 60 MHz platform, except maybe DOS or Windows 3.x. I remember when Windows 95 came out and I installed it on a 66 MHz IBM. I still don’t know why it didn’t barf, but it sure ran really slowly.
Even when they can make photo-realistic games that look as if you were watching TV, there will still be a need for faster machines. The more calculations you can do per second, the bigger the problems you can take on.
Computers have been fast enough for basic word processing and web use for years; 300 MHz is more than enough for those tasks. Even most photo editing is fast enough on machines of that speed. The only things pushing computer power in the consumer market right now are games and video editing.
I remember an article in the computer press a couple months ago stating that businesses have slowed down their rate of replacement on PCs in the last 2-3 years.
The article noted the current economic downturn, but also argued that once you get above 800 MHz you pretty much can’t tell the difference in business productivity applications. I gotta say, the business PC I’m writing this on at 933 MHz is not noticeably slower in productivity apps than the 1.8 GHz machine on my desk at home… I didn’t get that feeling years ago when comparing a P100 to a P200.
Let’s not forget the bragging rights. If Intel invents some incredibly fast processor tomorrow that costs $30,000, someone will buy it so that they can brag around the water cooler. Since that someone will always be there, the companies will always be working on faster processors.
There are still applications that require faster CPUs; on the desktop front there’s video processing.
For some applications, there will always be demand for faster computers. One of my professors recently had a project that took two weeks on a Beowulf cluster, and that’s far from the biggest project I’ve heard of.
But we’re not talking personal computers, here. Home users aren’t going to be scanning the entire microwave anisotropy database for evidence of nontrivial topologies. For a home user, in fact, I suspect that we’ve already hit the plateau of demand. Sure, there are some fancy games which prefer to run on a 3 GHz machine, but have you ever seen one which required more than 500 MHz? The industry is already running on bragging rights.
c, the speed of light, puts a theoretical upper limit on the speed of processors. Since the drift speed of electrons in semiconductors is much lower than that, and quantum effects put a lower limit on how small you can make conventional IC elements, the only thing left to boost processor speed is new materials with higher electron drift speeds.
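To put rough numbers on that (my own back-of-the-envelope, not something from the link earlier in the thread): in one clock period, even a signal moving at c can only cover c/f. A quick Python sketch, assuming vacuum light speed as the absolute upper bound:

C = 299_792_458  # speed of light in vacuum, m/s; real on-chip signals are slower still

for ghz in (1, 3, 10, 100):
    period = 1.0 / (ghz * 1e9)      # one clock period, in seconds
    distance_mm = C * period * 1e3  # farthest a light-speed signal can travel per cycle, in mm
    print(f"{ghz:>3} GHz: at most {distance_mm:.1f} mm per cycle")

At 3 GHz that works out to about 100 mm per cycle, but at 100 GHz it’s down to 3 mm, roughly the size of the die itself; and since real signals travel well below c, the squeeze arrives even sooner.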
Video rendering, 3D animation, and some games. Video editing doesn’t require a lot of CPU power.
c does NOT limit the speed of processors; it just limits the speed of the wires/fiber optics.
You can still rearrange the wires, be more clever, and shave a billionth of a second off here and there.
Hehe, I love that tag line. Very appropriate. Anyway, as for this thread, I think there’ll have to come a time when you get the most realistic graphics possible, everything in real time, and the only thing that will need to be added is more memory. But that’s just my opinion.
Thanks, I’ll check out the link.
There must be a point, though, where the clock speed would be so fast that you’d have to slice electrons in half to achieve it.
Business applications are not CPU bound even on old hardware.
Windows 95 (retail) runs just fine on a 66 MHz 486, but you need about 16 MB of RAM. If you have 32 MB of RAM, the performance of Office 97 is quite acceptable. However, the post you responded to did say a 60 MHz Pentium, which is quite a bit faster than your 66 MHz DX2.
I don’t know. I mean, say we make it to 10 GHz; do you think people will be complaining then? Or say we go to 100 GHz. For scientists, mathematicians, and so forth, there’s probably no such thing as a final speed, but for home users, I’d have to think we’d reach a limit eventually.