I do, with audio alone.
Why do people say “who needs it, other than video,” or some such thing? Maybe people like video… if you discount all the reasons that exist, of course you won’t find any reasons.
How fast do you think? It becomes redundant when it exceeds the speed of thought.
Never. Remember, even in The Matrix they were constantly upgrading their system… and they had all the old obsolete programs leftover from the older systems.
Bah. The best available to J. Random HomeUser who wants to run Windows instead of a Real OS, maybe. But there were plenty of machines better than the 486 even before the 486 was invented. Not all of them would fit in the average house, but SGI and DEC* were making desktop workstations quite a while ago that would blow the 486 out of the water.
In other words, give me a MicroVAX and keep your goddamned Intel crap.
*(Now called Digital and owned by Compaq which is now owned by Hewlett-Packard, I think.)
Eh, can you do 800 million mathematical operations a second?
Anyway, it’s an open secret that Microsoft is ‘boosting’ the minimum requirements of its software to keep Intel in the green, as long as Intel makes sure its chips keep conforming to old undocumented behaviors Microsoft software depends on. That is one of the biggest movers in the low-end market, anyway.
Well, in fact we do, we just aren’t conscious of it. But I won’t hijack the thread to explain it.
Besides video games, the only thing that spanks my 3-year old 1.4 jiggyhurts CPU is downloaded anime.
In the past couple of months, video codecs have become so strong that I can’t watch the fast-moving ones full screen anymore. : ( My life is a living HELL now.
-k
You may need more RAM, Kempis. I don’t think the problem lies solely in your processor; 1.4 GHz should be enough for just watching video.
Isn’t there a physical limit to CPU speed?
Poor fellah has a general comprehension problem - must have smilies disabled.
That might be true; 512 MB is getting obsolete. More likely the 3-year-old video card needs more power… an NVidia Quadro2 MXR with 32 MB is embarrassing these days.
However, video codecs are smooshing ever-increasing video quality into files that are small enough to download reasonably, and the work of decoding them falls on the CPU.
As it relates to the OP… perhaps pushing a CPU past 4 GHz brings diminishing returns if the other chips aren’t keeping up (I’m thinking of the video card and whatever else is in there).
-k
And last week I was wondering ‘whatever happened to virtual reality’ and mentioning that computers have become much more powerful since the hype of ten years ago, when someone replied that simulating a starfield surrounding the observer with responsive motion in realtime was bringing their P4 to its knees.
Now, take that UT2K4 game, give it a stereo display at max resolution and stereo sound for each player, update each screen of that display at 75 or 80Hz with less than 10 ms response lag to each player’s head and body movements, show every player’s actions to every other player, add in realistically-located sound and visual effects in response to all the players’ actions, provide logic to power the NPCs… I think we can eventually come up with a way to soak up that extra processor power.
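Here’s a rough budget for just the display side of that, with resolution and player counts I’m assuming purely for the sake of argument:

```python
# Rough frame/throughput budget for the stereo-VR scenario above.
# All of these numbers are assumptions for illustration, not specs.

refresh_hz = 80            # per-eye update rate
resolution = (1600, 1200)  # assumed per-eye resolution
eyes = 2
players = 8                # assumed number of players sharing the box

frame_budget_ms = 1000.0 / refresh_hz
pixels_per_player = resolution[0] * resolution[1] * eyes * refresh_hz

print(f"Frame budget: {frame_budget_ms:.1f} ms (head-tracking lag must stay under ~10 ms)")
print(f"Pixels per player per second: {pixels_per_player:,}")
print(f"For {players} players: {pixels_per_player * players:,} pixels per second")
```

That works out to a few hundred million pixels per player per second before you even get to physics, sound, or AI, which is why I think the headroom will get soaked up.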
Gaming computers are now routinely the most powerful semi-mass-market computers. Only video-transcoding comes close, and it doesn’t have the realtime constraints that a gaming environment needs.
The reason it looks like there is no need for faster computers is that either A) the bottlenecks are currently somewhere else, or B) there is no ‘killer app’ yet which requires that much power.
But you can see some thresholds coming which could trigger some ‘killer apps’. One is the ability to render high-quality ray-traced images in real time. When we get to that point, the nature of many things could change. Games would reach a new level. New types of media might appear which require that power, such as virtual worlds where we interact in real time.
Internet speed is still a bottleneck. When a new internet standard starts to take over which increases speed by an order of magnitude or two, we’ll start seeing real-time high definition video streaming, which could ultimately do to the television networks what the web has done to print media. New distribution models for home video and pay per view become possible.
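To put some rough numbers on that gap (these bitrates are just my ballpark figures, not anything official):

```python
# Ballpark of the bandwidth gap for streaming video.
# Bitrates are rough assumptions for illustration only.

dsl_kbps = 1500          # typical home broadband downstream (~1.5 Mbit/s)
dvd_kbps = 5000          # roughly DVD-quality MPEG-2
hdtv_kbps = 19000        # roughly an ATSC high-definition broadcast stream

for name, rate_kbps in (("DVD-quality video", dvd_kbps), ("HDTV", hdtv_kbps)):
    print(f"{name}: about {rate_kbps / dsl_kbps:.1f}x a 1.5 Mbit/s DSL line")
```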
Before you know it, we’ll be pining for that new 900 GHz computer.
You cannot buy a factory-rated 4.0 gigahertz chip from either AMD or Intel. The fastest chip you can buy right now from Intel is 3.4 GHz. A 4.0 GHz chip is projected for fourth quarter 2004, but there are two things you have to know about that.
One thing you have to know is that the new “E” (codename Prescott) model Pentium 4s have reduced computing power per clock cycle (“longer pipeline” is the term bandied about in the press, but I’m a hobbyist, not a designer, so I’ll skip the explanation). This was necessary in order to ensure that the chips can achieve higher clock rates. Hopefully, the higher MHz will eventually compensate for the lower computing power.
I say hopefully because the other thing you have to know is that the new Prescotts are putting out insane amounts of heat. Projections indicate that a 4.0 GHz P4 will require the wattage of two household lightbulbs. Intel may very well have temporarily hit a heat dissipation wall which will be difficult to overcome.
And then you have AMD. God knows what the actual MHz rating of the fastest AMD chips is, because they’ve been bullshitting everyone with the names of their chips for years now. For example, how ’bout that AMD Athlon 64 3400+? That’s a 3.4 GHz chip too, right? Hell, no. It runs at 2.2 GHz. But it is far more efficient than the Pentium 4 clock-for-clock, and since there are plenty of bozos who would buy a much slower Pentium over an AMD chip simply because the Pentium has a higher clock rate, AMD has to do something to scrape along.
The point here is that clock rates aren’t a decent measure of system performance any more, and haven’t been for some time.
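Here’s the crude arithmetic behind that, with instructions-per-clock numbers I’m inventing purely for illustration (real IPC varies wildly by workload):

```python
# Crude illustration: effective throughput ~ clock rate * instructions per clock (IPC).
# The IPC values below are invented for illustration; real figures vary by workload.

chips = {
    "Pentium 4 'Prescott' 3.4 GHz":  (3.4e9, 0.8),  # longer pipeline, lower IPC (assumed)
    "Pentium 4 'Northwood' 3.4 GHz": (3.4e9, 0.9),  # shorter pipeline (assumed)
    "Athlon 64 3400+ at 2.2 GHz":    (2.2e9, 1.4),  # higher IPC per clock (assumed)
}

for name, (clock_hz, ipc) in chips.items():
    print(f"{name}: ~{clock_hz * ipc / 1e9:.2f} billion instructions/second")
```

With numbers like those, the 2.2 GHz Athlon lands in the same ballpark as a 3.4 GHz P4, which is roughly the story AMD’s model numbers are trying to tell.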
Already there are games and media out there which are likely to tax chip performance, memory, video cards, and hard drive space: stuff like Doom III, high-definition video, and DVD audio formats. Moore’s Law ain’t looking so good these days, but hardware is going to have to continue to advance at breakneck speed (maybe that’s not the best choice of words) for the foreseeable future.
At the other end of the scale, there are myriads of hype victims who buy 2+ GHz machines when all they intend to use their computers for is browsing the web, sending and receiving email, and doing word processing.
A ten year old 33 MHz computer, with OS and software of its timeframe (perhaps with a slim-&-trim but more modern browser, though), would serve them just fine. So you could say nearly 99% of the processor speed they are buying is redundant.
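Quick sanity check on that figure, treating clock speed as a very crude stand-in for performance (which, as others have noted, it isn’t):

```python
# Share of a new machine's clock actually "needed" for email/web/word processing,
# taking the old 33 MHz box as the baseline. Crude illustration only.
old_mhz = 33
new_mhz = 2000
print(f"Share of the new machine's clock actually needed: {old_mhz / new_mhz:.1%}")
# ~1.7%, i.e. roughly 98% of it goes unused
```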
AHunter3 is correct. Most people have a laptop/desktop that is far more powerful than they need.
However, what many people (the OP included, perhaps) don’t realize is that there are a whole lot of other people for whom hardware processing speed is a serious limitation. Specifically, a lot of business and scientific software would derive great benefit from a faster processor.
I have clients who crunch hundreds of millions of customer records, and an order of magnitude more transactions, nightly. Their current processing arrays struggle to get the work done in their overnight window.
Nevermind.
Upon a closer reading of the OP, I see that the question specifically referred to home machines.
Something tangentially related - even [Intel is no longer pushing processor speed](http://news.com.com/2100-1006_3-5172938.html) as the best measure of performance, partly because their current lineup (especially the mobile processors) is rather confusing, and partly because they aren’t likely to ramp up as quickly as in the past, for some of the reasons Sofa King mentioned.
In the last 20 years we’ve gone from ~1 MHz in home computers to ~1 GHz. We’re not quite as likely to go to 1 THz (another 1000-fold increase) in the next 20, since at that speed even light only travels 0.3 mm in a clock period. You’ll start having problems running the whole chip at that speed (though chips will get somewhat smaller). You can run portions of the chip at different speeds (see Sofa King’s last link for more on that), but then the term ‘processor speed’ loses meaning.
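The 0.3 mm figure is just the speed of light divided by the clock frequency:

```python
# How far light travels in one clock period: distance = c / f.
c = 3.0e8  # speed of light in m/s
for f_hz, label in ((1e6, "1 MHz"), (1e9, "1 GHz"), (1e12, "1 THz")):
    print(f"{label}: {c / f_hz * 1000:.3g} mm per clock tick")
# 1 THz gives 0.3 mm -- smaller than the chip itself
```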
This doesn’t mean that we won’t see something like a 1000-fold increase in performance[sup]**[/sup] over the next 20 years; we just probably won’t be talking about clock speed to gauge it.
[sup]**[/sup]The perception of performance is of course more complicated than this – the use of a GUI in home operating systems likely made a more marked improvement in performance than many processor speed upgrades.
Sorry, put the wrong part in quotes, that should be -
Probably, but we don’t know what it is. If you try to do a back-of-the-envelope estimation of the limit, you’ll find that we passed it a couple of years ago. Which probably means that the assumptions going into that estimate are incorrect.
As for scientific usage, this past Christmas, I had my computer running continuously for nearly three weeks for a research project I’m working on. I’d certainly welcome advances that would let me run that in three days or hours, and I’m glad that I didn’t need years for it. And this was on a computer marketed for private individuals (a dual 866 MHz Mac, if you’re curious). A few years ago, I would have either avoided running this job, or I would have had to wheel and deal for high-end computer resources to run it on.
Regarding a physical limit to processing speeds: current semiconductor technology is based on silicon, gallium arsenide, germanium, or indium phosphide.
There is a newly patented process for creating perfectly clear diamonds (ApolloDiamond.com) that can be used as a basis for new processor chips. Theoretically, diamond-based chips would significantly outperform existing chips.