Intel Shows Off 80 Core CPU!!!!!

Sadly, it’ll be at least 4 years before they hit the market.

Can you imagine the quality of graphics that monster’ll have? You won’t be able to tell if it’s real or fake, and it’ll be generating the images on the fly!

Is this the beginning of the end? We can’t make processors any faster, so we’ll just throw eighty of them on a chip? It’s a bit disappointing to me; I had this idea of computing speed continuing to grow until we had computers like the ones on Star Trek: The Next Generation or something. I didn’t think it would end while I was this young; I thought things would improve indefinitely. Are there any technologies on the horizon that will make for faster individual cores?

The laws of physics will come into play at some point; you can’t just keep on making them run faster. Anyway, I think you’re missing the point by a massive margin; if we can develop scalable processing, then the power (and, in practical terms, the usable speed) will only be limited by budget and physical space.

In any case, I think you’ll find that the computers in STTNG supposedly use distributed processing, or something like it.

Isn’t this going to completely change how software is written and designed?

Yes, but that has already begun, and most of it will probably affect the way operating systems work. For John Q Application Programmer, there may not be all that much change to application design, just that something rather different happens when the application is compiled.

Yeah, I recently saw a demo of an HPC machine with 4 nodes that was built out of components for $2k as part of a contest. The thing ran an Excel spreadsheet with a portfolio of 50 callable bonds through a Monte Carlo simulation program written by Cornell University. 40 seconds to crunch all the bonds. When I worked at Swiss Bank 10 years ago, it would take 40 seconds to crunch one bond, and they spent a buttload of money to have that power.
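
For the curious, that kind of job is embarrassingly parallel: every bond (and every random path within a bond) prices independently, so a pool of workers scales almost linearly with core count. Here’s a toy sketch of the idea in Python; it’s nothing like the actual Cornell program, it ignores the call feature entirely, and all the numbers are made up:

```python
# Toy Monte Carlo bond pricer -- a sketch, not the real thing.
# Ignores the call option; rates follow a crude random walk.
import random
from multiprocessing import Pool

def price_bond(seed, n_paths=10_000, coupon=0.05, years=10, rate=0.04):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        r = rate
        pv = 0.0
        for t in range(1, years + 1):
            r += rng.gauss(0.0, 0.01)          # wander the short rate
            pv += coupon / (1.0 + r) ** t      # discounted coupon
        pv += 1.0 / (1.0 + r) ** years         # principal at maturity
        total += pv
    return total / n_paths

if __name__ == "__main__":
    with Pool() as pool:                          # one worker per core
        prices = pool.map(price_bond, range(50))  # 50 bonds in parallel
    print(prices[:5])
```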

I can’t imagine what this chip will be like if you’re a real spreadsheet jockey.

Computer, I’d like to play Solitaire.

Just that?

Yes.

I won’t enjoy it.

I’m not asking you to enjoy it, just do it, will you?

Alright. I’ll do it. Here I am, brain the size of a planet and you ask me to play Solitaire. Call that job satisfaction? 'Cos I don’t.

Careful. You may get it to want to put its head into a bucket of water, and if you do that you will fry the whole thing.
:smiley:

Not necessarily. A lot of tasks don’t parallelize well, so more cores doesn’t make any difference for them.
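
That’s basically Amdahl’s law: if some fraction of a job is stuck running serially, the speedup flattens out no matter how many cores you add. A quick back-of-the-envelope in Python:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n is the core count.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: 80 cores -> {speedup(p, 80):.1f}x")
# Barely 2x from 80 cores when half the work is serial.
```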

True, but some do. Having two has made a fair difference to what I do, and the bloke up the corridor is testing the software on seven cores and people are itching for it.

Sure, there are plenty of technologies on the horizon that will make for faster individual processors. You have optical processors, for instance, and there was just a major breakthrough on that front from Intel: they managed to inscribe a laser into a silicon chip. Optical computing helps us get around some of the problems we have in scaling wires down any smaller without losing speed. Then there’s quantum computing, which has unbelievable potential but is still in its infancy. The reason you’re seeing multicore processors is economic: we’re wringing those last bits of efficiency and speed from our existing chip manufacturing processes, which is a lot cheaper than adopting and promoting entirely new and untried technologies.

Anyone care to share some examples of what sort of applications will benefit from parallelization (is that a word?) and which won’t?
I’m guessing, as mentioned above, that the big number crunchers will do well, but my ability to make yet another PowerPoint presentation won’t be impacted much? I’m a bit out of touch with what’s what in computing technology these days.
Cheers

Graphics rendering
Computer modelling/simulation
Analysis and processing of large data sets (like SETI data, or the results of clinical trials)
Artificial intelligence

Well, I would bet that with an 80-core processor the days of needing a separate graphics card and sound card would be gone. You could just devote a few dozen of the cores to crunching polygons and Dolby surround. It would be faster too, which means more realistic graphics and higher refresh rates. I just saw an article that said Sharp had invented a way to produce an LCD screen with three viewable angles; you could run 3 desktops and 3 different displays with multiple programs from one chip and screen (not that you’d want to… but you could, heh).

It’s true, though, that PowerPoint won’t be greatly affected.

If you’re familiar with the P vs. NP conjecture, there’s an analogous situation with P and NC, which is the class of problems that have efficient parallel solutions. It’s known that every problem in NC is in P, but whether NC = P is an open question. See here for more details.
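
For a toy feel for the difference (my own example, not from that link): summing n numbers is the classic nicely-parallel case, since you can add pairs in a tree and finish in O(log n) parallel rounds, while iterating a function n times chains every step on the previous one and has no obvious parallel shortcut. A Python sketch:

```python
# Tree reduction vs. a sequential dependency chain -- sketch only.
from concurrent.futures import ProcessPoolExecutor

def tree_sum(xs):
    with ProcessPoolExecutor() as ex:           # one worker per core
        while len(xs) > 1:
            pairs = [xs[i:i + 2] for i in range(0, len(xs), 2)]
            xs = list(ex.map(sum, pairs))       # all pairs add at once
    return xs[0]

def iterate(f, x, n):
    for _ in range(n):                          # step k needs step k-1
        x = f(x)
    return x

if __name__ == "__main__":
    print(tree_sum(list(range(16))))            # 120
```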

Games, in part, won’t, at least as far as pure graphics are concerned, since that’s primarily a function of the video card. However, in-game physics could be made significantly more complex, enough that games could do away with the need for a dedicated physics card, with the added benefit of being able to perform other math-intensive work such as generating realistic natural elements (fire, water, trees, foliage) and so on.
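
Something like this sketch is what I mean on the physics front. It’s hypothetical (no real engine is written this way), but it shows why a pile of independent particles splits nicely across cores:

```python
# Sketch: farming one physics step out across cores.
from concurrent.futures import ProcessPoolExecutor

GRAVITY = -9.81
DT = 1.0 / 60.0                                 # one 60 Hz frame

def step_particle(p):
    x, y, vx, vy = p
    vy += GRAVITY * DT                          # integrate velocity
    return (x + vx * DT, y + vy * DT, vx, vy)   # integrate position

def step_world(particles, ex):
    # particles don't interact here, so chunk them across the pool
    return list(ex.map(step_particle, particles, chunksize=1024))

if __name__ == "__main__":
    world = [(0.0, 100.0, 1.0, 0.0)] * 10_000
    with ProcessPoolExecutor() as ex:
        for _ in range(60):                     # simulate one second
            world = step_world(world, ex)
    print(world[0])
```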

3D rendering and ray tracing, video editing (specifically visual effects and post-production compression), audio processing (particularly physical modelling) and softsynths, emulation, pure math operations (number crunching) and so on would benefit greatly from massively parallel processing in a single unit without the need for building huge rendering farms.

OK. I work with a number-crunching economic model that uses specialised software. It’s more or less a big system of simultaneous equations where non-linear equations are solved by repeated linear approximation and extrapolation. The impact of policies is a deviation from a forecast of the economy through time. So to run a simulation, you first run the forecast, where numerous industries buy and sell stuff and capital stocks and debts and depreciation accumulate over time. You do this in lots of little steps for each period to eliminate linearisation errors. Then you run the policy and see what difference it makes.

With parallel processing you can work on year one of the policy while year two of the forecast is running. With lots of cores, you could run different parts of each job simultaneously. So if your method for solving a period of the model involves comparing the results from 3 independent multi-step calculations and extrapolating, you could use 3 processors to solve the first period of the model, then let those 3 processors solve year two and use another 3 to solve a deviation simulation that relies on the first year of the forecast.
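
If anyone wants to see the shape of that “3 independent multi-step calculations and extrapolating” trick, here’s a toy in Python. The solver is a stand-in (plain Euler steps on a test equation whose exact answer is e), not the real software, but the parallel structure is the same: the three runs don’t depend on each other, so each can take its own core.

```python
# Toy of "run 3 independent multi-step solves, then extrapolate".
from concurrent.futures import ProcessPoolExecutor

def solve_year(n_steps, y0=1.0):
    # Euler steps on dy/dt = y over one year -- a placeholder for
    # the model's repeated linear approximation.
    h = 1.0 / n_steps
    y = y0
    for _ in range(n_steps):
        y += h * y
    return y

def richardson(coarse, fine):
    # Euler's error shrinks like the step size h, so doubling the
    # step count lets us cancel the leading error term.
    return 2.0 * fine - coarse

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        # the three runs are independent -> three cores at once
        y8, y16, y32 = ex.map(solve_year, [8, 16, 32])
    first = richardson(y8, y16)
    second = richardson(y16, y32)
    best = (4.0 * second - first) / 3.0   # next-order extrapolation
    print(best)                           # close to e = 2.71828...
```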