Dual core versus dual processor motherboards

What are the advantages of each? Someone once told me that dual core processors offer better performance because the processors could communicate with each other more quickly, but I find that questionable. Why would dual processors still be used in servers? I spent around $1,000 just for my motherboard, two processors, and memory. Surely I couldn’t have gotten the same deal for cheaper with dual core.

The good news is that the cheaper dual-core solution didn’t really beat your dual-CPU setup in terms of performance. The bad news is that performance is practically the same between the two setups, and the dual-core one is much cheaper.

Here is a good article with graphs and everything:

http://www.pugetsystems.com/articles.php?id=23

Wow. It looks like I got hosed.

Which is “better” depends on your definition of “better” and the details of how the systems were designed and implemented. That’s why when serious money is involved, people run benchmarks that are representative of the jobs they intend to run on the computer.

You may be able to use two dual-core processors on your motherboard. That’d definitely beat one dual-core processor.

Processor slots and cores are both valid ways to boost the speed of your machine. You’ll spend more for extra slots because each upgrade will require the purchase of twice as much (or four times as much) CPU, but you’ll also be getting a huge pile of extra processing power.

There is a particular computationally-intensive process we run at my office, and its speed is processor-limited. So for this year’s capital expenditures, we bought a pair of machines to run the software on: each has a four-slot motherboard and four dual-core Opterons running at 2.4 GHz. That’s not really 19.2 GHz of processing power (there are diminishing returns) but the expense up front saves us hundreds of staff-hours per year.

More news. It looks like that article compared the Pentium D dual-core processors with AMD dual-core processors. The Core 2 Duo processors now out from Intel beat Pentium D processors in similar price ranges by quite a bit.
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=8
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=1

The question is, when did you buy it? If you bought it a year ago, things were different.

Is this going to get like razor blades: 1, then 2, then 3, then 4, now 5?

(I want a V-8 Hemi.)

Intel is planning to have 80-core chips in five years.

http://news.com.com/Intel+pledges+80+cores+in+five+years/2100-1006_3-6119618.html

Niagaras already have four.

The fact is, dual core processors are no more expensive to make than single core processors with the same number of transistors, and are actually easier to design. Plopping down an extra core is a lot easier than designing logic to use this extra capacity to improve performance. That’s really the impetus for it - using up the transistors Moore’s Law gives us without killing ourselves with long design cycles. (The alternative is using the transistors for giant caches.)

I know of quad processor server boards with dual core processors on them - not only that, but you can swap your single core board out for a dual core board, and get increased performance at minimal cost.

Oh, and communication on chip is always faster than communication off-chip - but there are many applications where processor to processor communication will be minimal, so this may not be a big win.

OK, I was being a bit facetious, but why didn’t the computer/chip manufacturers start stacking processors a long time ago instead of going on a mad dash to pack everything into one processor?

Admittedly, I don’t know a whole lot about processor architecture.

It’s hard to write software that makes effective use of multiple processors.

It made more economic sense to use the transistors to improve the performance of a single processor.

With multiple processors, you have to deal with cache coherency and other inter-processor communication issues.

Most modern systems have severe issues with main memory bandwidth and latency. Adding more processors just makes a bad situation worse.

Mass-market operating systems (Windows) are not designed to make efficient use of multiple processors.

Licensing schemes often resulted in inflated costs for software on multi-processor systems.
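To give a flavor of why the first point is true, here is a minimal Python sketch (entirely hypothetical names, just for illustration) of the classic shared-counter race that bites naive multi-threaded code. The lock-free version can lose updates when two threads interleave their read-modify-write steps, which is exactly the kind of bug single-threaded software never has to worry about.

```python
import threading

def unsafe_increment(counter, n):
    # counter[0] += 1 is really separate read, add, and write steps,
    # so two threads can interleave and lose updates.
    for _ in range(n):
        counter[0] += 1

def safe_increment(counter, lock, n):
    # Holding the lock makes the read-modify-write effectively atomic.
    for _ in range(n):
        with lock:
            counter[0] += 1

def run(worker, *args, threads=2, n=100_000):
    counter = [0]
    ts = [threading.Thread(target=worker, args=(counter, *args, n))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter[0]

print(run(safe_increment, threading.Lock()))  # always 200000
print(run(unsafe_increment))                  # may come up short
```

The unlocked run may or may not lose counts on any given machine, which is the other half of the problem: the bugs are nondeterministic and hard to reproduce.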

I’d like to add that, for most everyday applications (obviously excluding gaming, intense CAD, and some others) processor speed isn’t much of a stumbling block. The hard drive is the ball and chain most often. I can’t wait until solid-state storage becomes a practical and cost-effective reality.

Yes, I have had this system for almost a year. I had checked several months before that, when dual-core processors were first hitting the shelves, and they were far more expensive than dual-processor systems. I didn’t even bother to look at them when I bought this machine, but someone pointed out that the prices had dropped a lot and were a bit cheaper than dual processors. Too bad I had already spent my money. Maybe some day I’ll buy a couple of Italy processors and have 4 cores.

At which point we will all be saying “dang, I wish these procs were faster!”

I’m not sure if by stacking you mean literally stacking, or designing dual cores. If the first, it’d be the heat. So I assume you mean the second.

Actually there have been multi-processor designs for quite a while. What we call SoCs (Systems on a Chip) are chips composed of a number of “cores” which are predesigned, large, blocks you plunk down and connect up. Typically these have one or more processors - companies like ARM and Tensilica live by providing these. You could even get open source Sparc cores from Sun.

For top of the line processors though, the need for transistors for caches, better instruction scheduling, more instructions, and for the move to 64 bits kept up or exceeded the amount available until just recently. What typically happens is that sometime during the design you discover you have put in too much, and then everything stops while you do a die diet to remove features. Remove the wrong ones and you have a disaster. It happens.

Now with the shrink to 65 nm and the move to bigger wafers, you have a lot more transistors. There’s no point in going to 128 bits, no one has good ideas about how to use transistors to make things faster, wiring lengths are beginning to kill you, and power consumption already is. So dumping another processor in is a win.

That’s the answer in a nutshell.

One factor is that speed was increasing regularly so there wasn’t a need to go multi-core. As they run into technical hurdles with increasing speed, they need to find other ways to get more throughput and multi-core is one way.

An interesting recent development is that IBM’s new POWER6 processor will remain only dual-core, as they are having success ramping speeds up to 5 GHz.

Some of you have brushed upon an important fact about dual cores/processors – their speed is not cumulative. That is, having a 2.16 GHz Intel Core 2 Duo isn’t the same as having a single-core 4.32 GHz chip. No single thread or process will ever run faster than 2.16 GHz on the dual-core machine. The big benefit of multi-core chips, though, is that a single process won’t hog all of the machine’s resources, provided the operating system knows how to schedule processes properly.

A simple example: you want to run two programs, both of which are computationally intensive but only single-threaded. On a single core, the operating system will schedule them both on the same processor, so each will run half as fast as it would by itself (sticklers: yeah, there’s other overhead; let’s just say 50%, shall we?). But if the OS is multiprocessor-aware, it will schedule one application on one core and the other application on the other core. Now they both run at 100%.
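A rough sketch of that scenario in Python (the function names are made up for the example): two independent CPU-bound jobs handed to a pool of two worker processes, so an OS with two cores is free to put one job on each. Whether you actually see the ~2x wall-clock win depends on the machine, but the results come out the same either way.

```python
from multiprocessing import Pool

def crunch(n):
    # Stand-in for a CPU-bound, single-threaded program.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [200_000, 200_000]          # two independent "programs"
    with Pool(processes=2) as pool:    # one worker per core
        results = pool.map(crunch, jobs)
    print(results)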

Most processor-intensive programs, though, are written to be multi-threaded in tasks where it’s appropriate. Now the OS’s scheduler can actually make different threads run on different cores. So if you have a dual-processor Xeon machine with two cores per chip, there are a total of four processors that any of the threads can run on, and the OS schedules what goes where. However, the program must be written to use multiple threads. If it is, then there’s a perceived increase in speed because the tasks run in parallel on different processors rather than competing for the same resources on a single one.
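Here is roughly what that looks like from the program’s side, sketched with Python’s `concurrent.futures` (the chunking scheme is invented for the example): one task is split into independent pieces, the pieces go to a pool of workers, and the OS is free to schedule each worker on a different core.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # One worker's share of the whole job.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into one chunk per worker; each chunk can land
    # on its own core if the machine has that many.
    step = n // workers
    chunks = [(k * step, (k + 1) * step if k < workers - 1 else n)
              for k in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

The point of the sketch is the split: the programmer has to carve the work into independent pieces before the extra cores can help, which is exactly the “must be written to use multiple threads” caveat above.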

If you’re just surfing the web and reading email, you won’t see any appreciable difference in speed unless your OS is running all kinds of background services. If you’re trying to compress a DVD to Xvid as well as burn a CD as well as convert MP3s to AAC as well as use Photoshop, then you’re likely to find that you’re not significantly slowed down in any of these tasks, but they won’t necessarily be faster than if you were only doing a single task by itself.

Sticklers: yeah, there’re memory bottlenecks and other resources that have to be shared and divvied up, but I’m trying to keep this basic.