Duo Core Versus Dual Core

Can someone please explain the difference to me (in pretty simple terms).

And as an added extra: does the speed of the chip indicate much when it is a duo/dual core, as compared to an older-style Pentium chip?

In simple terms, a dual core chip is two processors, side by side, on one chip. When you have your computer perform multiple tasks, such as watching a video while typing a summary in your word processor, the two cores can split those tasks between themselves and multitask more effectively. Some newer programs are designed with this in mind and can split a single task into multiple processes, such as doing the video decoding on one core and the audio decoding on another. You will see performance improvements akin to having two processors only when you're doing the sorts of tasks designed for this, and some older operating systems will not show any significant gains from additional cores.
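If it helps to see the idea in code, here's a toy Python sketch (my own made-up example, not anything from a real video player): two independent chunks of work handed to two worker processes, so a dual core chip can put one on each core.

```python
# Toy sketch: two independent tasks run as separate processes, so a
# dual-core chip can schedule one on each core. busy_sum is a stand-in
# for real work like decoding video on one core and audio on another.
from multiprocessing import Pool

def busy_sum(n):
    return sum(range(n))

if __name__ == "__main__":
    with Pool(processes=2) as pool:  # one worker per core
        results = pool.map(busy_sum, [1_000_000, 2_000_000])
    print(results)
```

On a single-core machine the same code still runs; the two workers just take turns instead of running side by side.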

Although you did not ask, four-core chips work in the same fashion, but there are even fewer programs you might run that would engage all four cores. This makes four-core chips even more of a niche product, for now.

"Core Duo" is the same as "dual core"; it's just that Intel can trademark the name "Core Duo." (As well as "Core Solo," "Centrino Duo," and so on.) Intel confuses matters a bit by using variants of the same name for its two- and four-core chips ("Core 2 Duo," "Core 2 Quad," et al.)

Speed is not in and of itself an indication of the number of cores a chip contains. You pretty much have to rely on the box to tell you whether it’s dual/tri/quad/hex/googol core.

Thanks twhitt. I sort of understood that part.

My question was really about the differences between duo and dual. My computer has an Intel® Core™ 2 CPU. A friend brought over a laptop with a "Duo Core" and I could not work out why it was so slow.

(It did have half the RAM).

Core Duo is a marketing term used by Intel to describe their dual core mobile processors. As noted by twhitt, dual core processors combine two processing cores on one piece of silicon.

The clock speed is still important, as it defines how fast each core can operate independently. Many applications are still single-threaded and only use the resources of a single core. These applications benefit from a multi-core system only insofar as the OS can allocate more CPU time to their single thread. Multi-threaded applications can benefit from multiple cores as well as from raw clock speed.
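To make the single-threaded vs. multi-threaded distinction concrete, here's a small Python sketch (an invented example, using worker processes since that's how Python spreads CPU-bound work across cores): the same sum computed in one thread of execution, and then split across two workers.

```python
# Sketch: the same workload written single-threaded and split across
# worker processes. The single-threaded version can only ever occupy
# one core, no matter how many cores the chip has.
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    return sum(i * i for i in chunk)

def single_threaded(n):
    return work(range(n))

def multi_process(n, workers=2):
    # Split 0..n-1 into interleaved chunks, one per worker.
    chunks = [range(w, n, workers) for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(work, chunks))

if __name__ == "__main__":
    n = 100_000
    # Same answer either way; the split version can use both cores.
    assert single_threaded(n) == multi_process(n)
    print("ok")
```

The point is the structure: unless someone wrote the `multi_process` version, the second core sits there doing nothing for this program.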

Si

Thanks Mindfield. And sorry- missed your second part. Will need to read that again. I may need a translation from Hebrew. :smiley:

As Intel, AMD, and other semiconductor companies kept figuring out how to make transistors smaller and smaller, meaning they could cram more and more of them onto a CPU of the same size, they started to run out of things to use them for. Then, a couple of years back, they got the bright idea to cram two whole CPUs (or cores, as they call them) onto a single chip. This makes anything that is easy to run in parallel (like video encoding or image editing) run much faster. That is what a dual core chip is. There are also triple- and quad-core chips, and Intel and AMD both have plans to throw even more cores onto their CPUs in the future.

After Intel came out with a dual core version of the Pentium 4 (logically called the Pentium D), their marketing folks decided that confusing their customers would be a good way to make money. They decided to name the dual core version of the Pentium M (which is a completely different, and much better, design than the Pentium 4; for that matter the Pentium 3 was a better design than the Pentium 4, and the Pentium M is basically a souped-up version of the Pentium 3) the Core Duo. Which makes about as much sense as if Ford came out with a truck called the V8. Then Intel came out with another new chip design and decided to call it the Core 2 Duo. Thankfully, the Core 2 Quad is actually a pair of Core 2 Duos. And to make things even more confusing, Intel decided that their budget line of Core 2 Duo chips, which have less L2 cache, would be named Pentium Dual-Core. And even better, Intel's next generation chips coming out later this year are going to be called Core i7s.

As for performance, the Pentium Dual-Core, Core Duo, and Core 2 Duo chips are much faster at the same clock speed than the Pentium 4 and its dual core versions. As a rough estimate, a 2.0 GHz Core 2 Duo, a 2.4 GHz Pentium Dual-Core, and a 2.5 GHz Core Duo would each be about as fast as a 3.5 GHz Pentium D.
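Those rough equivalences imply a per-clock advantage you can back out with a little arithmetic (my own back-of-envelope calculation; the GHz figures are just the estimates from the paragraph above):

```python
# If each of these chips roughly matches a 3.5 GHz Pentium D, its
# implied per-clock advantage is simply 3.5 divided by its own clock.
pentium_d_ghz = 3.5
equivalents = {
    "Core 2 Duo": 2.0,        # GHz needed to match the Pentium D
    "Pentium Dual-Core": 2.4,
    "Core Duo": 2.5,
}
for chip, ghz in equivalents.items():
    factor = pentium_d_ghz / ghz
    print(f"{chip}: ~{factor:.2f}x the work per clock of a Pentium D")
```

So by these (very rough) numbers, a Core 2 Duo does roughly 1.75x the work per clock tick of a Pentium D.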

Also note that Intel (and AMD, for that matter) usually list their processors by model number these days, since there can be so much variance in how fast processors are, clock for clock. Usually a higher model number means a faster chip, but not always.

That is probably part of the answer. The other part is the mobile processor bit. Mobile processors can drop their clock rate to optimise battery life, and use other power-saving/heat-reduction approaches that compromise performance. Core Duo chips have a slower front-side bus than Core 2 processors. The Core Duo may share some silicon functionality between cores as well (I think AMD used fully independent CPU cores as a selling point for their X2 processors). Finally, the Core Duo uses off-chip memory management, which is slower.

Si

Thanks guys. I am glad I asked this- I will go and run a warm bath now.

I have reread all this and you guys are scary with your knowledge. Thanks very much.

Multi-core processors are not necessarily on the same piece of silicon, even if they are in a single package. Many of them are two silicon chips in a single package. This tends to improve manufacturing yields at the cost of slowing down communication between the processor cores.

That kind of design is called SiP (System in Package) and, as far as I've seen, isn't used much for multiple CPUs but instead for packaging chips that are made with inherently different processes, like the DSP, CPU, and RF parts of a cellphone.

As for yields, that's kind of iffy. Though it is true that a smaller chip has better yields, a big problem with SiP is the KGD (Known Good Die) issue. The chips that go into a SiP are never packaged, that is, put into the ceramic container that we all know and love. They can only be tested on the wafer. This is done, but there are issues. You can't do a functional test at the wafer level, since it is hard to get enough power to the chips because of the high density of power bumps. In the normal flow you first do a wafer test, package the passing parts, then do a more extensive package test, and after that often burn in the parts to deal with reliability problems early. You can't do the packaged-test and burn-in steps with unpackaged parts. You can test in the SiP, but the fallout will be greater, you risk throwing away an expensive SiP if one chip fails, and the test is usually not as good as what you can do at the package level. For relatively small chips, like in cellphones, the yield is good enough that this isn't a problem, but I wouldn't want to package multiple CPUs this way.

In fact, the clock speed of multi-core systems isn't going to go up as fast as that of single core chips. One of the big reasons for going to multiple cores is power. Chip power consumption is a function of clock speed, and it was getting harder and harder to get rid of the excess heat. Multiple cores allow the chip to give better performance while ratcheting down the clock speed. It also takes less electricity, which is why these are being marketed as green chips. The Sun Niagara 1, with 8 cores, requires less than 80 W, which is nothing these days.
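The power argument is easy to sketch numerically. Dynamic power scales roughly as capacitance times voltage squared times frequency (the classic C·V²·f rule of thumb); the specific capacitance, voltage, and clock numbers below are purely illustrative, not real chip specs:

```python
# Rough dynamic-power sketch: P ~ C * V^2 * f. Slower cores can also
# run at lower voltage, so two slow cores can burn less power than one
# fast core while doing comparable total work. Numbers are made up.
def dynamic_power(c, v, f):
    return c * v * v * f

one_fast = dynamic_power(1.0, 1.3, 3.5e9)       # one 3.5 GHz core at 1.3 V
two_slow = 2 * dynamic_power(1.0, 1.1, 2.0e9)   # two 2.0 GHz cores at 1.1 V
print(f"one fast core: {one_fast:.2e}, two slow cores: {two_slow:.2e}")
```

The V² term is why ratcheting the clock (and voltage) down buys so much: the two-slow-core configuration comes out ahead on power even with more total clock cycles per second.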

Which OSs do this?

To take advantage of multiple cores, is there something that needs to be done at the BIOS/OS level?

Intel has made many CPUs that are two dies in one package - every Pentium D chip, and every Core 2 Quad.

Moreover, since we're gettin' all technical, two cores does not mean twice the speed unless applications are written to take proper advantage of multiple cores. What it does mean is that system performance will improve even for applications that use only one core, because there's a whole other core free to do a user's bidding while the first chugs away at whatever it's doing. The single-core app will take just as long, but at least you don't have to fight a sluggish system while it runs. An app that is written to handle multiple cores, on the other hand, can get things done in almost half the time.
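That "almost half the time" is basically Amdahl's law: the serial part of a program caps the speedup no matter how many cores you add. A tiny calculation (the parallel fractions below are assumed examples, not measurements):

```python
# Amdahl's law sketch: speedup on `cores` cores when only a fraction p
# of the program's work can run in parallel. The p values are made up.
def speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 2 cores -> {speedup(p, 2):.2f}x, "
          f"4 cores -> {speedup(p, 4):.2f}x")
```

Even an app that's 90% parallel gets only about 1.8x from a second core, which is why "almost half the time" is the best case.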

They're getting better with the clock speeds on multi-core chips though, so at the very least they're still trying to keep up with Moore's Law. :slight_smile:

The question is more like "Which OSes do not do this?" Windows 98/ME were the last MS operating systems restricted to a single processor. Win2K supported 2 CPUs, while 2K Server and Advanced Server supported 4 (and actually, I think those limits were controlled by registry values). Windows XP and Vista schedule threads across multiple CPUs, as do Linux and Solaris. I don't know about the Mac OSes, but I'd bet they do as well.
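If you want to check what your own OS is doing, one quick sketch (Python, nothing BIOS-level) is to ask how many logical CPUs the OS exposes:

```python
# Quick check: how many logical CPUs does the OS expose? If this
# reports 1 on a dual core box, the OS/kernel isn't using both cores.
import os

print("logical CPUs visible to the OS:", os.cpu_count())
```

As for the BIOS: generally nothing special is needed on a modern board, though some BIOSes do have options to disable extra cores.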

Speaking strictly “per-process”, it’s true that no individual single-threaded process will get a performance boost from multiple CPUs. But if you have 2 CPU-intensive single-threaded processes, they’ll both benefit from another CPU. They won’t have to compete with each other as much. There are applications like Google Chrome, which are a collection of (mostly) independent processes, and those apps benefit from additional cores.

Moore's Law talks about transistor count, not clock speed; processing speed is merely linked to that. In terms of transistor count we're right on track, and in fact multiple cores make it easier to use all those transistors we have available. Before, they were being stuffed into ever-bigger caches.

The slowdown we've seen lately hasn't been technical as much as economic, since the price of a fab for a new process node is getting higher and higher. About the only ones left who can afford it are Intel and the Taiwanese foundries, TSMC and UMC. Similarly, there is a lot of resistance to moving to larger wafer sizes, which it seems Intel, and no one else, wants.

Depending on how the OS handles process/thread allocation and timesharing on a core, a multi-core system may or may not provide an advantage for an intensive single-threaded application. If the OS does not (or cannot) dynamically move processes (particularly OS and system tasks) from a heavily loaded core to a less loaded core, then your intensive process will get less CPU time than it would if it had a whole core to itself. I know that in XP you can set processor affinity for processes, but I don't know how dynamic the CPU load balancing is if you do not set things explicitly.
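For what it's worth, here's what setting affinity looks like programmatically. This is a Linux sketch (`os.sched_getaffinity`/`os.sched_setaffinity` exist only there); on XP the equivalent is Task Manager's "Set Affinity" menu, which uses the SetProcessAffinityMask API under the hood:

```python
# Sketch of pinning a process to one core (Linux-only APIs; guarded so
# it's a no-op elsewhere). CPU 0 here is just "the first allowed core".
import os

if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)    # cores this process may run on
    print("may run on cores:", sorted(allowed))
    one_core = {min(allowed)}
    os.sched_setaffinity(0, one_core)    # pin to a single core
    assert os.sched_getaffinity(0) == one_core
    os.sched_setaffinity(0, allowed)     # undo the pin
```

Pinning like this guarantees the process never pays a core-migration penalty, at the cost of losing the balancing Si describes.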

Si

Regardless of where the thread comes from, it's going to have to yield the CPU eventually, and when it wakes up the scheduler has to decide where to send it. On any OS there's a performance hit to dispatching that thread onto a new CPU; for one thing, the new CPU won't have any of the thread's data in cache. But it also doesn't necessarily make sense to keep threads on the same CPU just for the sake of keeping them there if another CPU is sitting idle.

I don't know Windows OSes as well as I know Solaris and Linux, but what I've found on the web suggests they behave similarly. That is, Windows "tries" to keep threads on the same CPU, but that's not guaranteed to happen without setting the affinity. The Linux and Solaris schedulers also can, and sometimes will, redispatch a thread onto a new CPU. But it's a balancing act between run-queue lengths, priorities, the CPU groups a process belongs to, etc.