What's the SD on XP/Vista, multithreading, and Dual/Quad Core processors?

I have been telling a friend at work that I am thinking about getting a Dell XTI quad core desktop PC.
He keeps trying to put me off by saying that nothing, or very few things, ‘use’ the four cores, and that programs aren’t designed to.

My instinct tells me that any multi-threaded application is at the mercy of the OS when it comes to choosing how many ‘processors’ to use. In other words, if there were twenty-seven cores the OS would divide the app between all 27 if needed. And if there are four, then the app will be divided between the four.

Or to put it another way, a multi-threaded app or game will have twice as many FLOPS (be processed twice as fast) on an Intel Core 2 Quad running at 2.4 GHz than on an Intel dual-core running at the same 2.4 GHz.

What is the score? Who is right? Me or him?

And if he’s right that’s mute as eventually there will be things that use four cores and I will have the machine

I don’t know about support beyond 2 cores, but you can fine-tune XP’s handling of 2 cores (you can, IIRC, go so far as to dictate that program.exe will always use core1 or whatever, though I’m not sure why you’d want to do that), so I know that it does really support 2 cores. I would say that whether the program properly hands off CPU allocation to the OS will vary from program to program.

To my knowledge, the software has to be written to take advantage of all of the cores. The OS may be able to delegate but if the software is only written for one core, then it’s only going to use one core. A lot of games now aren’t taking full advantage of the multicore processors.

Also, only Vista Ultimate, Business, and Enterprise support multi-processor setups (as far as Vista goes).

I love my dual-core processor. I doubt that any single application I use takes advantage of them, but I very seldom am running just one application.
Instead, what I see is that my system stays responsive and useful while I am running a CPU-hog application. So I can be converting video files to burn onto a DVD while I am writing or programming, and the system does not seem noticeably slower than before I started the video conversion.
A quad processor would let me do multiple CPU-intensive processes at once (e.g. converting separate video segments at once). So ask yourself how often you are likely to have these kinds of inherently parallel, CPU-intensive tasks to perform.

Software does indeed have to be multiprocessor-aware in order to take advantage of the extra horsepower by itself, but that doesn’t mean that multi-core processors are useless otherwise. Windows is still able to delegate tasks to one or the other core, and as was mentioned you can set an “affinity” for a particular process – that is, tell the process to use a particular core. In this manner, you get true multitasking – the ability to process two or more instructions simultaneously. Multiprocessor-awareness is growing in software, though at the moment you are more likely to find it in late-version processor-intensive apps such as 3D design/rendering programs, audio/video editing/encoding apps, and so on. Some games are becoming multiprocessor-aware as well, particularly 3D games that tend to be very demanding of both video and processor performance. As multi-core processors become more prevalent (and since they’ve gotten quite a bit cheaper since their initial introduction, this is happening at a much faster rate) you’ll see more and more software being released and updated to take advantage of them. In a couple of years, it is unlikely that single-core processors will still be available – or if they are, they will be aimed at the budget-conscious consumer in the sub-$100 range.

If the program is multiprocessing (uses multiple threads or processes internally), then it can be effectively spread over multiple cores. Some are, some aren’t, and it’s difficult to predict which ones might be. From a programming standpoint, it doesn’t make sense to create threads unless there are tasks which are useful to perform in parallel (complex computations while a user interface stays responsive is a common one). So it might be that a program would use 2 processors efficiently, but have no use for a third or a fourth one.
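That common case – a complex computation running on one thread while the user interface stays responsive on another – can be sketched in Python. The worker function and names below are purely illustrative, not from any particular app:

```python
import threading
import time

result = {}

def long_computation():
    # Stand-in for a CPU-heavy task (e.g. a video conversion).
    total = 0
    for i in range(1_000_000):
        total += i
    result["value"] = total

# Run the heavy work on a background thread...
worker = threading.Thread(target=long_computation)
worker.start()

# ...while the main thread remains free. In a real app this loop
# would be processing UI events instead of sleeping.
while worker.is_alive():
    time.sleep(0.01)

print(result["value"])  # 499999500000
```

On a multi-core machine the OS can schedule the worker thread on a different core from the main thread, which is exactly the "UI stays responsive" benefit described above.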

All of which is largely irrelevant.

Bring up task manager sometime, click the button for “show processes from all users,” and notice how many there are. At “idle” XP might be running 60 or more processes (admittedly many of them aren’t doing anything), and Vista is even worse, since it has to spy on everything you’re doing to make sure that Hollywood doesn’t mind. These processes take up resources. Even if they’re each single-threaded apps, they can be run on different processors (or cores, same difference for the most part). Which means that they’re using up less of the processor that your program is running on, making it better off even if it’s single-threaded.

Which is pretty much your position. A couple of caveats, though: quad-core processors are often slower (in GHz) than their dual-core counterparts. In that case you might see slower behavior on the quad. Even if they’re even, though, the more processors you can throw at a problem, the more likely it becomes that something else (memory, network speed, disk speed, video card, hamster food) becomes the bottleneck for your computing. Once that happens, the extra processors are wasted. Potentially, having more processors allows you to consume some resource so fast that ALL the processors are slowed waiting for it, whereas a smaller number of processors wouldn’t have been able to queue as many requests, and any given app would have run faster. But these are minor quibbles. All things being equal, a quad-core processor will outperform a dual-core processor of the same speed at almost everything, but for unthreaded applications, the difference may be unnoticeable.
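The diminishing returns described here are usually estimated with Amdahl’s law: if only a fraction of a program’s work can run in parallel, the serial remainder caps the overall speedup no matter how many cores you add. A quick sketch:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# If 50% of a program's work can use extra cores:
print(round(amdahl_speedup(0.5, 2), 2))  # 1.33
print(round(amdahl_speedup(0.5, 4), 2))  # 1.6
# Even with infinitely many cores, speedup is capped at 1/(1-0.5) = 2x.
```

So going from two cores to four helps a half-parallel program much less than the core count suggests – which is why unthreaded or lightly-threaded apps barely notice the quad.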

You will certainly be able to get processes to run on all four cores, but the real question is whether there is any benefit to you in doing so. The answer depends on what you use your machine for, but is almost certainly “no”. A multi-threaded app will only be able to use 27 cores if it has 27 threads it needs to run simultaneously, and I guarantee you have no app like that. Neither the OS nor the machine can subdivide one thread into many, so your friend is right in that respect; the utility of multiple cores is limited by how often the apps you run split a task into multiple threads which benefit from being run at the same time. Two threads can run on two cores, but if one of them depends on output from the other there is no benefit in doing so.
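The dependency point – two threads where one waits on the other’s output gain nothing from a second core – can be made concrete with a toy Python hand-off (the names here are illustrative):

```python
import threading
import queue

# Thread B needs thread A's output, so the two effectively run
# back-to-back no matter how many cores are available. The queue
# hand-off makes the dependency explicit.

handoff = queue.Queue()
results = []

def producer():
    value = sum(range(100))   # stage 1 of the work
    handoff.put(value)

def consumer():
    value = handoff.get()     # blocks until the producer delivers
    results.append(value * 2) # stage 2 depends entirely on stage 1

a = threading.Thread(target=producer)
b = threading.Thread(target=consumer)
b.start()
a.start()
a.join()
b.join()
print(results[0])  # 9900
```

Even started on separate cores, the consumer just sits blocked until the producer finishes – total wall-clock time is the same as running the two stages on one core.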

You mean “moot” (irrelevant); “mute” means “can’t speak”.

The results with my dual-core 6700 @ 2.66 GHz XP box compared with my single-core 3.2GHz XP box have been quite disheartening so far. I bought and built the dual-core box primarily for faster video rendering using Ulead Video Studio 10 with the “DivX 6.6.1 codec (2 logical CPUs)”, and it’s faster on the 3.2 GHz single-core.

I suppose this might mean that even though it’s the latest version I’m aware of, it still hasn’t been optimized for a dual or quad core processor, even though the codec name implies the dual core would be faster.

If Lobsang is working in a technical field, such a guarantee might not be entirely safe. I’ve run jobs which could have been efficiently split into thousands or tens of thousands of threads, and some of the other physicists here have had jobs which could be split into hundreds of millions.

But yeah, a word processor or web browser is only going to run somewhat faster on two, and probably wouldn’t gain any further benefit from more than two processors.

The codec name means it can make *some* use of a second processor, not that it will be faster. If the app and/or codec are not designed so that the majority of the work can be run in two simultaneous threads, neither of which depends on the outcome of the other, then your single-core machine will be faster. Mostly, any one process will be running on one or the other CPU only – both of which are slower than the CPU in your single-core machine.

Where you may get an advantage is if you were running some other app at the same time as this one; the speed of doing the two things simultaneously will be faster on the dual CPU machine than it would be on the single.

I know this was an XP/Vista thread to start off with, but is there any way to assign priority to a core in OS X? I often get slowdowns when trying to run XP in Parallels and, while I’m sure it’s pretty much a lack of RAM, I was thinking it might be worth trying to give Parallels priority access to one core of my processor.

AFAIK, no. The kernel will schedule threads as it sees fit. Get more RAM.

I had Truespace 6 running on my hyperthreaded Pentium 4 laptop, and I could visually see that when rendering an image it was using two cores (it showed two lines of rendering that were kind of racing each other).

(edit: When I say ‘cores’ I mean threads. I am not sure how much right the HT P4 has to call itself dual-core)

When I installed the same app (a demo) on my newer single-thread/core (AMD Turion) laptop, it showed only one line (but it rendered faster, because the processor is much newer and faster than the Pentium 4, even at a lower clock speed).

I now have an app which uses a renderer called ‘mental ray’, which is apparently designed to use multiple cores (including ‘processor farms’), so I reckon if I shell out on a quad core I’ll have an app that will intensively use all four cores.

Even if not… I still find the ability to be doing many things at once while the system remains responsive appealing. At the moment on my single-core alienware laptop it can be annoying if I am merely doing two things at once.

I know it’s off topic, but how does UNIX/LINUX do with multiple cores? Can you assign various tasks to separate cores or does the OS do so automatically? I’m thinking of building a box with either multi-core CPUs or possibly multiple Xeon processors. I have a serious dislike of Vista and was thinking of a Unix OS instead.



In general use, a UNIX OS will make decisions about how to allocate processes to resources - the OS scheduler is optimised to make better and faster decisions about where something should run than an operator. There are tools for priority enhancement (e.g. nice). Also, UNIX apps have a longer history of running on multi-core systems, and the multithreading libraries seem to be more developed and more likely to be used, but YMMV, depending on what you want to do.
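For what it’s worth, on Linux a process can inspect and pin its own CPU affinity straight from Python’s standard library. This is a Linux-only sketch – `os.sched_getaffinity`/`os.sched_setaffinity` don’t exist on Windows or OS X:

```python
import os

# Which cores is this process currently allowed to run on?
allowed = os.sched_getaffinity(0)  # 0 = the current process
print(f"may run on cores: {sorted(allowed)}")

# Pin ourselves to a single core (like Task Manager's "set affinity").
one_core = {min(allowed)}
os.sched_setaffinity(0, one_core)
print(os.sched_getaffinity(0) == one_core)  # True

# Restore the original mask so the scheduler is free again.
os.sched_setaffinity(0, allowed)
```

As noted above, though, the scheduler usually places things better than you will by hand; manual pinning is mostly useful for benchmarking or isolating a misbehaving process.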

Splitting apps into meaningful multiple independent threads is not trivial. For example, games. You would think that game developers would be all over multicore like a rash, but it’s not that easy. In a FPS, your primary CPU task is rendering the view (3D cards notwithstanding). This is a pretty linear task. You might be able to render multiple frames in different threads, but you can’t get too far ahead, because it is a realtime responsive system. You can split off things like input handling and enemy AI into separate threads, but these are also tied into the rendering thread and have to stay tightly synchronised (again, for the realtime response). They also use much less CPU than the renderer, so there is not a balance between multiple CPUs. Sometimes, trying to make tasks run independently introduces so much co-ordination overhead that the end result is a slowdown, not a performance increase.

Tasks that do suit multicore/multiple cpu systems are things like raytracing/CGI - where an image can be split into multiple independent renderers and stitched together, or frames can be distributed across multiple threads. Places like Weta and ILM use massive render farms with thousands of cores, all working full speed on small chunks of the whole.


si Blakely

Thanks for the info. I’m not really concerned about FPS games as there aren’t that many of them for UNIX anyway. I’ll be doing email, browsing, editing video, and Office type activities. Some programming as well. I just like to stay up to date on hardware and have a snappy system regardless of what demands I make on it.
The other irksome thing is that Windows (all varieties) seems to gradually slow down regardless of what I do and I’m tired of this. I know, I could scrape the disk and re-install but I feel like I shouldn’t have to be doing this.

Thanks again


You have to be a bit brave to move to Linux - there is still some bleeding with the leading edge. I’ve been Linux mostly for 6 months or so (Kubuntu) and find that almost everything works better for me. My dual core AMD works great (even running hefty VM systems) and the only frustration I have is that too many Windows systems leave removable NTFS drives dirty (and the Linux NTFS drivers will not mount the dirty FS), and a small quibble about the behaviour of the KDE SMB mapper. Oh, and hibernation/suspend failures piss me off, too.


I have a (single processor) system running Fedora 5 and love the thing. As you say though, there are a few “gotchas” here and there. The only real issues I have are the endless, niggling compatibility problems with Windows systems.

Thanks again


If you run old DOS programs, a dual core processor makes things much more responsive. In a single-core processor, a DOS program can take 50% CPU just polling for input.

It can depend on the application. The calculation I referred to with thousands of threads was basically constructing a lookup table for thousands of input values, with the calculation at each point independent of the calculation at every other point. So I could have just assigned each data point to a different processor, if I had them, and have benefited from every one. Admittedly, running 10000 data points on 10000 processors would still not have been 10000 times faster than running them on a single processor, since some points took longer than others, and a bunch of the processors which were assigned quick points would be sitting idle waiting for the slow ones to finish. But for a mere 16 or 32 cores, I really would have gotten 16 or 32 times the speed (or very nearly so), since the processors which finished early would just grab more data points to work on.
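That "grab more data points when you finish early" scheme is exactly what a worker pool gives you. A minimal Python sketch of the pattern (using threads for brevity – a real CPU-bound table would use processes, since Python threads share one interpreter lock, but the work-distribution behaviour is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(x: int) -> int:
    # Stand-in for the expensive calculation at one data point;
    # each point is independent of every other point.
    return x * x

points = range(10)

# Four workers pull points from the shared list; any worker that
# finishes early simply grabs the next unclaimed point.
with ThreadPoolExecutor(max_workers=4) as pool:
    table = list(pool.map(evaluate, points))

print(table)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because slow points and fast points get interleaved automatically, nothing sits idle until the very end – which is why a modest number of cores gets nearly linear speedup on this kind of job.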

But that’s pretty much the ideal case for multiple processors: multiple independent tasks. Some other apps might not be parallelizable at all, and some might be parallelizable, but it’d be very difficult to figure out how, and might require completely shredding the code and putting it back together in an entirely different way.