Recent articles in the press indicate that Windows 7 will come in both 32-bit and 64-bit versions. I’m wondering why. Surely it makes more sense to simply switch over entirely to 64-bit: Vista is the transitional product, and CPUs have been 64-bit for some years now. Surely MS are just creating more work - and problems - for themselves?
Sure, it will have to run 32-bit apps, but MS has plenty of experience with thunking, and you could always run a 32-bit VM.
Have CPUs been 64 bit for some years now? I didn’t realize that; I thought it had only been a few years. Being 64 bit could mean several things: having MOV instructions that work 64 bits at a time, having other instructions that take 64-bit arguments, having 64-bit CPU registers, having a 64-bit data path between the CPU registers and internal cache, having such a path to RAM, or having 64 bits of RAM address space (though presumably not all of it populated for many years to come). There must be others, too. Which of these things have been out there for a while?
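For what it’s worth, a couple of those meanings are easy to check from software. Here’s a minimal C sketch (my own illustration, not something from any of those articles) that prints the pointer and integer widths a given compiler targets:

```c
#include <stdio.h>

int main(void) {
    /* Pointer size shows the address-space width the compiler targets;
       sizeof(long) and sizeof(size_t) hint at the native word width. */
    printf("pointer: %zu bits\n", 8 * sizeof(void *));
    printf("long:    %zu bits\n", 8 * sizeof(long));
    printf("size_t:  %zu bits\n", 8 * sizeof(size_t));
    return 0;
}
```

One wrinkle: on 64-bit Windows, long stays 32 bits, so the pointer size is the more reliable indicator of what the OS considers “64-bit”.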
Just a thought about 64 bit computing: there is something nice that happens when you get up to 64 bits, and I don’t think there’s any other step that will be this significant. 64-bit integers can be used for counting anything a computer needs to count. You can count a gigahertz clock and use the result for a calendar that will take more than a lifetime to cycle. Such integers can be loop indices that no computer built in the next few years could ever overrun. You can forget about unsigned integers, too, because the same can be said of a 63-bit integer. And 64-bit IEEE floats are the standard representation for non-integer mathematical values in every programming environment I know of. People will sometimes reach for 32-bit floats instead, if the space savings warrant it, but it seems to me that few bother. I personally have used 80-bit floats in programming, because that’s what the FPU is using (why throw away those expensive bits?), but it’s a big hassle. So 64-bit arguments represent a kind of plateau in simplicity and performance. All sorts of hassles (and opportunities to screw something up) fall by the wayside when you adopt 64-bit arguments. A system that read and wrote and moved and did operations on those arguments in assembly language would be neat and tidy in a very appealing way.
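To put a rough number on the “more than a lifetime” bit, here’s a back-of-the-envelope sketch in C; the 1 GHz tick rate is just my assumption for illustration:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* How long does a 1 GHz counter take to overflow a signed 64-bit int?
       (Illustrative arithmetic only; the clock rate is an assumption.) */
    const double ticks   = (double)INT64_MAX;     /* about 9.22e18 */
    const double hz      = 1e9;                   /* 1 GHz         */
    const double seconds = ticks / hz;
    const double years   = seconds / (365.25 * 24 * 3600);
    printf("Roughly %.0f years before the counter wraps.\n", years);
    return 0;
}
```

That works out to roughly 292 years, which is why you can stop worrying about a 64-bit loop index ever overrunning.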
This is simply not true. Applications that require large amounts of memory benefit from 64-bit, many games consoles are 64-bit, 64-bit allows greater colour precision, and PC games are going 64-bit (Crysis, Far Cry, Flight Simulator, etc.), as are graphics packages and video editing (want to hold a whole DVD in memory? 4.7 GB is more than a 32-bit address space can map, so that means 64-bit).
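Just to make the DVD point concrete (my own arithmetic, not from the post above): a 32-bit address space tops out at 2^32 bytes, which is less than a single-layer DVD image.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 32-bit address-space ceiling vs. the size of a single-layer DVD image. */
    const uint64_t limit32 = 1ULL << 32;      /* 4 GiB = 4,294,967,296 bytes */
    const uint64_t dvd     = 4700000000ULL;   /* ~4.7 GB, decimal            */
    printf("32-bit ceiling: %llu bytes\n", (unsigned long long)limit32);
    printf("DVD image:      %llu bytes\n", (unsigned long long)dvd);
    printf("Fits entirely in a 32-bit address space? %s\n",
           dvd <= limit32 ? "yes" : "no");
    return 0;
}
```

And in practice the usable per-process limit on 32-bit Windows is even lower, 2 GB by default, so the gap is bigger than the raw numbers suggest.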
Further, remember that we are talking about an OS that will be launched in 2-3 years, not tomorrow.
You might find this article about gaming performance from Corsair interesting.
All that basically boils down to one benefit, the larger address space, and maybe what Napier said about floating point numbers. I don’t understand why you mention greater colour precision. 32-bit systems are quite capable of dealing with 64-bit values, and besides, who needs 64 bits for colours?
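On the “32-bit systems are quite capable of dealing with 64-bit values” point, here’s a minimal C sketch of what I mean: 64-bit integer types have been in C for ages, and a compiler targeting a 32-bit CPU just splits the work across register pairs, at some cost in speed compared with a native 64-bit chip.

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* 64-bit arithmetic compiles and runs on a 32-bit target; the compiler
       emits multi-word operations instead of single 64-bit instructions. */
    uint64_t a = 0x00000001FFFFFFFFULL;   /* larger than any 32-bit value */
    uint64_t b = 3;
    printf("a * b = %" PRIu64 "\n", a * b);   /* prints 25769803773 */
    return 0;
}
```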
I think we’re getting into diminishing returns territory with 64-bit CPUs. Hopefully that will be it and nobody will be trying to persuade us in five years that we must all “upgrade” to 128-bit.
Not so sure about that. By the time Windows 7 comes out, we could be looking at enough power in the basic home computer to easily run a VM without much lag at all, especially if you could dedicate, say, three gigs of RAM to the host, one gig to the guest, and better yet one or more CPU cores to each. Heck, I’ve got a dual-core MacBook with two gigs of RAM that routinely runs Parallels, and I think the slowest part is the USB 2 connection (since I’m running it on an old IDE external drive to save space). It’d be better if I actually dedicated a core to each OS, but it still runs pretty well.