32 or 64 bit Windows?

The problem with XP 64-bit was that it just wasn’t ready. This was their first real foray into 64-bit, and it was plagued with problems. And then you had compatibility problems due to 64-bit drivers not being mainstream. It was a lousy OS for consumers.

Similarly, the big jump to Vista, which reinvented a lot of things even for 32-bit, wound up not being fully ready. You had driver problems in both the 32-bit and 64-bit versions. And Microsoft assumed computers would be more powerful than they were, meaning Vista on a typical Windows XP machine ran too slow. And, even on more powerful machines, Vista seemed slow because it didn't optimize 2D graphics.

That’s why XP held on. It wasn’t some decision by Microsoft. They just stuck with the market, which wanted reliable XP over unreliable and slower Vista. Microsoft did their best to make Vista better, and probably succeeded towards the end, but it was too late.

With Windows 7, however, I will say they made a mistake. There was no easy way to upgrade from Windows XP to Windows 7, even just the 32-bit version. You would lose all your programs if you upgraded. If your computer already worked well enough, you weren’t going to spend $100 and then have to completely set things up again.

They basically had to wait until the computers running XP weren't good enough or died out for Win7 to take off enough that they could shut down XP. To be honest, it was probably the proliferation of 64-bit programs that needed more memory that finally did it.

That's why, when developing Windows 8.1 and Windows 10, Microsoft wised up. They made sure you could upgrade from Windows 7, their previously beloved product. And, with W10, they gave it away free to get people to try it. (Not that W10 didn't have its own problems, since they were trying out a whole new paradigm. I'm still waiting on some of them to be fixed, like those stupidly time-consuming 4GB updates. And just nagging people turns them off, even if W10 would be perfectly fine for their needs.)

The only new devices created with those low specs these days are tablets, which are designed for minimal power and portability. They’ve basically taken the place of the “netbook,” or low powered laptop.

Unless the computer comes with a 32-bit OS, or you are using an older computer, you don't need 32-bit Windows anymore.

Yes, there are some legacy 16-bit programs that won't run on 64-bit Windows. But these are extremely rare these days. The main place where you still find them is old games, but then you're better off using a VM or an emulator like DOSBox.

I’m sure that, between all of us, we can even help you get some old game up and running if you want. Just go with 64-bit.

One con I've not seen mentioned: if you are upgrading a 32-bit version of Windows, you will only get to keep your stuff if you upgrade to another 32-bit version - specifically, Windows 7 32-bit. That is the "old PC" exception I mentioned.

That said, it may be worth it to upgrade completely to 64-bit, and start fresh. Systems without all the junk from before tend to run faster. It’s just a pain installing things again. I suggest doing so as you need it, rather than spending time doing it all at once.

I hate to keep contradicting things and injecting actual real-world facts, but, again, that's just simply not true. The system I mentioned upthread – 4 GB memory, Core 2 Duo – is running 64-bit Windows 7 and gets a WEI score of 5.9. To put this in perspective, the scores ranged from 1.0 to 5.9 in Vista; then, with some tweaks and the availability of fancier hardware, the upper limit was raised to 7.9 with the release of Win 7. 5.9 remains a very strong performance score for that 64-bit OS.

Techradar stated the following about the Windows 7 score:
The memory is another stunning factor, we tested the fastest memory available running over a triple channel. But the WEI is well known for imposing artificial limits on certain scores, memory being a particularly limiting one. Essentially 5.9 is the highest you can score with less than 4GB of memory and from what we can evaluate, a stunning 8GB will be required to score over 7. And even so it’s going to have to be fast memory to make the grade.
It turns out that they’re wrong, too, but only about how memory affects it.

In any case, the assertion that 64-bit Windows needs more than a single or dual-core processor to perform well is simply not true for Windows 7, though for all I know Win 8 and 10 might have become even more bloated.

I have an HP Stream 11" laptop, Celeron CPU, 2GB RAM, 32GB hard drive, running Win 8.1. It's functionally a full 64-bit Windows installation that I use for dinking around on the couch. It runs fine. Not great - but surprisingly decent. It's pretty much the perfect student laptop. It runs Office 2016 without any trouble. It played Darkest Dungeon while I was laid up with sciatica. It took forever to load but the game ran fine once it started.

Somebody at Microsoft did a kickass job at making Win 8.1 & Windows 10 run well on modest hardware.
As far as the OP goes - no, there is no reason to prefer a 32bit operating system for a general purpose machine.

I was referring to the OP's question about when a 32bit OS might be more suitable than a 64bit OS, which mostly means older, lower-powered computers (upgrades/refurbs), although there are still NEW laptops being sold with a Celeron and 2GB RAM running the Win10 64bit version - they're cheap, but otherwise useless.
If your PC can run 64bit smoothly, then there is NO reason to run with a 32bit OS.

People don’t use doorstops?

I was NOT referring to YOUR computer! :smack:

May I draw your attention to the highlighted words, which are the important ones - LOW performance & 2GB RAM.

I mainly was referring to OLD computers - after all, many of the current Intel mobile i5 & i7 chips are also only DUAL core, but they run very well on 64bit Win7/8/10.

I'm not sure where you get your real-world experience here, but a Pentium D with 2GB RAM is dog slow with a 64bit OS - working, but slow - yet with the 32bit version it runs okay'ish with Win7. The same goes for many AMD Athlon IIs.
And YES, they are both DUAL cores.

Running an Intel Celeron or an AMD E1 1GHz with 2GB RAM just runs terribly slow on Win7/8/10 64bit - you may wonder why people do that; the answer is, they're cheap.

LOL, that's what those computers are, but sadly people do buy them and use them. However, the worst part of that scenario is that those people bring them into my shop to get them fixed.

There are a number of fundamental differences between 32 and 64 bit operation.

A 32-bit version can only give any individual running process up to 2GB of address space, no matter how much memory you have. This can be a significant problem if you have software that is memory hungry. However, in order to take advantage of the larger virtual memory space afforded by a 64-bit OS, your software must also be 64-bit. Running legacy 32-bit software on a 64-bit machine is still limited in addressable memory.
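As a rough illustration (a sketch, not a benchmark; the exact ceiling depends on fragmentation, the commit limit, and whether the EXE is large-address-aware), a 32-bit build of something like this hits the wall long before a machine with 4GB or more of RAM runs out of memory:

#include <stdio.h>
#include <stdlib.h>

/* Keep grabbing 64 MB blocks until the allocator gives up, then report the
   total. Built as an ordinary 32-bit Windows program this tops out somewhere
   under 2 GB of address space, no matter how much physical RAM is installed;
   a 64-bit build of the same code keeps going much further. */
int main(void) {
    const size_t chunk = 64u * 1024 * 1024;
    size_t total = 0;
    while (malloc(chunk) != NULL)   /* leaked on purpose; the OS reclaims it on exit */
        total += chunk;
    printf("allocated about %zu MB before failing\n", total / (1024 * 1024));
    return 0;
}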

As noted above, a number of important additional features of newer x86 machines are only available in 64 bit mode. Security additions especially.

When you use a 64 bit x86 with 32 bit software you are restricted to the old 32 bit ISA. This makes a substantial difference to the potential performance of the processor. The 64 bit ISA has twice as many registers (and they are twice as wide, but this is less important.) Either running a 32 bit OS (when you can't run 64 bit code) or running 32 bit code on a 64 bit OS leaves you only able to use the basic register set. The difference in performance between the 32 bit ISA and the 64 bit one can be on the order of 50% better (sometimes more, sometimes less.)
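If you want to see the register difference for yourself, here's a quick sketch (assuming a gcc or clang toolchain that can target both modes; the exact registers you'll see depend on the calling convention):

/* sum4.c - compile twice and compare the generated assembly:
 *   gcc -O2 -m32 -S sum4.c  -> 32-bit ISA: only eax/ebx/ecx/edx/esi/edi/ebp/esp,
 *                              and the arguments arrive on the stack
 *   gcc -O2 -m64 -S sum4.c  -> 64-bit ISA adds r8-r15, and (on the SysV ABI)
 *                              the arguments arrive in registers
 * More registers means fewer spills to memory in register-hungry code. */
long sum4(long a, long b, long c, long d)
{
    return a + b + c + d;
}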

However - in a 64 bit environment pointers are 64 bits. This means that they occupy more memory. Overall you see more pressure on memory use because of this - which can eat into performance. But more importantly you will see more pressure on the data caches, and performance can be compromised noticeably for some codes. There is a mixed capability - 64-bit code with 32-bit pointers (the x32 ABI on Linux, for instance) - but I don't know how much software has actually been compiled in this form.
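To make the cache-pressure point concrete, here's a tiny sketch (the sizes shown are what common compilers typically produce; exact padding is implementation-defined):

#include <stdio.h>

/* A pointer-heavy node: typically 8 bytes in a 32-bit build, 16 bytes in a
   64-bit build (8-byte pointer plus alignment padding), so the same list
   touches roughly twice as many cache lines. */
struct node {
    int value;
    struct node *next;
};

int main(void) {
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}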

In general - it depends. But the performance advantages of the 64 bit ISA can be quite substantial.

Depending on the OS, it’s more like a “2 to 4GB” limit for a 32bit OS.
Cite

Win7 32bit will typically let you address only about 3.75GB of your RAM (the 4GB limit minus whatever address space the hardware reserves). Even if you have 8GB of RAM, you can still only address that much of it.

The 64-bit OS has several advantages over the 32-bit OS, but it requires more computing power to begin with; once you've crossed that bridge, it's the better choice.

Yes, but 32-bit Windows uses a 2 GB / 2 GB user/kernel split of the virtual address space; any given process can at most access 2 GB of virtual memory by default.

And a 32-bit process is usually still limited to 2 GB of virtual memory on 64-bit Windows. If it's built as "large address aware" (a flag in the executable's PE header, set at link time), it can use a full 4 GB of virtual memory there.
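For the curious, here's a small Windows-specific sketch (assumes the usual PE layout and an MSVC or MinGW toolchain; the flag itself is set with the /LARGEADDRESSAWARE linker option, or after the fact with editbin) that reports whether the running program was built large-address-aware:

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Walk this process's own PE headers and test the
       IMAGE_FILE_LARGE_ADDRESS_AWARE characteristic. */
    HMODULE self = GetModuleHandle(NULL);
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)self;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)((BYTE *)self + dos->e_lfanew);
    if (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
        puts("large address aware: yes");
    else
        puts("large address aware: no");
    return 0;
}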

Ok, sure… what's your point?
If you only have 1 or 2 GB of RAM, it's kinda pointless worrying about addressing more… virtual or any other way… that's all going into a pagefile on your HDD anyhow.

A 32bit OS at this point should only be used if you must for whatever reason, or if your hardware is just not good enough to run 64bit properly.

Because Francis Vaughan was talking about the per-process limit, and you responded with what the OS could address. Two different things.

32-bit versions of Windows used to be able to address up to 64 GB of memory via Physical Address Extension (PAE,) but that was disabled on non-server versions of Windows because nvidia made stupid assumptions in their graphics drivers which were shitting the bed if more than 4 GB of RAM was installed, and blue-screening systems.

linux already had a 64 bit edition for Alpha back around 1995.

There was a 64 bit kernel for x86-64 in 2001, but the x86-64 was vapour-ware at that point: it wasn’t available until 2003. The 2001 64-bit linux kernel ran on a simulator.

MS released 64 bit WinXP for Intel's IA-64 soon after (in the same year). It was a real product, for real machines… but the real machines were IA-64, so it was only appealing to people looking for large-memory servers.

So at the turn of the century, your choice was an old linux Alpha kernel or a new Windows IA-64 kernel. By 2006, you had a choice of linux or Windows for x86-64.

MIPS. All MIPS III and onward (R4000 onward) were 64 bit machines. Irix 6.0 supported 64 bit systems and was introduced in 1994. The SGI Origin 2000 machines could provide massive memory and up to 128 processors in 1997.

Sparc - from V9 onwards. Introduced in 1995. The E series machines provided massive memory and other features, and up to 64 processors.

PA-Risc 2.0. Introduced 1996.

MIPS and Sparc still ship, PA-Risc of course got eaten by Itanium.

Doubling the word size has real costs, and the benefits of going to 128 are thin at the moment (possibly forever, given certain physical limits). When you double the size of the word, all your buses and caches and code become twice as big (not exactly, but close enough for this rough description). The major advantage is that you can address more memory and have a larger instruction space.

But we’ve already got quite a lot. 32-bit processors can only address 4GB of memory. That’s a real limit at our current level of manufacturing and computing. But 64-bit doesn’t give you the ability to address twice as much memory, it gives you the ability to address 4 billion times as much memory. Building a computer that needs to address more memory than that would mean $10 billion in RAM alone (horrible back of the envelope estimate). This is not likely to happen any time soon. It might not happen ever unless we figure out some new physics that lets us effectively store information much more densely.
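The arithmetic behind "4 billion times as much" is just powers of two; here's a tiny sketch if you want the exact figures:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A 32-bit address reaches 2^32 bytes (4 GiB); a 64-bit address reaches
       2^64 bytes (16 EiB). The ratio is 2^64 / 2^32 = 2^32, about 4.3 billion. */
    printf("32-bit reach: %llu bytes\n", (unsigned long long)UINT32_MAX + 1ULL);
    printf("ratio 2^64 / 2^32: %llu\n", 1ULL << 32);
    return 0;
}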

Modern processors don’t even support the full 64 bits anyway. A current Intel CPU can address something like 47 bits of memory. They add a bit every year or so to keep up with growth in capacity, but there’s no point in doing more than that. Only once we get close to 64 bits (decades off at the least) does it make sense to go to 128.
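You can ask the CPU directly how many address bits it actually implements. A quick sketch, assuming an x86 machine and a gcc/clang toolchain that provides <cpuid.h>:

#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    /* CPUID leaf 0x80000008: EAX[7:0] = physical address bits,
       EAX[15:8] = linear (virtual) address bits. */
    if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        printf("physical address bits: %u\n", eax & 0xff);
        printf("virtual  address bits: %u\n", (eax >> 8) & 0xff);
    }
    return 0;
}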

Pointers go to 128 bits (in principle), so they occupy more memory; code that embeds pointer constants and data structures that hold pointers both grow to accommodate them. It is way short of a doubling, but the pressure on caches is important. Caches stay the same size; they are built from lines, where a line is typically 64 or 128 bytes. Pressure on the caches increases in the same way pressure on memory increases.

It is important to separate the data width from the address width. There is actually nothing that says the two even have to be the same at any level. Regularity in instruction set architecture usually makes them so, but historically (and even now) the relationship between the two is only weak.

It remains very common to write code that only uses 32 bit integer and floating point values, yet runs in a 64bit address space. It is also quite possible, although rarer, to write code that uses the 64bit ISA but is restricted to 32 bit wide addresses. The x86 is quite happy to do this, and compilers will oblige. Code still runs happily in a 64 bit environment, has the performance advantages the x86-64 ISA provides, but has no bigger memory footprint than conventional 32 bit code.
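A minimal sketch of that, assuming a Linux gcc with x32 support (-mx32 needs the x32 runtime libraries installed, so treat it as illustrative):

#include <stdio.h>

int main(void) {
    /* Build the same file three ways and compare:
     *   gcc -m32  ptr.c   -> 32-bit ISA, 32-bit pointers
     *   gcc -m64  ptr.c   -> 64-bit ISA, 64-bit pointers
     *   gcc -mx32 ptr.c   -> 64-bit ISA (extra registers), 32-bit pointers */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(long)   = %zu\n", sizeof(long));
    return 0;
}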

The ability to use 64 bit virtual addresses allows for lots of freedom in laying out memory. As noted, the actual ability to address that much physical memory is a different thing, and processors are designed to only provide address wires for the amount of memory that can feasibly be used, not the entire 64 bits.

untrue; ever since the Pentium Pro, 32-bit x86 could address up to 64 GB of memory using physical address extension (PAE.)

also worth noting: amd64 (x86-64) doesn't actually implement full 64-bit virtual addressing anyway; current CPUs use 48-bit canonical virtual addresses, and bit 63 of each page-table entry is reserved for the no-execute (NX) flag rather than for addressing.