Explain AMD versus Intel to me

Not true.

Graphics cards are certainly very important, but so is the CPU (other factors count too, such as memory speed).

Check out the benchmarks below. The systems are kept as similar as possible (same graphics card), and yet you see a clear increase in framerate as the CPU speed scales up.

Tom’s Hardware Guide

This is off-topic, but it hasn’t been mentioned and I believe it is important.

For as many years as I can remember, AMD processors have been cheaper than Intel processors of equivalent performance. I’m no fanboy (as a matter of fact, I think fanboys should all be enslaved and deported to salt mines).

Minor quibble there… what you see are higher framerate numbers. Considering that 100 fps is quite a satisfying gaming experience and 150 fps is blazing, can anyone really tell the difference between 200 fps and 240 fps?

I know, I know, the number drops as the games get more resource intensive over the years, but you see my point.

It’s not just that they become more resource-intensive over the years; they do this over a single gaming session.

As you can see in those charts, increasing the resolution alone has an impact on the average FPS. I’m not sure what the graphics settings for the games were, but I doubt everything was turned up to the max.

Every time you add a graphical element, the FPS will go down. Add pixel shading, high-quality shadows, 4x AA, 8x anisotropic filtering, reflections, etc., etc., and your AVERAGE framerate will no longer be 200+. What’s more, there will be times in your game when things move fast and more polygons than average are displayed (more enemies on the screen, for example), and at those times your FPS will drop considerably too; that’s when those extra 100+ FPS will make a huge difference in your gaming experience.
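To put rough numbers on that headroom argument (my own illustrative figures, not taken from the benchmarks above), here’s a quick back-of-the-envelope sketch in C. It assumes a busy scene takes about three times as long to render as an average frame and shows what that does to the minimum framerate on two hypothetical machines:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed numbers for illustration only: two machines with different
     * average framerates, and a busy scene that takes roughly three times
     * as long to render as an average frame. */
    double avg_fps[] = { 100.0, 200.0 };
    double spike_factor = 3.0;

    for (int i = 0; i < 2; i++) {
        double avg_frame_ms   = 1000.0 / avg_fps[i];          /* ms per average frame */
        double spike_frame_ms = avg_frame_ms * spike_factor;  /* ms per busy frame    */
        printf("avg %3.0f fps (%4.1f ms/frame) -> busy scene ~%2.0f fps\n",
               avg_fps[i], avg_frame_ms, 1000.0 / spike_frame_ms);
    }
    return 0;
}
```

With a 200 fps average you stay above 60 fps even in the busy scene; with a 100 fps average the same scene drops you to a noticeably choppy ~33 fps.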

Cite please? What does this mean, that A is “inherently 64 bit” while B is “32 bit with 64 bit datapaths”?

Thank you,

Depends on the chip and the software base it has inherited. The vast majority of software for the x86 ISA is 32-bit these days, but that’s for historical reasons. I don’t think there is any 32-bit Alpha code, given that the Alpha architecture has always been 64-bit.

Linux takes advantage of various 64-bit chips just fine, incidentally, including the aforementioned Alpha.

Here is a link to the Opteron [datasheet](www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/23932.pdf).

The x86 architecture has 32-bit integer registers and, I believe, a 32-bit path to the L1 data cache. The Opteron architecture has added 64-bit integer registers and a 64-bit bus to the cache, while retaining compatibility with all the 32-bit instructions. (It appears the floating-point registers have been scaled up in a similar fashion.) The ALUs would have to be upgraded to 64 bits also.

Itanium does not have native support for the x86 instruction set. What the first version did was translate x86 instructions into Itanium “microinstructions” and dispatch them. This took a lot of silicon area, and when all was said and done it was very inefficient; I saw a report that x86 instructions ran at the equivalent of 100 MHz. I believe they later moved to a software translation method, like that used by Alpha, which was actually more efficient than the hardware. I assume that later versions of Itanium do this better, but I haven’t seen any benchmarks.
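If you want to see the register/pointer-width difference from a programmer’s point of view, a trivial C program compiled once for 32-bit x86 and once for x86-64 makes it visible. The exact sizes below are my assumption of a typical Unix-style ILP32 vs. LP64 setup (64-bit Windows keeps `long` at 4 bytes):

```c
#include <stdio.h>
#include <stdint.h>

/* On a 32-bit x86 build, pointers and 'long' are typically 4 bytes;
 * built for x86-64 (Opteron and friends) they are typically 8 bytes,
 * because the general-purpose registers and the address path are 64
 * bits wide.  64-bit arithmetic still works on a 32-bit target, but
 * the compiler has to split it across pairs of 32-bit registers. */
int main(void)
{
    printf("sizeof(int)      = %zu\n", sizeof(int));
    printf("sizeof(long)     = %zu\n", sizeof(long));     /* 4 on ILP32, 8 on LP64 */
    printf("sizeof(void *)   = %zu\n", sizeof(void *));
    printf("sizeof(uint64_t) = %zu\n", sizeof(uint64_t));
    return 0;
}
```

On an x86-64 Linux box you can compare the two builds with `gcc -m32 prog.c` and plain `gcc prog.c` (assuming the 32-bit support libraries are installed).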

Probably the biggest benefit of 64 bits is address space. I have used CAD applications for large designs that just plain don’t fit in 32-bit mode. (I use Sun machines, which went to 64 bits ages ago, but the software has 32- and 64-bit modes.) Some applications can make use of all 64 bits of data as well.
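As a purely illustrative sketch of the address-space point: a 32-bit process can’t even express a single 6 GB allocation, since its whole address space is 2^32 = 4 GB and the OS reserves part of that, while a 64-bit build on a machine with enough memory will normally hand it right back. Something like this shows the difference (the outcome on the 64-bit side depends on how much RAM/swap you have and on overcommit settings):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    unsigned long long want = 6ULL * 1024 * 1024 * 1024;  /* 6 GB */

    /* On a 32-bit build, 6 GB doesn't even fit in size_t. */
    if (want > SIZE_MAX) {
        printf("6 GB can't even be expressed as a size_t on this build (32-bit)\n");
        return 0;
    }

    void *p = malloc((size_t)want);
    if (p == NULL)
        printf("6 GB allocation failed\n");
    else
        printf("6 GB allocation succeeded\n");
    free(p);
    return 0;
}
```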

I hope that answers the question. Conceptually it’s easy, but getting the implementation correct and manufacturable is the tricky part, and it seems they did a really good job on that.

Read my statement again. In order to SHOW differences in processors, sites need to resort to benchmarks of games that are over 4 years old, at ridiculously low resolutions and at fantastically insane frame rates.

Take a look at some of the CPU scaling charts for HL2 at Anandtech. Anything less than about an X600 is completely graphics-card limited, and anything above it can do just about 60 fps no matter what CPU you use.