Explain AMD versus Intel to me


SolGrundy
01-18-2005, 08:05 PM
I was a computer programmer for over 12 years, but the hardware side of things has always been somewhat mystifying to me. I understand at a very high level the basics of what's going on with processor speeds, buses, caches, etc., but have never bothered to keep up with the terminology or the latest advances.

So now I'm in a situation where I've bought a computer based on reviews, but have no idea what the spec sheet means. In particular, the CPU:

AMD Athlon™ 64 FX-55 Processor with HyperTransport Technology

has been touted in several reviews as being the fastest available now. This is the first time I've gotten an AMD instead of a Pentium processor, and I can't understand how an AMD processor clocked at 2.6 GHz can outperform a P4 clocked at 3.4 GHz. And yet I've seen benchmark after benchmark that says it does.

Can anyone explain in layman's terms how it manages to be faster? In particular, is this something where it's precariously balanced to perform well on current benchmarks and Half-Life 2, but will fall apart down the road? As expensive as the thing was, it's going to have to last me for a few years.

daffyduck
01-18-2005, 08:19 PM
The answer is really simple. There is more to "performance" than processor clock speed. Cache size, bandwidth, chipset, RAM, motherboard design, and on and on all contribute to the actual performance of the system. That is why they test systems with benchmark software to see how they actually perform in real-world tests and not just by crunching the numbers.

I recommend Tom's Hardware Guide (http://www.tomshardware.com/) if you're interested in reading about the nuts and bolts of it all.

mks57
01-18-2005, 08:53 PM
Modern computer architecture can get very complex, but I will try to address clock rates. Most modern systems use synchronous logic, which is dependent on an external clock to drive the system. Other things being equal, a higher clock rate means higher performance. Things are rarely that simple. To get an idea of the useful work that can be done by a computer, you have to look at the work done per instruction, clock cycles per instruction, and the capability, if any, to execute multiple instructions in parallel. One problem is that this is very dependent on the mix of instructions used by your programs. Benchmark programs are infamous for being designed and tuned to run fast on one brand of computer and slow on competing brands of computers. To make things simpler, let's invent a unit of computational work. We'll call it the crunch. Here are two computer designs:

Computer A
clock rate = 1 GHz
clock cycles per instruction = 1
crunches per instruction = 1.5

Computer B
clock rate = 3 GHz
clock cycles per instruction = 2
crunches per instruction = 1

Which computer does more work?

The answer is that they both do the same amount of work, even though their clock rates are very different. That's why comparisons of clock rates can be misleading when comparing computers that use different designs. One of the ways to increase the clock rate of a computer is to reduce the amount of work done by each instruction. Simple instructions can execute more quickly than complex instructions.
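
If you want to check that arithmetic yourself, here's a quick back-of-the-envelope sketch in Python, using the made-up "crunch" unit and the numbers above:

# Useful work per second = (clock rate / cycles per instruction) * crunches per instruction
def crunches_per_second(clock_hz, cycles_per_instr, crunches_per_instr):
    instructions_per_second = clock_hz / cycles_per_instr
    return instructions_per_second * crunches_per_instr

computer_a = crunches_per_second(1e9, 1, 1.5)   # 1 GHz, 1 cycle/instruction, 1.5 crunches/instruction
computer_b = crunches_per_second(3e9, 2, 1.0)   # 3 GHz, 2 cycles/instruction, 1 crunch/instruction
print(computer_a, computer_b)                   # both come out to 1.5e9 crunches per second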

The problem today is that computer performance is an exceptionally complex subject. Besides clock rates, you have to look at the design of the instruction set, the size, latency and speed of memory caches, the design of the memory controller, the width, latency and speed of the main memory, pipeline size, branch prediction logic, and other obscure subjects. Measuring computer performance by clock rate is like measuring the performance of an automobile engine by its maximum RPMs.

CynicalGabe
01-18-2005, 08:59 PM
While others attempt to give you serious answers explaining the megahertz myth, I will simplify it for you, in the language of my kind.

AMD 1s teh l337 rox0rz!


Intel iz 7e4 sux0rz!

Shagnasty
01-18-2005, 09:21 PM
Also, your processor is a 64 bit processor. Intel chips are still 32 bit.

Rex Fenestrarum
01-18-2005, 10:11 PM
Also, your processor is a 64 bit processor. Intel chips are still 32 bit.

Well, no. Intel's "Pentium" chips are 32-bit. Intel's "Itanium 2" chips are 64-bit.


Funny note: Intel's first line of 64-bit chips was so bad that many geeks called them "Itanics".

bump
01-18-2005, 11:09 PM
One way to think about it is like this:

MHz (really, clock speed) is something similar to the RPM of a car. Everything else being equal, higher RPM is usually better.

However, things aren't equal. Just like in cars, you have things that change the picture. For example, engine displacement. A 454 V8 at 3000 RPM will be much more powerful than, say, a 1.1 liter V4 at the same speed. Also, computer-controlled and fuel-injected engines will be better than carbureted ones, all else being equal. Or a car with better gearing will be better than another if everything else is equal.

That's what's happening here: Intel has a higher clock speed, but AMD has a design that lets them do the same thing in fewer cycles than Intel.
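
To tie that back to the OP's chips, here's the same arithmetic in Python with made-up instructions-per-clock numbers (the real figures vary a lot by workload, so treat these as illustration only):

# Hypothetical instructions-per-clock (IPC) figures, purely for illustration
athlon_fx55 = 2.6e9 * 1.5   # 2.6 GHz at a higher IPC
pentium4    = 3.4e9 * 1.0   # 3.4 GHz at a lower IPC
print(athlon_fx55 > pentium4)   # True: 3.9e9 vs 3.4e9 "instructions" per second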

snorlax
01-19-2005, 08:43 AM
Is there a (real or perceived) difference between AMD and Intel in terms of the applications they are best suited for? For example, gaming and video editing are generally considered CPU-intensive. Would you choose one make of CPU over the other to do these sorts of tasks?

Squee
01-19-2005, 09:02 AM
As a very general rule, AMD is better for gaming. Intel is better at video encoding.

It really depends on the processors you are trying to compare... if you have more info, I can give you links to various benchmarks, etc.

Kinthalis
01-19-2005, 09:06 AM
Is there a (real or perceived) difference between AMD and Intel in terms of the applications they are best suited for? For example, gaming and video editing are generally considered CPU-intensive. Would you choose one make of CPU over the other to do these sorts of tasks?

Yes. Typically, AMD will outperform Intel in the 3D gaming arena, while a Pentium 4 system will usually outperform an AMD one when it comes to encoding and processing video.

joemama24_98
01-19-2005, 09:16 AM
You should also consider what each processor's chipset will support. I believe the current AMD chipsets are only now coming with higher-bandwidth unregistered RAM and SLI options (if you wanted to utilize two video cards to optimize gaming performance).

You may want to wait a month or two - I believe your motherboard options will increase significantly on the AMD platform.

rabbit
01-19-2005, 09:54 AM
I'd also like to add the disclaimer here that, given the current sheer POWER of modern systems, asking which one is "better" for something over another is really pointless unless you're talking about essentially full-time activities.
I mean, a 3D game is more dependent on the graphics card, and as far as which proc is better for encoding, even if one is better than the other, is there really all that much difference in terms of time spent? I mean, how much difference does it actually translate to in terms of TIME?

AHunter3
01-19-2005, 09:56 AM
Ars Technica tends to have very good articles on different processors and how they work, so well-written that even someone with no background whatsoever in such things can follow along and absorb a lot of it.

I'd be astonished if they didn't have at least one doing a direct comparison of an Intel chip and a comparable AMD. They once did such an article (http://arstechnica.com/cpu/1q00/g4vsk7/g4vsk7-1.html) comparing the PowerPC G4 to AMD's Athlon (K7), for example.

Rex Fenestrarum
01-19-2005, 10:54 AM
Is there a (real or perceived) difference between AMD and Intel in terms of the applications they are best suited for? For example, gaming and video editing are generally considered CPU-intensive. Would you choose one make of CPU over the other to do these sorts of tasks?

Well, AFAIK, Intel processors are still the kings of the hill when it comes to servers. IT people are a conservative bunch and don't want to take risks when it comes to their companies' web or email servers. Intel's Xeon processors (and the forthcoming Itanium 2) also have much more L2 cache than either Pentium or AMD processors, and that's good for server-based apps (like databases and number crunching).

AMD *is* making inroads in this market, but they don't have nearly the penetration there that they do on the desktop.

Whack-a-Mole
01-19-2005, 11:11 AM
I mean, a 3D game is more dependent on the graphics card, and as far as which proc is better for encoding, even if one is better than the other, is there really all that much difference in terms of time spent? I mean, how much difference does it actually translate to in terms of TIME?

It can make a noticeable difference IF you are very into gaming or video encoding.

The difference in video encoding times may run into minutes if you are working with heavy-duty files. In gaming, framerate is king, and the newest games today still outstrip what the fastest computers can handle well. If you want all the bells and whistles turned on (shadows, reflections, antialiasing, etc.), an AMD machine will get you a better experience.

However, if you are like most users and generally just surf the web, rip a CD on occasion and send e-mail, the difference between the two will not be noticeable (or at least certainly not worth remarking upon).

As for servers, I wouldn't mind an AMD chip inside, but the choices for such things from major manufacturers are seriously limited. As such, ALL servers I install are Intel machines. The options available in getting the machine you need with an Intel chip inside far outstrip what you can find with AMD inside.

That said all the PCs I buy for personal use have AMD inside and I won't go back to Intel if I can ever help it. It started when Intel tried to stuff Rambus down our throats and pissed me off. Once I got on AMD I loved them and have been a convert ever since. FWIW I am an avid gamer so it is a good match.

Whack-a-Mole
01-19-2005, 11:15 AM
Forgot to mention...

The 64-bit chips run as 32-bit chips in most cases today. To take advantage of the 64-bit aspect you need a 64-bit operating system (Microsoft is close to getting one out, I think) and the software ALSO needs to be written to take advantage of the 64-bit architecture.
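
As a quick aside, if you ever want to check whether a given program is actually running as a 64-bit process, one way to do it (a sketch, assuming you have Python handy; any language that exposes pointer size works the same way) is:

import platform
import struct

# Pointer size of the running interpreter: 8 bytes means a 64-bit build, 4 bytes a 32-bit build
print(struct.calcsize("P") * 8, "bit process")

# What the operating system reports for the machine itself (e.g. 'AMD64' or 'x86')
print(platform.machine())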

That said the 64-bit AMD chip is still speedy as hell (I have one) and hopefully gives you some investment protection by being ready to go for whenever the 64-bit software starts landing on us.

Voyager
01-19-2005, 11:29 AM
Well, no. Intel's "Pentium" chips are 32-bit. Intel's "Itanium 2" chips are 64-bit.

Funny note: Intel's first line of 64-bit chips was so bad that many geeks called them "Itanics".
Humph. The term was coined by Mike Flynn of Stanford, and picked up by the Register (http://www.theregister.co.uk/) (among others, I'm sure). I always call it Itanic, but I get to, having worked on Merced, the Intel code name for Itanic 1, until jumping ship for what are now obvious reasons.

The Itanic, like Sparc and Power PC, is inherently 64 bits. The AMD chips and new Intel x86 chips which are coming are 32 bits with 64 bit datapaths. Also, the Itanic has a new instruction set.

RandomLetters
01-19-2005, 01:28 PM
Well, AFAIK, Intel processors are still the kings of the hill when it comes to servers. IT people are a conservative bunch and don't want to take risks when it comes to their companies' web or email servers. Intel's Xeon processors (and the forthcoming Itanium 2) also have much more L2 cache than either Pentium or AMD processors, and that's good for server-based apps (like databases and number crunching).

AMD *is* making inroads in this market, but they don't have nearly the penetration there that they do on the desktop.

Well, the Opteron series of processors - designed for workstation & server use - are dang good, and usually beat Xeons at various server benchmarks. Also, due to the integrated memory controller, Opterons scale much better as you add processors. Here is a pretty good review comparing Opteron and Xeon machines. (http://www.aceshardware.com/read.jsp?id=60000275)

bump
01-19-2005, 11:10 PM
Wasn't there some difference between Xeons and Opterons with respect to the 32 bit backward compatibility? I seem to remember that the Opterons had better functionality in that regard, and that it was a big part of why Opterons were doing so well versus the Xeons.

Shalmanese
01-19-2005, 11:56 PM
Maybe 1 or 2 years ago CPU choice may have been important for gaming, but not today. With even the most demanding games out there, by the time you have a graphics card powerful enough to even start being CPU limited, you're already performing so well that any increase in performance is negligible.

From my perspective, there are three distinct sections to the CPU product curve. There are the almost-obsolete processors, where price increases very slowly per MHz gained; then there's the mid-range, where price seems to increase fairly linearly; and then there's the high end, where prices skyrocket for minuscule differences in performance. My advice for the last 5 years has always been to go for the point where the low end meets the mid-range.

Whack-a-Mole
01-20-2005, 12:33 AM
Maybe 1 or 2 years ago CPU choice may have been important for gaming, but not today. With even the most demanding games out there, by the time you have a graphics card powerful enough to even start being CPU limited, you're already performing so well that any increase in performance is negligible.

Not true.

Graphic cards are certainly very important but so is the CPU (other factors count too such as memory speed).

Check out the benchmarks below. Same systems as much as possible (same graphics card) and yet you see a clear increase in framerate as the CPU speed scales up.

Tom's Hardware Guide (http://www6.tomshardware.com/cpu/20041019/athlon64_4000-08.html#opengl)

akifani
01-20-2005, 03:10 AM
This is off-topic, but it hasn't been mentioned and I believe it is important.

For as many years as I can remember, AMD processors have been cheaper than Intel processors of equivalent performance. I'm no fanboy (as a matter of fact, I think fanboys should all be enslaved and deported to salt mines).

HMS Irruncible
01-20-2005, 03:39 AM
Check out the benchmarks below. Same systems as much as possible (same graphics card) and yet you see a clear increase in framerate as the CPU speed scales up.
Tom's Hardware Guide (http://www6.tomshardware.com/cpu/20041019/athlon64_4000-08.html#opengl)

Minor quibble there... what you see is higher framerate numbers. Considering that 100fps is quite a satisfying gaming experience and 150fps is blazing, can anyone really tell the difference between 200fps and 240fps?

I know, I know, the number drops as the games get more resource intensive over the years, but you see my point.

Kinthalis
01-20-2005, 06:44 AM
Minor quibble there... what you see is higher framerate numbers. Considering that 100fps is quite a satisfying gaming experience and 150fps is blazing, can anyone really tell the difference between 200fps and 240fps?

I know, I know, the number drops as the games get more resource intensive over the years, but you see my point.

It's not just that they become more resource intensive over the years; they do this over a single gaming session.

As you can see in those charts, increasing the resolution alone has an impact on the average FPS. I'm not sure what the graphics settings for the games were, but I doubt everything was turned up to the max.

Every time you add some graphical element, the FPS will go down. Add pixel shading, high-quality shadows, 4x AA, 8x anisotropic filtering, reflections, etc., and your AVERAGE framerate will no longer be 200+. What's more, there will be times in your game when things move fast and more polygons than average are displayed (more enemies on the screen, for example), and at those times your FPS will drop considerably. That is when those extra 100+ FPS will make a huge difference in your gaming experience.
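
To put rough numbers on that, here's a back-of-the-envelope Python sketch with made-up per-frame costs (the specific milliseconds are illustrative, not measured):

# Frame rate is just 1000 / frame time in milliseconds, so a fixed extra per-frame cost
# hurts the slower CPU proportionally more (all numbers here are illustrative).
def fps(frame_time_ms):
    return 1000.0 / frame_time_ms

fast_cpu = 1000.0 / 240    # ~4.2 ms per frame when the scene is light
slow_cpu = 1000.0 / 150    # ~6.7 ms per frame on the slower chip
extra = 8.0                # hypothetical extra milliseconds for shadows, AA and a crowded scene

print(round(fps(fast_cpu + extra)), "fps vs", round(fps(slow_cpu + extra)), "fps")
# roughly 82 fps vs 68 fps -- the headroom matters exactly when the scene gets heavy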

Earthworm Jim
01-20-2005, 07:48 AM
The Itanic, like Sparc and Power PC, is inherently 64 bits. The AMD chips and new Intel x86 chips which are coming are 32 bits with 64 bit datapaths. Also, the Itanic has a new instruction set.
Cite please? What does this mean, that A is "inherently 64 bit" while B is "32 bit with 64 bit datapaths"?

Thank you,

Derleth
01-20-2005, 10:34 AM
The 64-bit chips run as 32-bit chips in most cases today.

Depends on the chip and the software it has inherited. The vast majority of software for the x86 ISA is 32-bit these days, but that's due to historical reasons. I don't think there is any 32-bit Alpha code, given that the Alpha architecture has always been 64-bit.

To take advantage of the 64-bit aspect you need a 64-bit operating system (Microsoft is close to getting one out, I think) and the software ALSO needs to be written to take advantage of the 64-bit architecture.

Linux takes advantage of various 64-bit chips just fine, incidentally, including the aforementioned Alpha.

Voyager
01-20-2005, 12:03 PM
Cite please? What does this mean, that A is "inherently 64 bit" while B is "32 bit with 64 bit datapaths"?

Thank you,
Here is a link to the Opteron datasheet (www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/23932.pdf)

The x86 architecture has 32 bit integer registers and I believe a 32 bit path to the L1 data cache. The Opteron architecture has added 64 bit integer registers and a 64 bit bus to the cache, while retaining compatibility with all the 32 bit instructions. (It appears the floating point registers have been scaled up in a similar fashion.) The ALUs would have to be upgraded to 64 bits also. Itanium does not have native support for the x86 instruction set. What the first version did was to translate x86 instructions into Itanium "microinstructions" and dispatch them. This took a lot of silicon area, and when all was said and done was very inefficient. I saw a report that the x86 instructions ran at the equivalent of 100 MHz. I believe that they later moved to a software translation method, like that used by Alpha, which was actually more efficient than the hardware. I assume that later versions of Itanium do this better, but I haven't seen any benchmarks.

Probably the biggest benefit of 64 bits is address space. I have used CAD applications for large designs that just plain don't fit in 32-bit mode. (I use Sun machines, which went to 64 bits ages ago, but the software has 32 and 64 bit modes.) Some applications can make use of all 64 bits of data also.
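
The address-space arithmetic is easy to see with a quick Python sketch:

# A 32-bit pointer can address 2**32 bytes; a 64-bit pointer, 2**64
print(2**32 // 2**30, "GiB addressable with 32-bit pointers")   # 4 GiB
print(2**64 // 2**60, "EiB addressable with 64-bit pointers")   # 16 EiB
# In practice a 32-bit process usually gets even less than 4 GiB for its own data,
# which is why big CAD models simply stop fitting.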

I hope that answers the question. Conceptually it's easy, but getting the implementation correct and manufacturable is the tricky part, and it seems they did a real good job on that.

Shalmanese
01-22-2005, 01:09 PM
Not true.

Graphic cards are certainly very important but so is the CPU (other factors count too such as memory speed).

Check out the benchmarks below. Same systems as much as possible (same graphics card) and yet you see a clear increase in framerate as the CPU speed scales up.

Tom's Hardware Guide (http://www6.tomshardware.com/cpu/20041019/athlon64_4000-08.html#opengl)

Read my statement again. In order to SHOW differences in processors, sites need to resort to benchmarks of games that are over 4 years old, at ridiculously low resolutions and at fantastically insane frame rates.

Take a look at some of the CPU scaling charts for HL2 at Anandtech. Anything less than about an X600 is completely graphics-card limited, and anything above can do just about 60 fps no matter what CPU you use.
