Explain AMD versus Intel to me

I was a computer programmer for over 12 years, but the hardware side of things has always been somewhat mystifying to me. I understand at a very high level the basics of what’s going on with processor speeds, buses, caches, etc., but have never bothered to keep up with the terminology or the latest advances.

So now I’m in a situation where I’ve bought a computer based on reviews, but have no idea what the spec sheet means. In particular, the CPU:

AMD Athlon™ 64 FX-55 Processor with HyperTransport Technology

has been touted in several reviews as being the fastest available now. This is the first time I’ve gotten an AMD instead of a Pentium processor, and I can’t understand how an AMD processor clocked at 2.6 GHz can outperform a P4 clocked at 3.4 GHz. And yet I’ve seen benchmark after benchmark that says it does.

Can anyone explain in layman’s terms how it manages to be faster? In particular, is this something where it’s precariously balanced to perform well on current benchmarks and Half-Life 2, but will fall apart down the road? As expensive as the thing was, it’s going to have to last me a few years.

The answer is really simple: there is more to “performance” than processor clock speed. Cache size, bandwidth, chipset, RAM, motherboard design, and on and on all contribute to the actual performance of the system. That is why they test systems with benchmark software, to see how they actually perform in real-world tests and not just by crunching the numbers.

I recommend Tom’s Hardware Guide if you’re interested in reading about the nuts and bolts of it all.

Modern computer architecture can get very complex, but I will try to address clock rates. Most modern systems use synchronous logic, which depends on an external clock to drive the system. Other things being equal, a higher clock rate means higher performance, but things are rarely that equal.

To get an idea of the useful work a computer can do, you have to look at the work done per instruction, the clock cycles per instruction, and the capability, if any, to execute multiple instructions in parallel. One problem is that all of this depends heavily on the mix of instructions your programs use. Benchmark programs are infamous for being designed and tuned to run fast on one brand of computer and slow on competing brands of computers.

To make things simpler, let’s invent a unit of computational work. We’ll call it the crunch. Here are two computer designs:

Computer A
clock rate = 1 GHz
clock cycles per instruction = 1
crunches per instruction = 1.5

Computer B
clock rate = 3 GHz
clock cycles per instruction = 2
crunches per instruction = 1

Which computer does more work?

The answer is that they both do the same amount of work, even though their clock rates are very different. That’s why comparisons of clock rates can be misleading when comparing computers that use different designs. One of the ways to increase the clock rate of a computer is to reduce the amount of work done by each instruction. Simple instructions can execute more quickly than complex instructions.
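If you want to see the arithmetic, here is a quick back-of-the-envelope sketch in Python (the “crunch” is our made-up unit, so the numbers are purely illustrative):

# Crunches per second = (clock rate / cycles per instruction) * crunches per instruction

def crunches_per_second(clock_hz, cycles_per_instr, crunches_per_instr):
    instructions_per_second = clock_hz / cycles_per_instr
    return instructions_per_second * crunches_per_instr

a = crunches_per_second(1e9, 1, 1.5)  # Computer A: 1 GHz
b = crunches_per_second(3e9, 2, 1.0)  # Computer B: 3 GHz
print(a, b)  # both print 1500000000.0 -- same useful work despite a 3x clock gap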

The problem today is that computer performance is an exceptionally complex subject. Besides clock rates, you have to look at the design of the instruction set; the size, latency, and speed of the memory caches; the design of the memory controller; the width, latency, and speed of main memory; pipeline depth; branch prediction logic; and other obscure subjects. Measuring computer performance by clock rate is like measuring the performance of an automobile engine by its maximum RPM.
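To make that concrete, here is a toy model of the “iron law” of performance: time = instructions × cycles per instruction ÷ clock rate. Every design, CPI value, and instruction count below is invented for illustration; the point is just that each hypothetical design wins one workload and loses the other, which is exactly why the benchmark mix matters:

# Iron law of performance: time = instructions * CPI / clock rate.
# All designs, CPI values, and instruction counts below are invented.

def run_time(instructions, cpi, clock_hz):
    # Seconds to execute a chunk of work on a given design.
    return instructions * cpi / clock_hz

designs = {
    "3.4 GHz deep pipeline": {"clock_hz": 3.4e9, "cpi_alu": 0.9, "cpi_branchy": 3.0},
    "2.6 GHz wide issue":    {"clock_hz": 2.6e9, "cpi_alu": 0.8, "cpi_branchy": 1.5},
}

workloads = {
    "streaming encode":   {"alu": 9.0e9, "branchy": 0.5e9},
    "branchy game logic": {"alu": 4.0e9, "branchy": 6.0e9},
}

for wname, mix in workloads.items():
    for dname, d in designs.items():
        t = (run_time(mix["alu"], d["cpi_alu"], d["clock_hz"])
             + run_time(mix["branchy"], d["cpi_branchy"], d["clock_hz"]))
        print(f"{wname} on {dname}: {t:.2f} s")

# The high-clock chip wins the streaming mix (2.82 s vs 3.06 s) but loses the
# branchy mix (6.35 s vs 4.69 s). Same two chips, opposite verdicts.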

While others attempt to give you serious answers explaining the megahertz myth, I will simplify it for you, in the language of my kind.

AMD 1s teh l337 rox0rz!
Intel iz 7e4 sux0rz!

Also, your processor is a 64 bit processor. Intel chips are still 32 bit.

Well, no. Intel’s “Pentium” chips are 32-bit. Intel’s “Itanium 2” chips are 64-bit.
Funny note: Intel’s first line of 64-bit chips was so bad that many geeks called them “Itanics”.

One way to think about it is like this:

MHz (really clock speed) is something like the RPM of a car’s engine. Everything else being equal, higher RPM is usually better.

However, things aren’t equal. Just like in cars, you have things that change the picture, for example, engine displacement. A 454 V8 at 3000 RPM will be much more powerful than, say, a 1.1-liter four-cylinder at the same speed. Also, computer-controlled, fuel-injected engines will be better than carbureted ones, all else being equal. Or a car with better gearing will be better than another if everything else is equal.

That’s what’s happening here: Intel has a higher clock speed, but AMD has a design that lets it do the same work in fewer cycles.

Is there a (real or perceived) difference between AMD and Intel in terms of the applications they are best suited for? For example, gaming and video editing are generally considered CPU-intensive. Would you choose one make of CPU over the other to do these sorts of tasks?

As a very general rule, AMD is better for gaming. Intel is better at video encoding.

It really depends on the processors you are trying to compare…if you have more info, I can give you links to various benchmarks, etc.

Yes. Typically AMD will outperform Intel in the 3D gaming arena, while a Pentium 4 system will usually outperform an AMD one when it comes to encoding and processing video.

You should also consider what each processor’s chipset will support. I believe the current AMD chipsets are only now coming with higher-bandwidth unregistered RAM and SLI options (if you wanted to utilize two video cards to optimize gaming performance).

You may want to wait a month or two - I believe your motherboard options will increase significantly on the AMD platform.

I’d also like to add the disclaimer here that, given the sheer POWER of modern systems, asking which one is “better” for something is really pointless unless you’re talking about essentially full-time activities.
I mean, a 3D game is more dependent on the graphics card, and as for which proc is better for encoding, even if one is better than the other, is there really all that much difference in time spent? I mean, how much difference does it actually translate to in terms of TIME?

Ars Technica tends to have very good articles on different processors and how they work, so well-written that even someone with no background whatsoever in such things can follow along and absorb a lot of it.

I’d be astonished if they didn’t have at least one doing a direct comparison of an Intel chip and a comparable AMD. They once did such an article comparing the PowerPC G4 to an AMD chip, for example.

Well, AFAIK, Intel processors are still the kings of the hill when it comes to servers. IT people are a conservative bunch and don’t want to take risks when it comes to their companies’ web or email servers. Intel’s Xeon processors (and the Itanium 2) also have much more L2 cache than either Pentium or AMD processors, and that’s good for server-based apps (like databases and number crunching).

AMD is making inroads in this market, but they don’t have nearly the penetration there that they do on the desktop.

It can make a noticeable difference IF you are very into gaming or video encoding.

The difference in video encoding times may run into minutes if you are working with heavy-duty files. In gaming, framerate is king, and the newest games today still outstrip what the fastest computers can handle well. If you want all the bells and whistles turned on (shadows, reflections, antialiasing, etc.), an AMD machine will get you a better experience.

However, if you are like most users and generally just surf the web, rip a CD on occasion, and send e-mail, the difference between the two will not be noticeable (or at least certainly not worth remarking upon).

As for servers, I wouldn’t mind an AMD chip inside, but the choices for such things from major manufacturers are seriously limited. As such, ALL servers I install are Intel machines. The options available in getting the machine you need with an Intel chip inside far outstrip what you can find with AMD inside.

That said all the PCs I buy for personal use have AMD inside and I won’t go back to Intel if I can ever help it. It started when Intel tried to stuff Rambus down our throats and pissed me off. Once I got on AMD I loved them and have been a convert ever since. FWIW I am an avid gamer so it is a good match.

Forgot to mention…

The 64-bit chips run as 32-bit chips in most cases today. To take advantage of the 64-bit aspect you need a 64-bit operating system (Microsoft is close to getting one out I think) and the software ALSO needs to be written to take advantage of the 64-bit architecture.

That said the 64-bit AMD chip is still speedy as hell (I have one) and hopefully gives you some investment protection by being ready to go for whenever the 64-bit software starts landing on us.
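If you’re curious, here’s a tiny Python check (assuming you have Python installed) that reports whether the program you’re running was built as 32-bit or 64-bit. Remember, a 64-bit CPU happily runs 32-bit software, and this only tells you about the software:

import platform
import struct

# A pointer is 4 bytes in a 32-bit build and 8 bytes in a 64-bit build.
pointer_bits = struct.calcsize("P") * 8
print(f"This Python build is {pointer_bits}-bit")
print("Machine reports:", platform.machine())  # e.g. 'AMD64', 'x86_64', 'i386'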

Humph. The term was coined by Mike Flynn of Stanford, and picked up by The Register (among others, I’m sure). I always call it Itanic, but I get to, having worked on Merced, the Intel code name for Itanic 1, until jumping ship for what are now obvious reasons.

The Itanic, like SPARC and PowerPC, is inherently 64-bit. The AMD chips, and the new Intel x86 chips that are coming, are x86 designs extended with 64-bit registers and addressing. Also, the Itanic has a whole new instruction set.

Well, the Opteron series of processors - designed for workstation & server use - are dang good, and usually beat Xeons at various server benchmarks. Also, due to the integrated memory controller, Opterons scale much better as you add processors. Here is a pretty good review comparing Opteron and Xeon machines.

Wasn’t there some difference between Xeons and Opterons with respect to 32-bit backward compatibility? I seem to remember that the Opterons had better functionality in that regard, and that it was a big part of why Opterons were doing so well versus the Xeons.

Even 1 or 2 years ago CPU choice may have been important for gaming, but not today. With even the most demanding games out there, by the time you have a graphics card powerful enough to even start being CPU-limited, you’re already performing so well that any increase in CPU performance is negligible.

From my perspective, there are three distinct sections to the CPU product curve. There are the almost-obsolete processors, where price increases very slowly per MHz; then the mid-range, where price increases fairly linearly; and then the high end, where prices skyrocket for minuscule differences in performance. My advice for the last 5 years has always been to go for the point where the low end meets the mid-range.
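To put a number on that advice, here is a sketch with invented prices (not real street prices) showing how the marginal cost of each extra 100 MHz jumps once you cross into the high end:

# Hypothetical (MHz, $) price points along the product curve; numbers invented.
cpus = [(1800, 60), (2000, 75), (2200, 110), (2400, 160), (2600, 350), (2800, 700)]

# Marginal dollars per extra 100 MHz between adjacent models.
for (m1, p1), (m2, p2) in zip(cpus, cpus[1:]):
    steps = (m2 - m1) // 100  # number of 100 MHz steps between the two models
    print(f"{m1} -> {m2} MHz: ${(p2 - p1) / steps:.0f} per 100 MHz")

# The cost climbs from about $8 per 100 MHz at the low end to $175 at the top;
# the sweet spot is right where the curve starts to bend upward.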