What is the difference between an Apple and an Intel processor

I don’t want this to turn into a Mac vs PC debate; I am just interested in the differences in the way they process things. I think I read that Apple used a RISC processor and that PCs used a CISC processor, but Apple’s processors have changed so much that I don’t know if that is still valid.

Why do Macs have processor speeds of 450 MHz while new Intel processors have speeds of about 900 MHz?

I have noticed that Macs tend to crash by simply locking up a lot (though I have heard that with OS X this is not a problem). Just restarting a Macintosh seems to correct the problem, but PCs seem to develop internal conflicts that cause more persistent problems. I have a friend who has had to wipe his PC hard drive clean and reinstall everything, while I could never imagine having to do that on a Mac (maybe I have just been lucky).

Are there specific tasks for which one processor is more suited than another? It seems all of my scientific instrumentation runs on PCs while all of the multimedia applications tend to be on Macs. Perhaps that is just because of who Apple is marketing to.

There may be other differences which I have not noticed, so please enlighten me.

Because Mac processors are slower. A G4 is probably faster than a Pentium III at the same clock speed (and certainly faster than a Pentium 4 at the same clock speed), but not twice as fast.

BTW, you can get AMD/Intel chips up to around 1.5 GHz now.

An Intel processor will not keep the doctor away. :stuck_out_tongue:

Actually, try 2.2 Gigahertz, and possibly faster than that now, with the .13 micron manufacturing process that’s been adopted by Intel, and soon to be adopted by AMD. Vroom, vroom!

The short, non-technical answer:

  • Apple Macintoshes currently use PowerPC processors from Motorola and IBM. These are RISC processors, which stands for “Reduced Instruction Set Computer”.

  • Intel Pentiums and their ilk are CISC processors, which stands for “Complex Instruction Set Computer”.

  • The difference is one of processor design: do you have a processor that does a handful of things quickly and efficiently (RISC), or a processor that does a lot of different things in a less-than-optimal manner (CISC)?

  • For instance: suppose I want to add two numbers and store the results in memory. A RISC processor could do that as

  1. Load(X)
  2. Add(Y)
  3. StoreResultInMemory(Z)

Whereas a CISC processor could do that as

  1. AddTwoNumbersAndStoreInMemory(X, Y, Z)

  • The advantage of a RISC processor is that, because it only has a (relatively) small number of operations, the processor can be designed with tricks (like pipelining) so that it does everything quickly and efficiently. These tricks require all of the processor’s instructions to be (somewhat) consistent in size and execution time; as a result, they are not available to CISC processors, because it’s difficult or impossible to implement them across all of a CISC processor’s instructions.

  • And though it’s not directly related to your question, a computer’s clock speed (MHz) is not a good measure of performance across different architectures. An 800 MHz Pentium (Intel) is not necessarily faster than a 500 MHz PowerPC (Mac), because the PowerPC’s RISC-derived efficiency lets it do more per clock cycle. Using MHz to compare computers is like using RPM to judge the value of an automobile engine – by itself, it’s meaningless. Horsepower is a much better measurement for car engines, but there is no equivalent single number in the computer world.
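That last point can be sketched with back-of-the-envelope arithmetic: to a first approximation, throughput is clock rate times instructions completed per cycle (IPC). The IPC figures below are invented for illustration, not measured numbers for any real chip.

```python
# Toy model: throughput ≈ clock rate × instructions per cycle (IPC).
# The IPC values here are made up purely to illustrate the point.

def throughput_mips(clock_mhz, ipc):
    """Millions of instructions completed per second."""
    return clock_mhz * ipc

cisc_chip = throughput_mips(clock_mhz=800, ipc=1.0)  # hypothetical 800 MHz CISC
risc_chip = throughput_mips(clock_mhz=500, ipc=2.0)  # hypothetical 500 MHz RISC

print(cisc_chip)  # 800.0
print(risc_chip)  # 1000.0 -- the lower-clocked chip finishes more work
```

With made-up numbers the ratio is arbitrary, but the shape of the argument is the point: clock rate alone tells you nothing until you know how much each clock tick accomplishes.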

Apple has a more detailed discussion of RISC/CISC differences and the “megahertz myth” at this link. Yeah, it’s slanted to favor their stuff, but the underlying information is accurate.

It is difficult to compare a RISC machine (Mac) to a CISC machine (PC). They are simply different creatures. Long ago I saw a chart that equated the speed of a RISC processor to the speed of a CISC processor, but I haven’t seen it in a while; I think today’s systems and applications make such comparisons almost meaningless.

On the face of it, a RISC processor is faster than a CISC processor, as their names imply (Reduced Instruction Set Computing vs. Complex Instruction Set Computing). A RISC processor has fewer internal instructions controlling how the processor behaves. As a result, the processor is cheaper to make and faster in its execution of instructions (it simply has fewer to worry about). On the other hand, the software written for such a machine has to be a bit more complex, as it has to handle some work that a CISC processor might otherwise manage on its own.

In the end, there is a reason some users tend toward one machine while other users opt for the other type. Faster instruction execution (RISC) benefits engineering and graphics apps the most. RISC processors also tend to be better in parallel computing situations.

I don’t know the engineering reasons why a RISC processor doesn’t scale to higher clock speeds as well as a CISC processor. I believe there are some fundamental physical issues that hold RISC processors back although some of it may also have to do with simple R&D investments into improving the technology (CISC gets the lion’s share of that pie). Still, in some conditions a RISC processor can outperform a higher clocked CISC processor. Heck, you even see this among different CISC processors. AMD processors perform faster than similarly clocked Intel processors almost across the board. Just goes to show how hard some of this can be to nail down.

Wow, it’s hard to know where to start here…

Apple’s processors are (to simplify a complex history) designed and fabricated by Motorola and/or IBM. Despite what many feel is a more elegant instruction set, they have not kept up with Intel processors in clock speed, largely because of scaling issues. Intel sells many more processors and devotes a lot more research to improving fabrication than Motorola does. Further, Motorola just isn’t as dedicated to the processor market (they make their money from embedded systems and telecommunications). Still, the current figures are something like 1 GHz+ for the PowerPC chip and 2.2 GHz+ for the Intel chip. This difference isn’t as significant as it may seem, for a lot of reasons, one of which is that Intel has achieved some of its clock speed improvement by making architectural changes (to the pipeline, I believe), so that the increased clock speed doesn’t actually translate into an equivalent increase in performance. However, the selling point of 1.x GHz processors is undeniable.

Both processors have special circuitry to handle vector-type processing, which occurs frequently in certain applications (e.g. graphics and 3D rendering). Again, the consensus is that the PowerPC’s AltiVec unit is probably superior. In general, for applications that make use of these vector operations, the PowerPC will be faster. (But on both processors you need applications that have been programmed to use these vector instructions.)
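The vector idea can be sketched in plain Python. A SIMD unit like AltiVec applies one instruction to several elements at once (four 32-bit floats fit in a 128-bit AltiVec register); the sketch below just counts instructions issued to show the ratio, and the function names are mine, not anything from a real instruction set.

```python
# Toy model of SIMD: one vector instruction operates on 4 elements at once,
# versus one scalar instruction per element. We count "instructions issued".

def scalar_add(a, b):
    out, instructions = [], 0
    for x, y in zip(a, b):
        out.append(x + y)
        instructions += 1          # one scalar add per element
    return out, instructions

def vector_add(a, b, width=4):     # width=4 ~ AltiVec: 128 bits / 32-bit floats
    out, instructions = [], 0
    for i in range(0, len(a), width):
        out.extend(x + y for x, y in zip(a[i:i+width], b[i:i+width]))
        instructions += 1          # one vector add handles a whole chunk
    return out, instructions

a = list(range(16))
b = list(range(16))
print(scalar_add(a, b)[1])  # 16 instructions
print(vector_add(a, b)[1])  # 4 instructions -- a 4x reduction in issue count
```

The 4x reduction only materializes when the code is actually written (or compiled) to use the vector unit, which is the caveat in the parenthetical above.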

The PowerPC has (or at least had) an edge in size and power consumption as well. Intel may have made some strides since the last time I paid attention to the issue, but their processors were typically MUCH more power hungry and ran much hotter than PowerPC chips. This lets Apple laptops run a bit longer, and some of their computers have even been designed without fans.

Your question about locking up and “internal conflicts” is unrelated to the processor; that is more due to choices made in designing the operating systems. Up until OS X, Apple was using an operating system that was designed almost 20 years ago. This operating system did not incorporate support for preemptive multitasking or (probably) some of the processor interrupts necessary to trap memory errors. So poorly designed code (Internet Explorer, cough, cough) will tend to generate faults that tread all over memory, leading to freezes and crashes. OS X is built on Unix and has full memory protection. It’s not impossible to get it to crash, but you have to put your back into it.

Similarly, Microsoft’s operating system has its own design choices that occasionally lead to disaster. They use a central registry for storing data about programs. The problems with this tend to be that the registry can get corrupted and that program components tend not to be portable – you can’t just copy a program directory and expect it to work. This leads to the uninstall/reinstall cycle that all Windows users are familiar with.

Actually, the distinction between RISC and CISC is now a matter of historical interest rather than practical importance. Most processors on the market today are neither true CISC nor true RISC designs.
As an example, the core of an Intel Pentium 4 accepts CISC instructions, but includes a front end that translates them into RISC-like operations and then executes those, in a quest for higher speed.
AMD introduced this kind of technology (at least in x86 land), calling it RISC86 when it debuted in the AMD K6 line.
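What that front end does can be sketched as a data transformation: a CISC-style instruction with a memory operand gets cracked into load/add/store micro-ops before execution. The mnemonics below are invented for the sketch; real x86 decoding is enormously more involved.

```python
# Sketch of CISC -> micro-op "cracking". Mnemonics are made up for
# illustration; real x86 decoders are far more complicated.

def decode(instruction):
    op, *args = instruction
    if op == "ADD_MEM":              # one CISC-style add-to-memory...
        x, y, z = args
        return [                     # ...becomes three RISC-like micro-ops
            ("LOAD",  "r1", x),
            ("ADD",   "r1", y),
            ("STORE", z, "r1"),
        ]
    return [instruction]             # simple instructions pass straight through

micro_ops = decode(("ADD_MEM", "X", "Y", "Z"))
print(len(micro_ops))  # 3
```

Note that this is exactly the RISC three-step sequence from the earlier post, generated on the fly inside the chip from a single CISC instruction.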

Even though for certain applications one system may be better suited than another, I think you’re dead on the money. Mostly, you run musical and graphical apps on Macs and scientific apps on PCs because of marketing and personality factors, not because of RISC vs CISC.
The only situations where the choice of machine really makes a difference are in extremely high-end applications, such as using a cluster of Silicon Graphics servers to do AV rendering, or getting a Cray to do a complicated cryptographic operation or simulate a nuclear explosion at the atomic level.

It might be interesting for you to take a gander at http://n0cgi.distributed.net/speed/
This URL has comparative speeds for a wide variety of CPUs across a variety of different tasks, some encryption, some scientific. Not all of the code is necessarily perfectly optimized, and it should not be used to make significant purchasing decisions, but it is interesting to look at. Some CPUs perform much better than you would imagine for their MHz.

We all live in a yellow submarine, a yellow submarine…

If you have a bit of technical knowledge about computers, an excellent comparison of the architectures can be found at Ars Technica ( http://www.arstechnica.com ). A recent article there compares the P4 (of Intel) to the G4e (of Motorola). They don’t try to say which is better; they compare them to show what’s new and different. They also had another excellent comparison of the Athlon to the G4 a while back.

FWIW, present top of the line Macintosh G4s have 1GHz chips.

My iMac has a 400MHz chip, but this model is about two and a half years old.

The Motorola and/or IBM processors that go into Macs have historically sometimes been faster than the Intel and/or Intel clones that were going into PCs of the same vintage, and at other times have been trailing behind.

When the G3 was new, it kicked Intel ass thoroughly enough to prompt an ad campaign that featured scorched Intel bunnies, a parody of an Intel Inside ad that featured bunnies.

Now we who are of the Mac persuasion are seeing our platform getting its ass kicked by faster PC processors. I think this is largely because AMD scared the bejeezus out of Intel and they dumped their assets into speed development trying to stay at least within reaching distance of the Athlon, and AMD had to stay ahead of Intel to compete effectively so it did likewise.

People who know more about chips and instruction sets than I do say that this is nothing short of a miracle, but economies of scale produce miracles. They still say the PowerPC architecture should scale better in the long run, and that pushing around the vintage x86 instruction set as native hardware code is a massive handicap.

(They say: the modern Pentium and Athlon chips are CISC only in emulation. They are phenomenally efficient pseudo-RISC chips, much like PowerPC chips, but what they deal with is RISC-like x86 instructions that were decoded earlier in the pipeline. The OS and application code is still compiled for the x86 instruction set, and the chip itself has to translate it on the fly and rip it apart into smaller, equally sized, RISC-type segments before it can tend to them. One wonders what Intel could do with a raw RISC instruction set. Applause for Intel; those folks do some incredible stuff with chips.)

Meanwhile, most CPUs sit around twiddling their virtual thumbs waiting for instructions to come their way, because other bottlenecks keep CPUs from being fed as fast as they can eat. For the overwhelming majority of things you do on a computer, a hypothetical 8 GHz processor would not make anywhere near as much difference as you might think. Definitely not as much as doubling the system bus, or doubling the seek, read, and write speeds of your drive. Not too long ago, I upgraded my processor from a 300 MHz G3 to a 500 MHz G3. Much more recently, I swapped out my Toshiba 18G for an IBM TravelStar 60, which is a very fast laptop hard drive. I got a much more noticeable performance increase from the hard drive upgrade.
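A little Amdahl’s-law-style arithmetic shows why the disk swap felt bigger (the 2 s/8 s split below is an assumed workload, not a measurement): when most of the wall-clock time is spent waiting on the disk, doubling CPU speed barely moves the total.

```python
# Amdahl-style arithmetic with an assumed 2 s CPU / 8 s disk time split.

def total_time(cpu_s, disk_s, cpu_speedup=1.0, disk_speedup=1.0):
    return cpu_s / cpu_speedup + disk_s / disk_speedup

baseline    = total_time(2.0, 8.0)                   # 10.0 s
faster_cpu  = total_time(2.0, 8.0, cpu_speedup=2)    # 9.0 s: CPU twice as fast
faster_disk = total_time(2.0, 8.0, disk_speedup=2)   # 6.0 s: disk twice as fast

print(baseline, faster_cpu, faster_disk)  # 10.0 9.0 6.0
```

Doubling the CPU shaved 10% off the assumed workload; doubling the disk shaved 40%. The component that dominates the wait dominates the payoff.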

The faster RAM architectures that manufacturers (mainly PC, I’m afraid) have been playing with should also result in increased ability to feed the CPU chip. (In all fairness to the Mac, I think the fast L1, L2, and L3 caches have generally been superior to their PC equivalents which is part of the reason we get more bang for each MHz of CPU, and faster RAM will not eliminate that disparity by itself).

People with a greater understanding or more current data are welcome to correct me here on anything I’ve said.

That’s a myth generated by Intel. In actuality, the difference between RISC (reduced instruction set computer) and CISC (complex instruction set computer) machines is very significant. CISC machines rely on increasing the clock speed to increase their performance, because their instructions take multiple clock cycles to finish.

The most telltale sign now is Intel’s IA-64 machine. It looks quite RISC-like to me, which suggests that even Intel is trying to abandon the CISC architecture. Their attempt is a bold one, but perhaps a step too far.

What about the PowerPC and Sun’s UltraSPARC?

Isn’t it the case that a slower chip (in terms of clock speed, not actual processing power) creates less heat, which is why Apple has been able to make fan-free, silent computers?

If so, how much of a difference does it really make?

If you had a 300 MHz iMac and a 650 MHz iMac – both in the original style with the same casing, etc. – would you be able to appreciate the difference in the heat they gave off?

That depends. Supposing there is no change in the manufacturing process between the two CPUs, I reckon there is a difference in the amount of heat they generate, but it is hard to tell how big the difference is.

As a wild-assed guess/answer attempt: what are the physical sizes of the PowerPC vs. the Intel Pentiums? I think the Pentiums are larger, to accommodate the extra silicon needed for the various CISC instructions, which would account for the greater heat…?

I did a bit of poking around the net and found some typical power dissipation figures – these are pretty approximate because I didn’t have the patience to do a thorough search.

It looks like the PowerPC chips range from 5 to 15 Watts over a range of 400 to 700 MHz, with some lower-power chips in there as well for laptops. The G5 chip looks like it will pull at least 20 Watts at 1 GHz. This is with a silicon-on-insulator .18 micron fabrication process.

A 1.5 GHz Pentium 4 pulls 52 Watts.

Clock speed plays only a part in heat dissipation. Part of it is chip size (and Intel chips are enormous – lots and lots of transistors because of their architecture). Part of it is fabrication technology. Manufacturers have also done a lot to get the voltages required by the chips down, which results in a cooler-running chip.

So the answer regarding heat dissipation isn’t as simple as clock speed, but yes, you would notice the heat difference between a 300 MHz and a 600 MHz chip.
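The usual first-order model behind all this is dynamic power P ≈ C·V²·f: switched capacitance times voltage squared times clock frequency. The capacitance and voltage numbers below are placeholders, but the scaling behavior is the point.

```python
# First-order dynamic power model: P ≈ C * V^2 * f.
# Capacitance and voltage values are placeholders, chosen for illustration.

def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

p_300 = dynamic_power(1e-9, 1.8, 300e6)  # chip at 300 MHz
p_600 = dynamic_power(1e-9, 1.8, 600e6)  # same chip at 600 MHz

print(p_600 / p_300)  # 2.0 -- doubling the clock doubles dynamic power

# Lowering the core voltage helps quadratically:
p_low_v = dynamic_power(1e-9, 1.2, 600e6)
print(round(p_low_v / p_600, 2))  # 0.44, i.e. 1.2^2 / 1.8^2
```

This is why the voltage reductions mentioned above matter so much: a modest voltage drop buys more cooling headroom than a large frequency cut.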

Oops, it’s worse than I thought. According to this article (http://www.inqst.com/articles/p4bandwidth/p4bandwidth.htm) which is not very complimentary to the P4, Intel stretched the truth a lot when spec’ing the P4’s wattage requirements. Apparently the true wattage requirements are more like 73 Watts (think Easy Bake Oven) and if you exceed 55 Watts for any appreciable amount of time, the chip throttles back automatically to 1/2 the clock speed to cool off.

Note that this is a market research report from a year ago, so doesn’t reflect Intel’s current specs.