Had Intel not run into the problem of not being able to trademark a number, what would the current generation Pentium 4 chips be known as? 886?
I do not believe so… I see your point with the chip numbers… but the numbers are just product codes (to the best of my knowledge).
There were a few 686s made after the first Pentiums… they were no good, however… anyway, this would also make the 686 and the Pentium II the same chip, which they clearly weren’t.
Pentium 4 would be “i786”. Instead, Intel is calling the P4 the Netburst™ Architecture.
And yes, Pentium IIs were “i686” (P6) architecture. The original Pentium was called P5 architecture, and its high end/server variant, the Pentium Pro, was called P6. The Pentium II was simply a Pentium Pro with the addition of the MMX instruction set. The Pentium III was a Pentium II with the addition of the SSE (Streaming SIMD Extensions) instruction set. The Pentium 4 was Intel’s first new architecture since the introduction of the Pentium Pro back in (IIRC) 1995.
The Intel Tualatin512 processor, the last of the Pentium IIIs, was actually a VERY good CPU. A 1.4GHz Tualatin512 could easily best a 1.6GHz Pentium 4 in all tasks, and could compete quite well against a 1.4GHz AMD Athlon processor. Unfortunately, they were made in small numbers and sold for prices far higher than they warranted.
FDISK, are there important differences between the Netburst architecture and the x86 line? I know the x86 line is backwards-compatible all the way back to the 8086, even to the absurd extent of maintaining segmented memory, but can the same be said of Netburst?
(Needless to say, I’ve never heard of Netburst.)
Netburst is completely backwards compatible. Internally, however, the design is fundamentally different. Modern processors such as the Intel Pentium 4 and AMD Athlon are at their core RISC processors, which means they use many small operations instead of large, clumsy ones. Standard x86 instructions therefore need to be translated into instructions the processor can actually execute; that’s the job of the Instruction Decoder unit. This adds some overhead, but the performance gained during execution more than makes up for it. It’s similar to running a program in an emulator, except that the emulation is done in hardware, and there’s no way to talk to the processor natively.
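To make that concrete, here’s a toy sketch in C (purely illustrative; the micro-op mnemonics are made up, and the real decoder is hardware with an undocumented internal format, not software): a single x86 instruction that operates on memory gets cracked into a few simpler micro-ops the core can execute directly.

    /* Toy illustration of what an instruction decoder does, conceptually.
       The micro-op names below are invented for this example only. */
    #include <stdio.h>

    int main(void)
    {
        /* One "CISC-style" x86 instruction that reads and writes memory: */
        const char *x86_insn = "inc dword ptr [ebx]";

        /* Roughly what the core sees after decoding: three simple micro-ops. */
        const char *uops[] = {
            "load  tmp <- [ebx]",    /* fetch the memory operand */
            "add   tmp <- tmp + 1",  /* do the arithmetic */
            "store [ebx] <- tmp",    /* write the result back */
        };

        printf("x86 instruction: %s\n", x86_insn);
        printf("decoded micro-ops:\n");
        for (int i = 0; i < 3; i++)
            printf("  %s\n", uops[i]);
        return 0;
    }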
Sorry for the silly question, but how does this differ from microcode?
I believe it’s the same. The decoder translates x86 instructions into microcode, which is executed by the core.
On the P4, the instruction cache (“trace cache”) actually stores microcode instead of x86 instructions, so commonly executed code only has to be decoded once.
I’m not exactly sure. I would guess that microcode is what the x86 operations are decoded into, and this is supported by the fact that the decoded operations are called micro-ops, or uOps for short. But I could be wrong. You can find more information on processor designs at arstechnica.com. They have some good articles on the inner workings of the instruction decoders.
On preview: Looks like Mr2001 agrees with that. Thanks for the clarification.
FDISK, it’s a pity it’s impossible to talk to the RISC core directly. It should be possible to have a formerly unassigned opcode set a flag that determines whether the instructions are sent through the decoder or directly to the core. Is this not feasible, or just not done?
I program MIPS chips in emulation on the spim software emulator, and I find the orthogonal instruction set and the general regularity of design freeing.
My guess is that while possible, it would not be beneficial. Not being constrained by the existence of any native code at all is probably better from a design standpoint than having to deal with two set-in-stone ISAs. I know that Intel was planning to deal with the problem by actually putting two CPUs on one die, one x86 and one IA64. Cost constraints led to them cutting out the x86 core, producing the Itanium processor we all know and love to hate. AMD will be doing something similar with their Opteron and Athlon 64 line of processors, which will use a special flag to access 64-bit “long” mode.
Mr2001, FDISK, Derleth, thanks for the answers to the hijack.
That’s what they want you to believe. Traditional RISC processors do not need to decode the instructions; they are executed directly by random logic.
AMD carried the numbering system a little further, which could allow some comparison to the P numbers.
The early Athlon was initially called a K7, IIRC (786?).
Urban Ranger: And a traditional CISC processor would execute the macro-Ops directly with no instruction decoder either. :) The point is that behind the instruction decoder, it’s a high-performance RISC processor (not that RISC necessarily means high performance…) executing micro-Ops.
k2dave: The codename of the AMD Athlon processor was K7, because it replaced the AMD K6 processor. AMD’s K5 and K6 processors had given it a (deserved) reputation for low performance, so AMD decided to break with its naming scheme and come up with something new. Athlon was the result.
RISC chips were designed to be programmed by compilers, too, not humans. They use delayed branches, pipelining, and other execution-speeding tricks that make it nearly impossible for a human to program a modern RISC chip directly. MIPS chips are actually programmed in a kind of pseudocode the assembler translates into something the chip can actually use (sometimes this means taking pseudo-ops and translating them into sequences of more primitive opcodes, which is basically what is done with the opcodes the chip translates into microcode).
RISC, in case anyone’s wondering, means `Reduced Instruction Set Computer.’ That does not necessarily mean it has fewer opcodes, just that the opcodes are designed to be all the same length (to ease chip design) and orthogonal (meaning any opcode can use any register, as opposed to having to fiddle with accumulators and such, something that simplifies compiler design). RISC chips include damn near all modern CPUs, excluding Intel chips before the Pentium 4.
RISC chips were designed after computers became fast enough to regularly compile code written in a high-level language, as opposed to being mainly programmed in either assembly or binary. They came after a rather obvious discovery: Complex opcodes are damn slow, and can usually be replaced by a string of simpler opcodes to speed the program. In a RISC chip, every opcode is ideally a `simple’ opcode, so the compiler doesn’t have to decide which of the possible ways to do something is the best.
(Why were CISC machines so Complex? Well, computer designers knew their computers would be programmed by humans, and humans are bad at thinking in assembly. So computer designers tried to bridge the `semantic gap’ between high-level languages, like FORTRAN, and machine language by implementing opcodes to mimic the function of high-level syntax.)
RISC chips were preceded by, you guessed it, CISC chips. CISC means `Complex Instruction Set Computer,’ but, again, that has little to do with how many opcodes the CPU actually has. The Intel 4004, for example, only has 40 opcodes, and yet it is a CISC chip. (It’s also, incidentally, the first modern CPU, in that all the logic is on one integrated circuit. In a stellar display of overengineering, it was designed to be the CPU for a pocket calculator.) The i4004 has an accumulator register, which is involved in all addition and subtraction opcodes (ADD r1 is r1 = A + r1, for example, where A is the accumulator and r1 is a normal (four-bit) register), and opcodes have varying sizes. CISC writ small, but CISC just the same.
In general, the era of CISC died when people switched to C for systems programming, something popularised by UNIX. Once a machine has to write your code, all of those tricks humans used to make CISC machines jump become goddamned pains to design into compilers.
Another big difference between x86 and many RISC architectures is the tiny register file. An x86 only has 8 general-purpose registers, or 32 bytes. Contrast this to the SPARC and Alpha, which each have 31 usable registers (124 and 248 bytes respectively, the Alpha’s registers being 64-bit). The x86 has to access memory more often, so it relies on the cache to make this efficient.
Yet another difference is that RISC chips usually have dedicated instructions for working with memory. You load data from memory into a register, then do an operation, then store it back into memory: load r1, [r2+10] ; add r1, 123 ; store r1, [r2+10]. On the x86, however, most instructions can work with memory operands directly: add [eax+10], 123. As a result, decoding and executing instructions is more complicated.
The cpuid instruction returns a Family ID, which (at least up until the Pentium 4) has corresponded to the number in the unofficial “?86” architecture name. Not all 486 processors support cpuid, but those that do have Family ID 4. Pentium processors have Family ID 5. The Pentium Pro, Pentium II, and Pentium III all have Family ID 6.
The Pentium 4, however, has Family ID 15. Maybe we should call the Pentium 4 an F86 processor. Sure sounds better than NetBurst to me.
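For anyone who wants to check their own chip, here’s a small sketch of reading that Family ID (assuming GCC or a compatible compiler that ships the cpuid.h helper header; other compilers have their own intrinsics):

    /* Read the CPUID Family ID on an x86 machine (GCC/Clang).
       Family 4 = 486, 5 = Pentium (P5), 6 = Pentium Pro/II/III (P6),
       15 = Pentium 4 (Netburst). */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            printf("cpuid leaf 1 not supported on this CPU\n");
            return 1;
        }

        unsigned int family = (eax >> 8) & 0xF;      /* base family, bits 11:8 */
        if (family == 0xF)
            family += (eax >> 20) & 0xFF;            /* add extended family, bits 27:20 */

        printf("CPUID Family ID: %u\n", family);
        return 0;
    }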
So the myth is true, Emdedded acronyms actually exist.
…As do typos. Embedded, Embedded, Embedded… :rolleyes:
ForbiddenFruitsalad, great username and wonderfully geeky suggestion.