how do computers work?

i know, i know, this is really general and maybe you can’t answer it here, but as i understand it computers work by interpreting code and making things out of it. my question is: how can the computer know how to interpret the code? wouldn’t it have to be coded to interpret the code? then how would it interpret that code…?

What you really want to know is how the microprocessor works, since that’s the heart of any modern desktop computer. Here is a nice, but lengthy, article on How Stuff Works that should explain things for you.

If you want to find the answer to your question, I recommend a book on x86 assembly programming.

Hit the submit button too soon… I own this book:

http://www.amazon.com/exec/obidos/tg/detail/-/0387985301/qid=1098919203/sr=8-4/ref=sr_8_xs_ap_i4_xgl14/102-0817773-2913754?v=glance&s=books&n=507846

It’s a decent introduction. Here’s a web page found during a Google search for that book:

http://www.osdata.com/topic/language/asm/asmintro.htm

Dunno about the quality of the site, but it’s a start.

Little neon motorbikes ridden by men with frisbees.

And they’re quite drunk.

You don’t need to read a book. Every computer is programmed with a set of instructions, each with an operation (called an opcode) and a bunch of addresses. Let’s assume that all the data is in memory, which is labeled 0, 1, …, for simplicity. Say you want to add the numbers in locations 2 and 4, and put the answer in location 6. The instruction might look like

Add 6,2,4
where the destination comes first. There is a little register (for storing data) in the machine called the program counter, or PC. Say your instruction is in location 1,000. (Instructions go into memory also.) When the PC hits 1,000, the machine reads your instruction and breaks it down into pieces. It sees it is an ADD, and that an ADD has three addresses. It sends memory requests for locations 2 and 4, and stores the values locally, in a place you don’t have to worry about. Then it sends a signal to the adder, telling it to take the data from those temporary locations and do its thing, and another signal to put the answer in a temporary location of its own. Finally it writes the result to memory location 6.
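If it helps to see that spelled out, here’s a rough sketch of the same steps in Python. The memory layout, the single ADD opcode, and the location numbers are just the toy example from above; real hardware does this with signals and wires, not dictionaries.

```python
# A rough sketch of the fetch/execute steps described above. Memory holds
# both data and instructions; the program counter (PC) points at the
# instruction to run next.

memory = {2: 10, 4: 32}            # data already sitting in locations 2 and 4
memory[1000] = ("ADD", 6, 2, 4)    # the instruction, stored at location 1,000

pc = 1000                          # the PC hits 1,000...
op, dest, src1, src2 = memory[pc]  # ...reads the instruction, breaks it into pieces
if op == "ADD":                    # sees it is an ADD with three addresses
    temp_a = memory[src1]          # fetch the operands into temporary storage
    temp_b = memory[src2]
    memory[dest] = temp_a + temp_b # the adder does its thing; write back to 6
pc += 1                            # move on to the next instruction

print(memory[6])                   # -> 42
```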

The control of all this can either be done by hardware, or by another little program, called a microprogram, running in the machine. Intel X86 processors are still microprogrammed, but not many others. (Itanic is.) So bottom line, the instruction is fed into the part of the machine that executes it, which sends out signals to move data to the right places.

Voyager - former computer architect and microprogrammer.

[deep geek]

First, it’s Itanium. The architecture isn’t beneath the waves completely. :wink:

Second, has the RISC Revolution come so far that frontend ISAs are really driving the chip components directly again? Or are you only referring to microcode that is (theoretically) alterable once the chip has been cast into its final die and sealed into a package?
[/deep geek]

In any case, vinnepaz, any computer is based on the idea of the Turing Machine: A machine that converts an input state (usually represented as a combination of data in RAM, on disk, and inside the CPU itself) to an output state (a different combination of data in all of those places) according to a set of basic rules.
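To make that concrete, here’s the Turing-machine idea boiled down to a few lines of Python. The states, symbols, and rule table (which just flips every bit on a tape) are invented for illustration:

```python
# A Turing-machine-style toy: rules map (state, symbol read) to
# (symbol to write, head movement, next state). These particular rules
# flip every bit on the tape and halt at the end marker.

rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # "_" marks the end of the input
}

tape = list("1011_")                    # the input state
state, head = "scan", 0
while state != "halt":
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move

print("".join(tape))                    # the output state: "0100_"
```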

At this level (hardware design), the rules are burned into the silicon used to make the CPU and the other chips inside the computer. That is, they are physically a part of the machine. It’s kind of tough to explain further unless you already understand how transistors work, and how a physical arrangement of atoms can be used to compute a logic function.

You can ask more specific questions, and I hope you do. I hope you also do some extra research on any of the words I’ve used that you don’t understand.

No no no! It’s little purple monkeys holding hands.

The computer has an instruction set. Think of a Chinese takeaway where “No 19” means “Egg fried rice” and “No 73” means “Chow Mein”. This is the kind of interpretation you’re talking about, only it’ll be something like “No 19” means “Add the contents of register A to register B”, “No 73” means “Jump forward by two places”.

Well, just as your Chinese takeaway knows how to interpret numbers from 1 to 197 (say), your computer knows how to interpret a large set of numbers into instructions (far more than 197 of them!).

And when your computer “runs”, it executes the instructions one at a time. Think of a cassette tape running past a head, which reads each instruction off the tape, performs the instruction, and then moves the tape forward and does it all again. And again. Very quickly.
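Here’s that takeaway-menu idea as a Python sketch. The opcode numbers are borrowed from the analogy above; the registers and the little program are made up:

```python
# The "menu" in action: each numbered opcode means one fixed operation.

registers = {"A": 5, "B": 7}
program = [19, 73, 19, 19]        # the "tape" of instructions

pc = 0                            # the read head's position on the tape
while pc < len(program):
    opcode = program[pc]
    if opcode == 19:              # "No 19": add register A to register B
        registers["B"] += registers["A"]
        pc += 1                   # move the tape forward one place
    elif opcode == 73:            # "No 73": jump forward by two places
        pc += 2
    else:
        raise ValueError(f"not on the menu: {opcode}")

print(registers["B"])             # -> 17; the jump skipped one of the adds
```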

They can do only one thing.

That is to add: 1 + 1 = 0

Sounds like the OP doesn’t want a highly technical answer so let me try to streamline this.

It is coded to interpret the code. The beauty of computers is that they are architected in layers of interpreting code. Using layers helps to manage the complexity. At each lower layer the set of instructions available gets more limited, and the instructions know more and more about what’s going on at the lowest levels. There are software layers, then what’s called microcode.

Eventually you get down to where the hardware is interpreting and executing instructions. The hardware at the lowest level just sees 1’s and 0’s expressed as voltage levels. It uses logic to figure out what to do with them all. The logic is implemented by logical switches, which are built from transistors, which are etched onto chips that can now hold hundreds of thousands of them.

Getting just an introduction to this topic is basically a 3-credit course, and that’s if you already took Boolean logic.

Here goes:

+ is OR
* is AND
1 is TRUE
0 is FALSE*

1 + 1 = 1
1 + 0 = 1
0 + 0 = 0

1 * 1 = 1
1 * 0 = 0
0 * 0 = 0

Now we know Boolean maths. Substitute True is 5 volts, False is 0 volts, figure out how to do the sums with transistors, and you can build yourself a ’puter.

*For the C programmers in the house.
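If you want to see those sums do something, here’s a Python sketch of a half-adder built from nothing but the rules above (this is just the logic, not how real silicon is laid out). It also explains the earlier quip that 1 + 1 = 0: the sum bit really is 0, with the extra 1 moving into the carry.

```python
# A half-adder from the Boolean rules above. NOT, AND, and OR are the
# whole toolkit; XOR is assembled from them, and together they add
# one-bit numbers: sum = a XOR b, carry = a AND b.

def NOT(a):    return 1 - a
def AND(a, b): return a * b           # the * table above
def OR(a, b):  return min(a + b, 1)   # the + table above
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)       # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
# 1 + 1 -> sum 0, carry 1
```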

Truth hasn’t been 5 volts for years and years. Every generation, true gets closer and closer to false. In fact, on the chips I am working on now, true is anywhere from 1.7 to 2.1 volts, depending on how fast you want the truth.

Yeah, but saying a voltage between 1.7 and 2.1 volts is True is a bit clunky (and some of us learned this a few years back using TTL, not designing chips).

Extrapolating from your post, computers will reach infinite speed when True = False. And if True = False = zero volts they won’t need any power either. I think you should patent this quick :slight_smile:

Anyhoo, as Pratchett would say: True = False.

For a given value of ‘True’.

Well, now we know what AND and OR mean, anyway.

Then you need to know NOT, De Morgan’s theorem, and maybe a dozen other theorems and reductions; it would also be convenient to know NAND, XOR, and XNOR, though they’re not strictly necessary. Boolean logic was the first part of a first-year logic course (the second part was propositional calculus, IIRC).
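De Morgan’s theorem, at least, is easy to convince yourself of. Here’s an exhaustive check in Python (a toy check over both truth values, not a proof in the logic-course sense):

```python
# De Morgan's theorem, checked for every combination of inputs:
#   NOT(a OR b)  == NOT(a) AND NOT(b)
#   NOT(a AND b) == NOT(a) OR  NOT(b)

for a in (False, True):
    for b in (False, True):
        assert (not (a or b)) == ((not a) and (not b))
        assert (not (a and b)) == ((not a) or (not b))
print("De Morgan holds for all four input combinations")
```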

A processor is really just a fancy example of something called a “finite state machine.” The machine has a fixed number of “states” that it can be in, and goes from one state to the next based on its current state plus whatever inputs are in the system.

One of the easiest ways to make a state machine is using a ROM, a latch, and a clock. The current state (held in the latch) plus the inputs of the system form an address in the ROM, and the value stored at that address is the next state to go to. The clock is used to drive the latch, which makes the whole thing step along.

Old vending machines used to be made using finite state machines because they are cheap and simple. These days they usually use a microprocessor instead.
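For the curious, here’s the ROM-plus-latch scheme as a toy vending machine in Python. The prices and coin values are invented for the example; the dict stands in for the ROM:

```python
# The dict plays the ROM: the pair (current state, input) is the address,
# and the value stored there is the next state. The variable `state`
# plays the latch, and each loop pass is one clock tick.

ROM = {
    # state = cents inserted so far; the item costs 15 cents
    ( 0, "nickel"):  5, ( 0, "dime"): 10,
    ( 5, "nickel"): 10, ( 5, "dime"): 15,
    (10, "nickel"): 15, (10, "dime"): 15,   # overpaying still gets you to 15
}

state = 0                                   # the latch starts empty
for coin in ["nickel", "dime"]:             # one input arrives per tick
    state = ROM[(state, coin)]
print("dispense item" if state >= 15 else f"{state} cents so far")
```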

The basic states that a microprocessor goes through are called instruction fetch (where it fetches the next instruction to be executed from memory), instruction decode (where it goes and fetches all of the operands needed), instruction execute (where it performs the operation desired), and write back (where it stores the result somewhere). A processor like an old 8086 actually does use a ROM-and-latch type of state machine to do all of this. The ROM is called the “microcode” and just contains the various bit patterns required to go from one state to the next.

In the quest to make faster and faster machines, someone came up with the idea that, instead of having one little piece of machinery do several different things, we could just make separate hardware for each task. That way, we can do an instruction fetch and pass that result down to the next bit of hardware, which does the instruction decode. While the next bit of hardware is doing the instruction decode, the first bit can do another instruction fetch. This is called “pipelining” and is much faster than the finite-state-machine approach used in older processors.
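A crude way to visualize it, in Python (the stage names come from the description above; the instructions are placeholders):

```python
# Three instructions moving through the four stages, one stage per tick.
# Each tick, every instruction advances one stage, so the fetch hardware
# starts instruction i2 while i1 is still being decoded.

stages = ["fetch", "decode", "execute", "write back"]
instructions = ["i1", "i2", "i3"]

for tick in range(len(stages) + len(instructions) - 1):
    busy = []
    for n, instr in enumerate(instructions):
        stage = tick - n              # instruction n enters the pipe on tick n
        if 0 <= stage < len(stages):
            busy.append(f"{instr}: {stages[stage]}")
    print(f"tick {tick}:  " + " | ".join(busy))

# 6 ticks for all three instructions, versus 12 if each had to finish
# completely before the next one began.
```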

A modern processor like a Pentium is a lot more complicated than this, but that’s the basic idea of how the machine “thinks.”

You are off by a couple orders of magnitude here. Modern CPUs have already passed the 100 million transistor mark.

That doesn’t really answer the question, because he would want to know how the processor knows how to execute the opcodes. A lengthy explanation about electrical logic gates is in order, but I think links for that have already been provided.