AI: the man with two souls

that’s the thing: for an intelligent being such as a person, you know what x is maybe 1 in 10^300 times. that’s a made-up figure, don’t bother asking me to cite it, but think of all the input a human brain processes, and think of how little of that input you actually know about. the example was given to show that memorizing a program and following through it is not the same as intelligence.

that is not to say that the entire earth could not be a computer, as i facetiously noted. i don’t have the same “intuition” that searle claims must be so obvious, that a pile of sticks and strings could not be intelligent (interesting note: the first “computer programmer”, ada lovelace, lord byron’s daughter, designed programs for a similar mechanical device, babbage’s analytical engine). i wonder, though, just how much computing speed has to do with awareness. i wonder how aware we would feel if we had the power to see every single step as it took place.

so, a lot has been said about parallelism. indeed, a brain is not a digital computer. it is MASSIVELY parallel. a computer is much faster, but it has only one CPU. here’s a bit of a comparison, from a 1995 text on AI by russell and norvig:

                     computer                          human brain
computational units: 1 CPU, 10^5 gates                 10^11 neurons
storage units:       10^9 bits RAM, 10^10 bits disk    10^11 neurons, 10^14 synapses
cycle time:          10^-8 s                           10^-3 s
bandwidth:           10^9 bits/s                       10^14 bits/s
neuron updates/s:    10^5                              10^14

computers have changed a bit since 1995, so you might now get 10^6 neuron updates/s, but that’s still a factor of 10^8 away from the brain. that is, the brain can fire 100,000,000 times as many neurons as a digital computer (as it currently stands) in a given time period. how many parallel CPUs would we need to match that? also note that the brain might be quite a bit more powerful than that, since it operates chemically as well as electrically, and the above sort of assumes it operates like a neural net.
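here’s that arithmetic spelled out as a quick python sketch (the 10^6 updates/s figure is my guess from above, not a measurement):

```python
# back-of-the-envelope from the russell & norvig figures above
brain_updates_per_s = 10**11 * 10**3  # 10^11 neurons x ~10^3 firings/s each
cpu_updates_per_s = 10**6             # generous guess for one modern CPU

gap = brain_updates_per_s // cpu_updates_per_s
print(gap)  # 100000000 -> you'd need ~10^8 CPUs running in parallel
```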

that is an awesome amount of computing power. nothing we know about consciousness indicates that we don’t need that power; then again, nothing indicates that we do. i think that’s an interesting debate.

also of interest could be the possibility that massively parallel computation might be necessary for consciousness. the only conscious things we know (ourselves) seem to operate in a massively parallel fashion, but that doesn’t mean that a digital computer could not operate serially and be conscious.

certainly not false! silly and impractical, but not false.

any program that can be written can be modeled as a turing machine, and any general-purpose computer can run a program that models a turing machine. a turing machine has infinite memory available, but no specific computation USES infinite memory.

since you can model any computer using a turing machine, and can model a turing machine with any computer (if you have enough memory, of course, and any specific turing machine computation only ever uses a finite amount), you can model any computer with any computer, in theory.

that’s not a practical thing, of course. try writing a turing machine program: it’s hard, it’s huge, it’s slow. it would be practically impossible to turn anything but a very trivial program into one, but it is absolutely 100% doable in an “it’s not impossible” sort of way.
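to make that concrete, here’s a minimal turing machine simulator in python. the transition table (a toy unary-increment machine) is something i made up for illustration:

```python
def run_tm(tape, transitions, state="start", blank="_"):
    """minimal turing machine. tape is a dict {position: symbol}, so it
    acts as an 'infinite' tape of which only finitely many cells are
    ever touched. transitions maps (state, symbol) -> (state, write, move)."""
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, blank)
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return tape

# toy machine: append a 1 to a block of 1s (unary increment)
trans = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt",  "1", "R"),  # write a 1 at the end, halt
}
print(run_tm({0: "1", 1: "1", 2: "1"}, trans))
# {0: '1', 1: '1', 2: '1', 3: '1'}
```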

Finite state automatons can’t simulate Turing machines, and a Turing machine can’t simulate a Turing machine with an oracle for the halting problem.

Above and beyond that, there are (probably) models of computation which are not polynomial-time reducible to a Turing machine. The brain may well have this property.

well of course there is. it’s not difficult at all (in theory, not in implementation) to model a processor in software. if you have a computer running on 10 processors, you can take an apple IIGS with a large hard drive and write a program in basic that replicates the logic gates of all 10 processors and everything they do. it would be slow, it would be worthless.
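here’s what “replicating the logic gates” looks like in miniature (python rather than IIGS basic, and a half adder rather than a whole processor, but the principle is the same):

```python
def nand(a, b):
    return 0 if (a and b) else 1

# every other gate can be built out of NAND, so modeling NAND
# is enough to model a processor gate by gate
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# a 1-bit half adder made from those gates
def half_adder(a, b):
    return xor_(a, b), and_(a, b)  # (sum, carry)

print(half_adder(1, 1))  # (0, 1)
```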

I think people are misunderstanding me. it is definitely true that anything any computer does can be done on any other computer. people are taking that too literally. I am not saying you can take quake III off your 5GHz intel computer and start playing it on a graphing calculator.

it is, however, true that quake III is a program; no matter how complex, it can do nothing more or less than invoke processor operations. it can add, it can move data, and that’s about it. no matter how complex the game of quake III gets, it is still broken down and run on a computer using simple operations. therefore any other computer can ‘run’ quake III. you could not actually play quake III on just any computer by any means, but you could IN THEORY translate it to run on any computer that exists.

people are taking my argument as too strong! it’s pretty obvious, if you think about it, that the steps of any computer can be followed by any other computer (in some form, translated somehow, or with an emulator built). it can’t be done practically, or probably within the span of a human life, and the run on graph paper would be worthless beyond words to actually do. but impossibility doesn’t come into it at all.

a turing machine can do anything any computer can do. any nontrivial programmable computer can simulate a turing machine. any specific turing machine computation takes less than infinite space (at least any one done to emulate the action of another computer). so obviously any computer can do anything any other computer can (if it has the memory). obviously no one could EVER translate a real program into a turing machine program unless the program was fairly trivial, but it IS POSSIBLE, in THEORY. it won’t be done, because it’s silly and hard, but it CAN be done.

Asynchronous computation is fairly different from synchronous computation. Without a doubt it can be modeled on a synchronous computer, but it’s not as trivial as you seem to think it is.

owlofcreamcheese, are you a computer science student?

speed isn’t the only difference between serial and parallel systems. in parallel systems, the nodes can operate independently of other nodes. in the human brain, 10^11 neurons can fire at the exact same time! it is an entirely different architecture from a digital computer, and a digital computer, though it can simulate a parallel system, can’t operate in a massively parallel manner. at least not with one processor. one processor can model several, yes, but not in parallel. on a given path, all the gates must be reached serially.

also, i’ve had the pleasure of writing ANN classes before, and every time i did, i updated each node serially. i suppose i could’ve spawned a thread or another process for each node, but they would all go to the same digital processor. also, in order to maintain data integrity, i’d have to restrict each piece of data to being accessed by only one node at a time. so it is by no means a massively parallel system, though it digitally simulates one.
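a minimal sketch of the kind of serial update loop i mean (the sigmoid layer and the sizes are made up for illustration):

```python
import math, random

def update_layer(inputs, weights):
    """update every node one after another: one processor, one node
    at a time, even though a brain would fire all of them at once."""
    outputs = []
    for node_weights in weights:  # serial loop over nodes
        activation = sum(w * x for w, x in zip(node_weights, inputs))
        outputs.append(1 / (1 + math.exp(-activation)))  # sigmoid
    return outputs

random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
print(update_layer([0.5, -0.2, 0.9], weights))
```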

This process works if the ten processors are running synchronously: this could be modelled using one processor running ten threads, with control cycling between the threads after each previous thread’s instruction executes. However, if the ten processors are running asynchronously then it’s another beast altogether. You can’t simply use one processor cycling through ten threads, because at any given moment you can’t say which thread takes control next (since simply determining the time at which a thread fires next becomes an intractable problem in and of itself). It wouldn’t work to simply reduce the time-step, either, because you cannot emulate a time-step that is small enough to perfectly model asynchronous behavior (e.g. if you use a time-step of 1 picosecond, then you’re necessarily ignoring time fluctuations that are less than 1 picosecond in length, meaning that you’ve got error that’s going to just keep propagating through your system).
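For contrast, a minimal sketch of the easy, synchronous case (the “processors” here are stand-in Python generators, not real instruction streams; the asynchronous case resists exactly this trick):

```python
def run_synchronous(threads, cycles):
    """Model N lock-step processors on one CPU: control cycles to the
    next thread after each instruction. This only works because the
    interleaving is fixed in advance (the synchronous assumption)."""
    for _ in range(cycles):
        for t in threads:  # deterministic order, every cycle
            next(t)

def counter(name):
    i = 0
    while True:
        i += 1
        print(f"{name}: {i}")
        yield  # one "instruction" per cycle

run_synchronous([counter("cpu0"), counter("cpu1")], cycles=3)
# With asynchronous processors there is no fixed order to cycle
# through, which is exactly the time-step problem described above.
```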

If time and physical location (and various other attributes of our physical existence) of atoms, electrons, molecules, etc. were discrete and not continuous, then our brains and computers would probably (at least it seems logical) just be “things” that exist according to the following rule:

An entity with x number of states, such that given state x1 and input y the result will be state x2.

Which means a computer could indeed simulate a brain, but only if the condition of discreteness is true.
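Stated as code, that rule is just a lookup table over discrete states (a minimal Python sketch; the state and input names are placeholders):

```python
# The rule above, literally: given state x1 and input y, you get x2.
transition = {
    ("x1", "y"): "x2",
    ("x2", "y"): "x1",
}

def step(state, inp):
    return transition[(state, inp)]

state = "x1"
for inp in ["y", "y", "y"]:
    state = step(state, inp)
print(state)  # "x2" after three discrete steps
```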

impossible and way too hard are very different.

you can model 500 processors running asynchronously on one processor by modeling the whole room containing the processors, atom by atom. now that’s ridiculous, and no one is saying you could actually write such a program, but it’s not strictly impossible. just ‘undoable’. nearly everyone who has argued with me has confused impossible with undoable.

of course, running such a simulation would take tons of memory, would run beyond slow, and would be impractical for doing any calculations whatsoever, but that’s not the same as impossible, just worthless.

(and you wouldn’t even need to model it atom by atom, since wires act in a predictable way… but no matter what, if you modeled it atom by atom it would come out right)

and all this “not as trivial as you think” stuff people are saying: oy, you’re missing the point. it’s not really that I think anyone would or COULD ever do these things (memorize such a program, for example), just that no law of physics is preventing them, per se, and it’s sort of a “wouldn’t it be weird if someone DID do that”. I am acting like a lot of things are trivial, such as “downloading the brain program”; obviously that would be a massive feat.

when I say something “can be done” I am saying it in a “the laws of nature don’t prevent it” sort of way, not in a “it’s something that we can do” way, or even a “it’s something that could easily be done” way, or even a “it’s something that the whole human race working for 100 years would be able to do” way. it’s like saying “it’s possible to attach rockets to the moon and fly it into orbit around mars”: it wouldn’t be easy, it would be an insane project of no worth, it would be really crazy to implement, and could even take more resources than exist on earth, but it’s possible in a “no law of physics prevents that from happening” sort of way.

and the other thing people confuse is a program running and a program being useful. if you wrote a program that was a clock, then ran it on a slower computer so it ran slower (say you tied it to processor speed), it would run on the computer you wrote it on AND on the computer that was slower. it would just be a pretty valueless program on the slower computer: it would give no usable information and would be of no use to anyone for any reason. but it wouldn’t say “sorry, can’t run on this computer”. it would run, it would just be a real pointless creation. maybe a brain is time-sensitive, and slowing it down or speeding it up WOULD be like running a clock faster or slower, and would wreck it, but I can’t see how… feels like if I ran my brain twice as fast, it would just seem to me as if everything else was going at half speed…

or… something…
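a sketch of such a clock, where a “second” is tied to a cycle count instead of wall time, so on slower hardware it runs slower instead of refusing to run (the cycle count is made up):

```python
def simulated_clock(cycles_per_second, total_seconds):
    """a clock that counts 'cycles' instead of reading wall time.
    on a machine half as fast, each printed second takes twice as
    long to arrive; the program still runs, it's just less useful."""
    for second in range(1, total_seconds + 1):
        for _ in range(cycles_per_second):  # burn cycles
            pass
        print(f"simulated time: {second}s")

simulated_clock(cycles_per_second=10**7, total_seconds=3)
```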

anyone that wants to reply, think first: “do I mean it’s impossible, or do I mean it’s undoable?”

Quantum uncertainty, which I’d previously brought up, makes this task physically impossible. Not “unlikely” or “intractable,” but impossible.

i do understand the difference between discussing impractical things and things that are impossible even in theory.

massively parallel computation on a single processor is impossible even in theory.

no processor yet designed by man can make 10^14 boolean (or other kinds of) decisions at the exact same time.

you could not perform that task on a serial processor. ever.