SingleDad asks if a computer can be made arbitrarily complex without becoming sentient. I’ll answer yes, with a few caveats about how the computer is defined.
If we take an AND gate, it’s pretty obvious that it’s not sentient. It’s not merely short of the mark; it’s an absolute zero on the sentience meter. Throwing a trillion AND gates into a box wouldn’t make it a trillion times more sentient, because anything times zero is zero.
So the complexity has to come in the wiring. But you could take that same AND gate and connect it to itself with miles of wire, and it still would be no closer to sentience.
Not every kind of complexity increases the usefulness of the system in every way.
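To make that concrete, here’s a throwaway Python sketch (the setup and names are mine, nothing standard): chain together as many AND gates as you like, wired however you like, and the box still only ever computes the AND of its inputs.

    def and_gate(a, b):
        return a and b

    def box_of_and_gates(inputs, n_gates=1000000):
        # Feed each gate's output into the next gate, cycling through the inputs.
        signal = True
        for i in range(n_gates):
            signal = and_gate(signal, inputs[i % len(inputs)])
        return signal

    # However many gates you throw in, the box just answers "are all inputs true?"
    print(box_of_and_gates([True, True, True]))   # True
    print(box_of_and_gates([True, False, True]))  # False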
Next, assume a huge conglomeration of logic gates, connected (surprise) exactly like a Pentium III processor. Sentience level: zero.
Now take a million of these and wire them in parallel, so that all incoming data goes identically to all processors and all output is voted on.
You’ve got a machine more than a million times more complex, but with absolutely no more processing power. Sure, it could withstand 999,999 processor failures, but it wouldn’t do any calculation faster or better, and if the chip design were flawed, all million would make identical mistakes.
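Here’s a rough Python sketch of that voting arrangement (the “processor” is a made-up stand-in, not real chip behaviour): a million identical copies voting give you exactly the answer one copy gives, including any mistake baked into the shared design.

    def processor(x):
        # Stand-in for whatever the chip computes; every copy is identical.
        return x * 2

    def voted_result(x, copies=1000000):
        # Every copy sees the same input; the majority answer wins.
        votes = {}
        for _ in range(copies):
            answer = processor(x)
            votes[answer] = votes.get(answer, 0) + 1
        return max(votes, key=votes.get)

    print(voted_result(21))                   # 42, same as one processor, just slower
    print(voted_result(21) == processor(21))  # True, no matter how many copies vote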
Take a theoretical processor that’s identical to the P3, yet a trillion times faster. It’s obvious that it’s going to do exactly what the Pentium III does, just faster. Sentience level: zero, but reached more quickly.
Similar increases in complexity won’t yield any different behaviour; they’ll just make the machine faster or more redundant.
I propose that there’s no difference between a general-purpose computer with a 1 MHz processor and one with a processor so fast the speed can’t be measured. They’ll both run the same programs; one will just finish sooner.
Hardware is nearly irrelevant, except for speed. What does matter is the program.
A simple program that takes digital input (not necessarily binary, just digital: easily quantified input) will always produce the same answers, and if written properly, will do so on any computer, given the same input.
Starting from a known state, with known input, and a known program, you’ll get the same results.
This doesn’t mean, to me, that computers can never be sentient; it just means that at some level, they’ll be predictable. If you back up an AI, record its input, restore the backup to the exact point it was taken, and then replay the input, you should get identical answers.
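A toy Python sketch of that backup-and-replay idea (the “Machine” here is just a stand-in I made up, not a claim about how a real AI would be built):

    import copy

    class Machine:
        def __init__(self):
            self.state = 0                  # the stored "memories"

        def step(self, value):
            # Behaviour depends only on current state and current input.
            self.state = (self.state * 31 + value) % 1000003
            return self.state

    m = Machine()
    for v in [3, 1, 4, 1, 5]:
        m.step(v)                           # some history before the backup

    backup = copy.deepcopy(m)               # back up the "AI" at this exact point
    recorded_input = [9, 2, 6, 5, 3]

    run1 = [m.step(v) for v in recorded_input]
    restored = copy.deepcopy(backup)        # restore the backup...
    run2 = [restored.step(v) for v in recorded_input]   # ...and replay the recorded input

    print(run1 == run2)   # True: known state, known program, known input, same results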
I think Turing showed that all general-purpose computers are essentially equivalent, and that beyond storage limitations, any problem you could solve on one could be solved on another, though it might take so long as to be irrelevant.
Thus, restoring the AI onto a different computer, as long as the program was in a format the computer could run and the same inputs were available, would result in the same output (i.e., thoughts, actions, etc.).
If the information the AI receives is sufficiently complex, you’ll get a butterfly effect: the chaos you can’t predict in the input leaves the machine in a different state than you would have guessed, because any sufficiently complex view of the world is somewhat chaotic. At that point, the fastest way to predict what the AI will do is to run the AI.
But not all chaotic functions are AIs. A Mandelbrot calculation or a Photoshop filter may be so sensitive to its input conditions that the output can never be duplicated by feeding the “same” input through an analog input device.
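The logistic map (my choice of illustration, not something from the discussion) shows that kind of sensitivity in a few lines of Python: a deterministic, utterly non-sentient calculation whose result is unrecognisable after a one-in-a-billion nudge to the input.

    def logistic_orbit(x0, r=3.9, steps=50):
        # Iterate the logistic map x -> r*x*(1-x), a standard chaotic example.
        x = x0
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    a = logistic_orbit(0.200000000)
    b = logistic_orbit(0.200000001)   # input differs by one part in a billion
    print(a, b)                       # the two results bear no resemblance to each other
    # Exactly replayable from the same digital input, but hopeless to reproduce
    # from an analog source that can never hit precisely the same value.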
But I think I’ve shown my reasons for believing that hardware complexity, as long as the hardware is general purpose and requires a program, is irrelevant. It’ll just sit and wait faster for the same program to be run. Software complexity, in the same way, is irrelevant. A 10-line program to add two numbers isn’t an AI. I could write a thousand-line program to do the same thing, and it wouldn’t be an AI either. I could even write a program to generate an arbitrarily long program for the sole purpose of adding two numbers, and neither program would be an AI.
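If it helps, here’s the joke version of that last point in Python: a program that writes an arbitrarily long program whose only talent is adding two numbers. Ten thousand lines, zero intelligence.

    def generate_adder_source(padding_lines=10000):
        # Emit a valid Python function, padded to whatever length you like.
        lines = ["def add(a, b):"]
        lines += ["    temp_%d = %d  # filler, does nothing useful" % (i, i)
                  for i in range(padding_lines)]
        lines.append("    return a + b")
        return "\n".join(lines)

    namespace = {}
    exec(generate_adder_source(), namespace)  # run the generated ~10,000-line program
    print(namespace["add"](2, 3))             # 5, exactly what the 10-line version gives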
An AI would have to be able to make decisions based on input and stored ‘memories’. I don’t know what form this program would take (or I’d write it myself), but it would depend on the design of the program, not merely the arbitrary complexity of the hardware or the software.
As a side note to the guy who said inorganic life wouldn’t be life…
What if we eventually map a human brain perfectly and then build a copy, but with silicon neurons instead of flesh ones? If we initialize it to the same state the human was in when we copied the brain, shouldn’t the same thoughts occur? Is there a special property of flesh? What is it? Why don’t you believe we’ll ever duplicate it?
As a comment on the idea that a machine will never be able to duplicate itself without a human-supported infrastructure…
This idea is supported by the example of a universal fabricator. The machine consists of the fabricator, a body (whatever is required to support the fabricator), and a book of blueprints. The machine reads the blueprints, feeds raw materials into the fabricator, and takes out the resulting identical machine. It then refers to a second, smaller book of blueprints, which it uses to create the first book, and a third, smaller one to create the second, ad infinitum, because every step requires more instructions, which can only be duplicated with another step.
The problem with this argument is that it assumes the blueprints can only be read as fabricator instructions. If the blueprints were printed text that the machine scanned for instructions, then they could end with instructions to build a photocopier and copy the instruction booklet, as long as the machine could hold enough of the instructions in its ‘head’ to be without the book for a few moments. This copy of the book would then be affixed to the new machine.
The flaw in the original argument is that it assumes all instructions have to be dealt with in the same way: that you’d have to ‘fabricate’ the instruction booklet, which would require more instructions, and so on.
If you designed the machine properly, it could even use error checking to reduce the possibility of copying errors, by reading the document and printing a fresh copy rather than merely photocopying it.
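The software version of that blueprint loophole is a quine, a program whose output is its own source code (my analogy, not something from the original argument): the “blueprints” (the string s below) contain, as data, enough to reproduce the blueprints, so no second, smaller book is ever needed.

    # Run it and it prints the two code lines below this comment, character for character.
    s = 's = %r\nprint(s %% s)'
    print(s % s)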