AI: the man with two souls

Say we develop a computer program that has AI and passes all the tests. You can talk to it and say “darn, it’s a person,” etc., etc. To make it even clearer, imagine we just plug a USB cable into some guy’s brain and copy the algorithm an actual brain uses (using high tech beyond Star Trek technology, of course).

As a program, it must be an algorithm. In real life I imagine it would have to be a really complex, billion-step one, but imagine I am really, really smart (big imagining there).

Since I am really, really smart (and as it turns out, in this future world the AI algorithm wasn’t THAT long, only 10,000 lines or so that happen to be really clever), I devote my life to memorizing it, and do a good job (plus I tattoo some of it on my arms, to help me remember the tricky bits). Since it’s just a list of things to do in a predefined order, I can run through it mentally, right? It’s a rule that any algorithm can be carried out by a human, although for most of them a computer is faster at it.
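(For a sense of what “running an algorithm by hand” means on a small scale, here’s a tiny classic example in C. It has nothing to do with brains, and the numbers are just made up for illustration, but it’s exactly the kind of step list a person can grind through with pencil and paper:)

#include <stdio.h>

/* Euclid's algorithm: a short, mechanical list of steps that a person
   can execute with pencil and paper just as well as a CPU can. */
static int gcd(int a, int b) {
    while (b != 0) {          /* repeat until the remainder is zero */
        int r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main(void) {
    printf("%d\n", gcd(1071, 462));   /* prints 21 */
    return 0;
}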

Running through the steps in my brain, what am I doing? I am creating an intelligence, right?

Does it have rights? It’s a separate mind from mine. Am I morally compelled to keep crunching the algorithm, since stopping would be murder? Since it’s just a copy of the guy’s brain, it’s got an awareness, like the guy did, so it’s a real person in its own right.

What if I wrote the algorithm down in a book and had different people read it as the previous readers got bored of it? (I imagine the algorithm is self-altering somehow, so they get a pen to cross stuff out and write new stuff when needed.) The book would be aware, correct? Not just an intelligence, but actually conscious and aware. A superior life form, even: immortal, teleportable, and able to reproduce just by recruiting new readers.

Does this seem as weird to other people as it does to me? Whoa… just a strange idea…

Methinks the reasoning is flawed, but I can’t put my finger on it. Still, if we wrote someone’s entire DNA code down in a book, the book doesn’t magically become “aware.” It’s just a list of information.

It’s the same with that algorithm of yours. Without any dynamics, nothing with which to interact, it’s just a list of information, too.

We (self-aware beings) are not just a string of code. Even a live cell from us, containing the DNA, is not “us.” All together, these things become “us.” Though how, I don’t know.

IMHO, that’s how it would be with AI, too.

Well, that’s why I am saying you have to “run” the code in your head. If the next step in the book is “add five to the number,” you do that… you add five.

It’s not writing down the DNA, either; it’s writing down the brain algorithm. Because, unless a soul of some sort exists, there MUST by definition be an algorithm that the brain works on. It must be some sort of computer program that can be abstracted into a list of “do this if this,” “do that if you don’t detect doo do doo.”

If the brain IS just a computer that runs an awareness program (no matter what the implementation… since it’s definitely un-computer-like), then someone running the algorithm on paper would also create the awareness… just very, very slowly.

If a soul doesn’t exist, and the mind is just a program… why should it matter what it’s running on? Why would it change if it’s running on a brain, or a computer, or a stack of graph paper and a bunch of Chinese girls figuring it out from a book?

I’m having trouble suspending my disbelief here. The human brain has billions of neurons and trillions of connections between them, so any program capable of duplicating human thought is going to have to be trillions of lines long. There’s no way you could memorize it, let alone store its running values in your head.

Ooh, ooh! I get to do this:

:dubious:

(in response to the OP and his follow up…)

Um, what you’d be doing is running a simulation – somewhat more complex than say “Hmmm, what would so and so do in this situation?” What’s more, an intelligence is made up of more than algorithms – there’s also the data associated with that individual (experiences, sensory memories, and so on).

Now your question is not without merit. Consider if you had an artificial intelligence based on something similar to our current computer technology. You could copy that entity quite easily and end up with two almost indistinguishable individuals. Can you then turn one off with impunity? Can you turn both off and argue that as long as you have the backup store then you have done no real harm to that entity? How do you attach ownership rights to an intelligence that you can copy at will? That is, if one copy owns a property and you copy it, which copy is then the owner of that property?

This kind of ethical issue makes today’s rights-management issues look trivial by comparison.

You also completely misunderstand the concept of a neural network. There would be no way to describe the function of a brain in “lines of code,” as it is far too non-linear.

And yet, neural net simulators are run as sequential programs. The number of connections is huge, but all you need is a graph data structure and enough memory, and you’re good.
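Something like this toy sketch is all a sequential simulator has to do. The net here is a made-up four-neuron example (weights and names invented purely for illustration), nothing remotely brain-scale:

#include <stdio.h>
#include <math.h>

#define N 4   /* toy network: 4 neurons, fully connected */

/* One synchronous update step: each neuron's new activation is a
   squashed weighted sum of every neuron's old activation.
   w is the adjacency/weight matrix: w[i][j] is the connection
   strength from neuron j to neuron i. */
static void step(double w[N][N], const double in[N], double out[N]) {
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += w[i][j] * in[j];
        out[i] = tanh(sum);               /* squashing nonlinearity */
    }
}

int main(void) {
    double w[N][N] = {{0, 0.5, -0.5, 0.1}, {0.2, 0, 0.3, 0},
                      {0, -0.4, 0, 0.6}, {0.1, 0.1, 0.1, 0}};
    double a[N] = {1, 0, 0, 0}, b[N];
    step(w, a, b);                        /* one strictly sequential pass */
    for (int i = 0; i < N; i++)
        printf("%f\n", b[i]);
    return 0;
}

Scaling that loop up to billions of nodes is a memory and speed problem, not a conceptual one.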

There’s not really any reason that a mind must be a soul or an algorithm. It could just as easily be an entirely deterministic series of events. Would you say a rock rolling down a hill and falling on a log is an algorithm?

This doesn’t really invalidate your points – there’s still the possibility (rather remote) of a human-run ‘brain simulation’, if only one that operates on a much slower time scale than any human now. Or you could argue at what level consciousness is achieved. What if you created the mind of a bird, or a cat? What about a talking cat? What about a talking cat that really spells ‘dog’?

But more to the OP, on what basis do you claim that ‘ceasing the algorithm’ constitutes murder? I wouldn’t call it murder any more than giving someone a sleeping pill is murder. Willful destruction of the program might be closer to murder, in my view (assuming the program is aware & alive).

The thing to realize is that if such a construct is in fact aware, it’s not in the book alone, but partly in the minds of the people processing it - the state of whatever the algorithm is working on.
If you haven’t read a lot on the subject of AI and brains, I’d recommend The Mind’s I, a collection edited by Daniel C. Dennett and Douglas Hofstadter. Though rather old by now, it includes classics like the Turing test, the Chinese Room, Dawkins’ coining of ‘meme’, and a number of stories especially about the brain as program.

The key word there is simulator. So far we can only approximate how we think a neural net works with a linear processor and programming. Even if we do find better ways to mimic the mechanics of the brain, that doesn’t get us to how the mind/soul works. We can make machines that make decisions based on input, but can we yet make a machine that can have a true ethical dilemma? If we do, how will we know? A Turing test is inadequate, as people are very easy to fool.

Well, first thing: it isn’t a choice between algorithmic OR deterministic. An algorithm is just a way to describe something that has more than one way it can go (or even something that can only go one way). All algorithms are deterministic.

And wouldn’t you agree that, in theory at least, it should be possible to take a brain and examine each and every cell and figure out whether it is on or off? In practice that would be a LOT of tiny little probes, but in theory there is nothing inherently preventing it.

And knowing the on/off state of every cell (call it brain state X), shouldn’t it be possible to know what the next state will be? (Also knowing the chemical conditions and whatever else turns out to be important.)

If the entire brain is in state X and goes next to state Y, isn’t it reasonable to say (unless a soul exists) that if you put the brain back in state X exactly, it will again go to state Y? Deterministic by definition. Shouldn’t every setup of the brain (neurons being in one state, chemicals being the same, whatever else) have one and only one ‘next state’?
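(In computer terms, all I’m claiming is that the brain’s update is a pure function of its state. A toy sketch of what I mean, with a single number standing in for the whole brain state and a made-up transition rule, purely for illustration:)

#include <stdio.h>
#include <stdint.h>

/* Toy "brain state": one 64-bit number standing in for the full
   description of every neuron and chemical level. */
typedef uint64_t state_t;

/* A pure transition function: the same state X always yields the same Y. */
static state_t next_state(state_t x) {
    return x * 6364136223846793005ULL + 1442695040888963407ULL;  /* arbitrary deterministic rule */
}

int main(void) {
    state_t x = 12345;
    printf("%llu\n", (unsigned long long)next_state(x));   /* same result   */
    printf("%llu\n", (unsigned long long)next_state(x));   /* every time    */
    return 0;
}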

Isn’t the brain just a computer, complex software running on complex hardware? No matter how complex it is, Church’s thesis says that anything one computer can do can be done by any other in some way (with enough memory and enough time; it doesn’t REALLY say exactly that, but the upshot is that basically any general-purpose computer can simulate any other computer).

Because of that, if the brain runs on the laws of physics as we know them, it SHOULD be simulatable by some high-tech computer, and a program could be written to do it, even if it has to simulate each and every atom.

It can IN THEORY be run in your head. Of course such a program would be billions of lines long and beyond any sort of crazy limit to memorize. It’s practically impossible for someone to memorize it, but that doesn’t erase the MEANING of it happening. What if you did write it in a book and set up a billion people running it? Say we got all of India to agree to spend all their time running the program in their heads.

Then India, as a country, would be aware! It would have a mind identical to a human mind; it would have the exact same level of consciousness and awareness that you have! But the country would have it! The country of India would be aware! (Of course it would be really slow, but how does that matter? If I slowed down your mind so you lived 10,000 years with the same amount of thinking, wouldn’t you just perceive things as going really fast?)

Or what if the algorithm really WASN’T that complex? I mean, look at a newborn, who must have all the ‘programming’ to eventually become a rocket scientist. A newborn doesn’t do anything all that special, just wiggle a lot and poop. Maybe the algorithm is only so great because of its data-collecting skills? What if the whole human-mind algorithm really WAS only 10,000 steps or so, just a lot of cleverness for manipulating a life’s worth of stored data (and storing more)? Then I COULD memorize it if I tried real hard (people memorize the Bible, after all), and I could slowly recite it and run it in my mind, and in running it, it WOULD be a human mind, run inside my mind, wouldn’t it?

If I ran the program on a computer it would be, right? Or on a brain? (If a human mind running on a human brain isn’t aware, then yeah, I guess I could be wrong.)

Anyway, weird thing to think about, eh? (And yes, I realized from the start all the tons of impracticalities and impossibilities of actually doing this, but I also realize that doesn’t take away the weirdness of it; it just means no one will ever get a chance to try.)

And as for murder: it would be like sleeping pills if I started again (less, even; it would be like the person ceased to exist and then restarted exactly where they had stopped). But what if I never, ever started again, or started over? Then wouldn’t it be murder, the termination of a sentient being?

Let’s say we invent little nanomolecular replacements for the human brain cell. Now let’s say that we carve away your brain a bit at a time and replace it with our little computerized brain cells. At any point would ‘you’ stop being ‘you’?

Now assume that as I’m sculpting away your brain I make TWO copies. You stay conscious the whole time, but now there are two of you. Which one has the right to life? Both? What would the experience feel like?

A few things here.

For one, you seem to be confusing a computer program with an executable binary file. A computer program is a series of commands that tells some interpreter how to behave on certain hardware given certain stimuli. An executable binary is a series of bits, an instantiation of that program, one exact interpretation of it: a large series of 0s and 1s, corresponding to trues and falses. The list of 0s and 1s for a brain program would be quite a bit larger than the source code you feed to whatever translator you use (I use g++). And the properties of the hardware, and what it does with those various 0s and 1s, are also very important.

Now, bearing that in mind, you should also realize that a program by itself (the code alone) need by no means be deterministic. Consider:

#include <stdio.h>
#define SOME_LARGE_NUMBER 1000   /* arbitrary threshold */

int main(void) {
    int x = 0;
    FILE *some_file = stdin;              /* some_file: wherever the input comes from */
    fscanf(some_file, "%d", &x);          /* note &x: fscanf needs the variable's address */
    if (x > SOME_LARGE_NUMBER)
        printf("that's too damn big.\n");
    else
        printf("weakling!\n");
    return 0;
}

That bit of code could print either message; you have no way of knowing which just by looking at the code. So now consider the human brain as a computer running an instantiation of some program. The input streams open to that program would themselves take quite a few more than 10k lines just to open. So there could be any number of things that happen to brain/computer X given stimulus A.

You said that if a brain in state X once went to state Y, then a brain in state X must always go to state Y. I say that’s not true in the least. A brain in state X must always go to Y only given the same stimuli it received when it was in state X and went to state Y. That means that in order for that to happen, the world must be in exactly the same state it was, or something very tricky must have occurred.

Now, let’s consider the copied AI. Say the AI was in state Q when it was copied. The original received stimulus set S1 and went to state P. In order for copy C to also go to state P, it would have to receive the exact same stimulus set, or some very tricky and improbable thing must occur. I believe neither of those things is likely, so just like twins with the same DNA turn into different individuals, those two AIs would no longer be the same thing.
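In code, the difference between the two views is just whether the stimulus is an argument to the transition or not. A toy sketch (the rule and the numbers are invented purely for illustration):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t state_t;

/* The next state depends on BOTH the current state and the external
   stimulus, not on the state alone. */
static state_t next_state(state_t x, int stimulus) {
    return x * 31 + (unsigned)stimulus;     /* arbitrary deterministic rule */
}

int main(void) {
    state_t original = 1000, copy = 1000;   /* both start in "state Q"         */
    original = next_state(original, 7);     /* original gets stimulus set S1   */
    copy     = next_state(copy, 42);        /* copy gets different stimuli     */
    printf("%llu %llu\n",
           (unsigned long long)original,
           (unsigned long long)copy);       /* they have already diverged      */
    return 0;
}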

And now let’s think about the man who will try to run an instantiation of the brain program. First of all, following the code isn’t likely to yield the correct results; the interpreter has to be very particular. So let’s pretend that the man actually runs an instantiation of the program in his brain. He might then actually have two brain programs running, both receiving the exact same stimuli. Also, if a brain is to run another program, that program takes up processing power, so the brain can’t run its normal program at full capacity. So at least one of the processes will run in a hindered fashion, unless the second program were an exact copy of the first, in which case, receiving the exact same stimuli, it would perform in exactly the same way and would be indistinguishable.

Now, if we had two computers, each with the computing power of the brain, and each running a different brain program, it would be interesting to see how they interacted if they were given full access to each other’s programs and data. But I don’t know how one would go about making that connection. Also, if you weren’t already aware, the entire Earth (not just India) is a computer, being run by rats that exist as much more complex beings in a different dimension, all trying to find out the great question.

Lastly, I wanted to make a bit of a nitpick. An algorithm and a program are two completely different things. An algorithm is a part of a computer program that can be proved to converge to the “correct” output, given proper input. A program makes no such claims, and I very highly doubt that any human brain operates algorithmically.

Anyway, sorry for the huge post, but I’ll summarize by saying I think your original post has no grounds to stand on, though it does bring up some interesting points about the nature of consciousness, such as the influence of the medium, and the effect of two consciousnesses becoming fully aware of each other. As far as murder goes, if taking the life of something conscious (I think it takes much more than awareness to make something human) is immoral, then shutting off a computer program, or an exact copy of that program, that is said to be conscious is immoral.

Oh, and lastly (I mean it this time), the Turing test is the only means we will ever have for judging the intelligence of anything other than ourselves. It’s the same test we apply daily when we see other people and believe that they are intelligent.

Okay, that was a poor choice of words on my part. And the point is understood; as you said, there could be a ‘brain simulator’ of each state. What I meant was that there is no requirement that the brain ‘re-enter’ states; it may progress successively from each unique state to the next, which I would distinguish from an algorithm. Since I don’t even subscribe to this theory and it’s fairly unlikely anyway, I’ll just leave it at that.

Minor nitpick: The brain is not a digital computer (neurons & neural pathways are not necessarily fully on or off). It is barely possible, though unprovable, that it is chemically deterministic given identical real-world stimuli.

Given what you’ve said, I strongly urge you to read the short stories The Story of a Brain by Arnold Zuboff and A Conversation with Einstein’s Brain by Douglas Hofstadter - which describes exactly the sort of brain-book you’re talking about. The idea of the ‘duplicate brain’ as Sam Stone described is partly addressed in Daniel C. Dennett’s Where Am I?.

I’m really not trying to shamelessly advertise (especially as I’m not a great fan of the editors), so I won’t reiterate that there is one book that contains all of those stories.

Don’t forget… was it Mind Games? Mind something. Hans Moravec.

Now, bearing that in mind, you should also realize that a program by itself (the code alone) need by no means be deterministic. Consider:

#include <stdio.h>
#define SOME_LARGE_NUMBER 1000   /* arbitrary threshold */

int main(void) {
    int x = 0;
    FILE *some_file = stdin;              /* some_file: wherever the input comes from */
    fscanf(some_file, "%d", &x);          /* note &x: fscanf needs the variable's address */
    if (x > SOME_LARGE_NUMBER)
        printf("that's too damn big.\n");
    else
        printf("weakling!\n");
    return 0;
}

But that IS deterministic for any data fed into it: if you know what x is, you can say what it will output with 100% absolute certainty.

And Church’s thesis does imply that anything that is a universal computer can run any other program (although it says nothing about speed or efficiency). One could run Windows 98 with a pile of sticks used to keep track of the registers! The output would be insane and would take years to decode, but it WOULD work.

All I am saying is that if a computer can have awareness in any manner, then it’s possible to run it from a book with a stack of graph paper. There exists no program that can only be run on a computer. No matter WHAT happens on the computer, it takes an identifiable number of steps (even if you have to take a microscope and look at the steps the CPU is taking).

I’m obviously not seriously saying anyone could really, literally run another mind within his mind, but unless souls exist or something crazy is going on, a brain is simply a computer that happens to be awesomely more complex than your computer. And it can be shown that any computer can do anything any other computer can, although no similarity in speed is assured. Different hardware (other than the amount of memory) can’t REALLY do anything but speed processes up. You can emulate any computer with any other computer if you have the memory, even if it’s REALLY, REALLY slow. And a pen and a book and some graph paper and a guy IS a computer.

And if the program is aware, why should it matter what it is run on? Whether it’s run in flesh or metal or paper?

It’s also a well-known result of computer science that any computer can simulate any other computer with enough memory, and certainly a digital computer can simulate an analog computer; it’s just that storing a ‘5’ takes one slot in an analog computer, while a digital computer has to store it across a few slots.

The basic argument that I keep bringing up, which comes from Church’s thesis, in super simple form:

A Turing machine is the most powerful computer that is possible, ever; no matter what, you can never build a computer that can do more than it. No matter what alien technology you use or how smart you get, you will never invent something that can calculate anything a Turing machine cannot. Any other computer can be represented on a Turing machine.

Since you can model a Turing machine on a real-world computer with enough memory, and a Turing machine can emulate any computer imaginable, you can model any computer on any other computer (with enough memory; a Turing machine has an infinite tape, so you can only model a partial one, but since no individual computation USES an infinite amount of tape, there is always a point of “enough” storage for any particular emulation).
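Here’s how small “model a Turing machine on a real-world computer” can be. This is only a toy sketch: a finite array stands in for the infinite tape, and the hard-coded machine (it just adds one to a unary number) is invented purely for illustration:

#include <stdio.h>
#include <string.h>

#define TAPE_LEN 64   /* finite tape standing in for the infinite one */

/* One transition rule: in state `state` reading `read`, write `write`,
   move the head (+1 right / -1 left), and enter state `next`. */
struct rule { int state; char read; char write; int move; int next; };

int main(void) {
    /* Toy machine: in state 0, walk right over '1's; on hitting a blank,
       write a '1' and halt (state -1). It adds one to a unary number. */
    struct rule rules[] = {
        {0, '1', '1', +1,  0},
        {0, '_', '1', +1, -1},
    };
    char tape[TAPE_LEN];
    memset(tape, '_', TAPE_LEN);
    memcpy(tape, "111", 3);                 /* input: unary 3 */

    int state = 0, head = 0;
    while (state != -1) {                   /* -1 is the halting state */
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
            if (rules[i].state == state && rules[i].read == tape[head]) {
                tape[head] = rules[i].write;
                head += rules[i].move;
                state = rules[i].next;
                break;
            }
        }
    }
    printf("%.8s\n", tape);                 /* prints 1111____ : unary 4 */
    return 0;
}

Everything interesting lives in the rule table; the loop that runs it stays the same for any machine, which is the whole point.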

Err… just trust me on that part, or better yet, look it up yourself. I spend far too little time on these posts sometimes and try to dumb things down without doing a very good job of it at all… so it’s just me that sounds dumb…

Could you be a bit more clear about what you mean by ‘soul’?

That’s a bit of an over-simplification, don’t you think? Maybe almost to the point that it’s false?

AFAIK there does not exist a process (or even a theory) for translating a massively asynchronous computational system (like the human brain) into a synchronous equivalent (like a digital computer, or even an artificial neural network). (If there is a theory for this, then please let me know where I can find out more on it, as asynchronous computing interests me a great deal.)

It is worth noting that a Turing machine cannot exist IRL, since it fails that whole “infinite memory” requirement. With finite memory (i.e., with a linear bounded automaton) you can’t recognize an unrestricted grammar – you’re restricted to context-sensitive grammars. So, while a Turing machine may be the most powerful computer possible, it is not the most powerful computer in existence – that award probably goes to the human brain.

Finally, there’s an issue with quantum uncertainty (I don’t think this has been brought up yet, ignore this paragraph if it has) with your “bifurcated brain” hypothetical. It is fundamentally impossible for Brains A and B to be constructed identically (meaning “identical hardware configurations” and “identical quantum states”). While quantum uncertainty has little effect on a digital computer (since it takes significant error for a bit’s condition to change from on to off or vice-versa), a brain is far more chaotic in its workings and much more susceptible to minute error.