You have done a good job of describing the two “schools of thought” (har har) on this topic. I happen to subscribe to #1, because I agree with you that #2 introduces all sorts of bizarre paradoxes.
As mentioned: let’s say #2 is valid and that any given “I” results from a particular state of brain function. What happens, then, if my exact brain state is duplicated? How can two beings possibly experience the same “I”? This would imply the existence of some sort of spooky-action-at-a-distance brain-photon entanglement.
Your “I” is a function of precisely your experiences right now. The only way for beings A & B to have identical “I”s would be for them to exist at the same point in the space-time continuum. This is physically impossible. The “I” seeing through person A could not possibly also see through person B. Thus, person A’s “I” will be different from person B’s “I”. We now conclude #2 is a logical impossibility and return once again to #1.
My problem with this is that it seems to presume a ghost in the machine, an indefinable “something” that is more than material. This seems very much just a rewording of the old concept of the soul. As such, I feel it violates Occam’s Razor. We can seemingly explain most human behavior in terms of brain science, and if strong AI is right we will ultimately (perhaps quite soon) create thinking machines that can do anything we can. So we can’t seem to find or measure this something, this gestalt, and we will probably soon be able to explain everything about ourselves without it. So what does it add? Nothing.
I struggle with this idea too. If you posit it in slightly different terms, though, it seems not to bother people. Think of the idea of parallel universes. People often imagine that there might be countless universes, each containing a copy of “me”. Each may be a little different, but each is still more or less “me”. And each is conscious. Each is a subject. Subjectivity exists in multiple forms. There are clearly other subjects in the universe. The idea that there might in fact be subjects in the universe (or multiverse) who are at once “other” and also “me” (using any definition we can base on material science) is certainly bizarre. I just don’t see how a strictly materialist interpretation of subjectivity can avoid it. I would simply say that all copies of “me” would have the subjective experience of “what it is like to be me”.
The only thing I know for certain is that “I” am “ME”. I know I’m not another “ME” in an alternate universe, because I am here. I think Occam’s Razor applies here, too. I’m not trying to avoid the idea of a soul, but somehow it seems more intuitive than the alternative.
I believe the existence of “ME” is a purely subjective stance, insomuch as there can be NO OTHER “ME”. It’s impossible. There can be countless “OTHERS”, but they are not “ME”, nor will they ever be. Alternate universes included. Maybe there are 13,487,903,285,001 alternate kevsnydes out there in alternate universes, but “I” will never come to know their experiences. Copies are just that, but each with its own “I” subjectivity. There is an ultimate uniqueness about it.
I feel my state of being to be so precious that I would never risk it on any type of teleportation device. Those that come out the other end might think they are the one-true-original, but how can they know they are not? And they’ll try to convince me it’s completely safe. They’ll say things like, “Hey, relax, I was ‘ME’ going in, and I’m ‘ME’ now… what’s changed?” I guess we’ll never really know. But I choose not to tamper with my own existence. Better the Devil you know than the Devil you don’t. Keep yer damn teleportation.
I’ll say again: “I” am the memories in this meat, and little else.
Take these memories and copy them. Plug them into a sensory apparatus. That is as much “ME” as the thing which wakes up after the operation. Consciousness is a red herring here - we need only speak of memory.
Take someone else’s memories, having removed mine, and plug those memories into my eyes and ears. I am someone else.
SentientMeat: The thing that wakes up after the operation will have your memories, but its own sense of awareness. YOUR sense of awareness will not continue.
I can elaborate greatly on the continuity of thought idea (or at least, my idea of it…). Perhaps if there is interest I shall compose an extensive post.
Imagine a stream. In this stream are rocks and formations that result in whirlpools. Water and other particles are continually flowing into and out of the whirlpool such that its constitution is different at every moment. This is just like your brain, which is constantly changing on the atomic level and constantly changing due to external stimuli. Although the whirlpool is never the same from one moment to the next, it does maintain its form. A whirlpool is not a physical construction, but rather a pattern supported by some medium. Similarly, consciousness is a pattern exhibited by the correct physical constructs.
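If it helps, here’s a toy sketch of my own in Python (purely illustrative, nothing in it comes from the actual physics): a fixed-size buffer whose contents turn over completely while the object itself, the “pattern”, persists the whole time.

[code]
from collections import deque

# A toy "whirlpool": a fixed-size pattern through which "water" flows.
# The deque stands in for the pattern; the integers stand in for the
# particles currently passing through it.
whirlpool = deque(range(10), maxlen=10)
original_particles = set(whirlpool)

for t in range(100):
    whirlpool.append(1000 + t)  # new water flows in, the oldest flows out

# After enough turnover, none of the original substance remains...
assert not original_particles & set(whirlpool)
# ...but the pattern (the deque object itself) has existed continuously.
print(whirlpool)
[/code]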
It is quite clear that it would be impossible to have two identical whirlpools. Sure, you could have whirlpools of the same size, same velocity, etc. But no two whirlpools will ever be the same. The only way to have two identical whirlpools would be for them to exist in the same place at the same time, and this of course is impossible. Every whirlpool is an entirely unique pattern. The same goes for human beings; thus we see why it would be impossible for identical “I”s to exist.
Underneath the whirlpool is an accumulation of stones and debris. These are memories. Say we take a snapshot of the riverbed beneath a certain whirlpool and precisely recreate this layout under another whirlpool (or anywhere, for that matter; if the layout is the same, presumably a new whirlpool will form). This second whirlpool (B) will have the same “memories” as the first whirlpool (A). However, B is quite clearly not the same pattern as A. At any given time, each pattern will be different by virtue of its substance. We see once again that memories are not enough to duplicate an “I”.
Death would be the equivalent of removing a stone such that the whirlpool dissolves. The medium is no longer well-formed and the pattern evaporates. The continuity of thought has now ceased. If we replace the stone in its exact position, a new whirlpool will form. This is not the same whirlpool. This is a new “I”. There is no way to reconstruct the original whirlpool short of time travel.
This theory can address issues such as the slow replacement of brain matter with mechanical devices. It is perfectly okay to poke and prod the whirlpool. If done carefully, it may be entirely possible to remove a stone and replace it with a new one without destroying the pattern. We can squash the whirlpool, move it around, etc., so long as we maintain its continuity.
Anyway, that’s how I’ve always thought of it.
A whirlpool in a fluid is formed by the rocks and the water rushing over them. If you could duplicate the “rocks” and the “water” atom by atom you would in fact duplicate the whirlpool. And of course, I would say that during anesthesia and even deep non-REM sleep we have a situation analogous to the stream running dry. The “rocks” (brain structures that represent memory and mind-patterns) remain but the flow has ceased. The whirlpool still exists as potential only. When it rains and the stream comes back the whirlpool reforms in the same place. Is it the same one? We are back to the old problem.
The basic tenet of strong AI is that computers can ultimately be created that think in precisely the same way humans do. These machines will experience subjectivity, the sense of an “I”. A computer system consists of the three parts I spoke of at the beginning of this thread: software (mind), data (memories), hardware (brain). If strong AI is correct, it will be possible to take a snapshot of such an AI that is absolutely precise. Every bit will be in place. Everything that makes up that mind can be specified exactly.

The whirlpool analogy really seems to break down here, since it is obvious that we could take the data and mindware from one computer and put them on another and it would pick up right where it left off. There is also the question of what happens when the operating system (a non-sentient component) decides to move the mind from one part of memory to another. I just can’t buy that such a mind is “dying” countless times because it gets moved about a bit, but that’s what your argument would suggest if AIs and humans have the same type of mind.
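To make that concrete: for today’s decidedly non-sentient programs, snapshotting and moving state is utterly routine. A minimal Python sketch, assuming the “mind” is nothing more than serializable state (the Mind class and the event strings are my own invention, not anything from the strong AI literature):

[code]
import pickle

class Mind:
    """Toy stand-in for an AI's complete state: data plus mindware."""
    def __init__(self):
        self.memories = []
    def experience(self, event):
        self.memories.append(event)

mind = Mind()
mind.experience("woke up on machine A")

# Take an exact snapshot: every bit of state, specified exactly.
snapshot = pickle.dumps(mind)

# "Move" the mind: restore the snapshot (on this machine or any other)
# and it picks up right where it left off.
restored = pickle.loads(snapshot)
restored.experience("woke up on machine B")
print(restored.memories)  # ['woke up on machine A', 'woke up on machine B']
[/code]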
Alternatively, strong AI might be flawed, but this puts us back in either a mystical view with the concept of “soul” or something akin to Roger Penrose’s view, which is still materialist but denies the possibility of strong AI and asserts (as your whirlpool metaphor seems to) that mind involves some kind of quantum spookiness. Penrose fastens on the protein “dimers” in the microtubules as the seat of the self. Needless to say, I don’t buy it myself.
I do believe that computers could indeed be created to think in precisely the same way as humans do. However, I do NOT believe that your particular instance of self awareness could ever be recreated.
I’ll expand on some of your other points tomorrow…I feel like ass right now
You will perhaps be aware of the concept of the Far Edge Party?
You don’t just make one copy; you make dozens, and they all zoom off round the galaxy exploring worlds, scouting for suitable candidates for terraforming and for alien civilisations, that sort of everyday stuff;
eventually all the copies meet up after thousands of years of relativistic flight on the Far Edge of the galaxy;
each copy has been changed so much by eir experiences that ey is a completely different person; the copies might have a good party, and have lots to talk about, but none of the copies has any illusion that there is only one person present.
These far-future people will have developed a new way of looking at the concept of self,
so having a copy made will not be a philosophical wonder, any more than television or the Internet causes much concern to people today.
let me write this out and see if i got what kevsnyde and others have said correctly.
you have the 1st viewpoint where the “I” is a unique entity that remains so despite being perfectly* duplicated. therefore if “I” is duplicated and you have an “I v2.0”, you would now have two separate and distinct entities. the very fact that “I” would be unable to read “I v2.0”’s mind (though he may guess, however accurately) once they are subjected to different external stimuli proves the point.
then you have the 2nd viewpoint where the “I” is not unique, which becomes a perfect* copy or a replicable entity that can be restored to the “I” state at the moment of scanning. therefore if “I” is duplicated and you have an “I v2.0”, you would now have two separate and distinct entities that are basically expendable, for only the “I” (or whatever updated version of it) stored in the duplicator’s databank is important.
what do you think?
* perfectly means everything, including a soul if you wish. even the waters in Trigonal Planar’s excellent example. a perfect copy.
Trigonal Planar makes sense to me. This is my simplified take: I’m put to sleep. While asleep, I’m exactly duplicated, then both copies are woken up. Both SelfA and SelfB will sense a continuation of self, but that is not exchangeable. There is no continuation beyond the point of duplication. If I (SelfA) am killed, I don’t continue in SelfB. The $5M I paid for duplication would be worthless, IMO.
This is how I would back up the self. Take the brain out of the body and hook it up to some computer interface for input/output and for keeping the brain alive. At the same time, the brain is immersed in some kind of leaching solution, so that slowly, like over a period of 5 to 10 years, the volatile, transient molecules that make up the cells of the brain are replaced with non-volatile, inert, more permanent substances that work the same and are in the same organization and arrangement as the original. This is akin to the saying (not sure if it’s true or not) that all the atoms in one’s body are eventually replaced with new ones over a period of 1 or 2 years. The 5 to 10 year leaching period would maintain some sense of continuity of self.
This is not the analogy I would use. Let us say that the whirlpool has a form of self-awareness wherein it can feel what is passing through it: big stones, little stones, bits of debris, etc. REM sleep or anesthesia would be the equivalent of passing pure H[sub]2[/sub]O down the stream, such that the whirlpool is unable to “feel” any kind of stimulus.
Trigonal, under general anaesthetic your whirlpool is as good as nonexistent, agreed?
Let us first consider a normal, everyday surgical operation: An awareness goes to sleep with a unique set of memories. Hours later a thing with that set of memories wakes up and starts making new memories. Presumably, this is the “true you” you posit.
Now, consider the case when, somehow, those memories were copied into another brain during the operation. Hours later TWO things with that set of memories wake up and start making new, DIFFERENT memories.
Question: which one do you wake up as?
Answer: This is a trick question. There will be two things which think they’re the “real you”, when in actuality there is no such thing as the “real you”.
You are confusing sensory input with consciousness. Anesthesia and sleep remove consciousness, not sensory input. The phone keeps ringing, but now no one is answering. One also does not lose consciousness because sensory input has ceased. Quite the contrary: put someone in a sensory deprivation tank and their mind starts to make up its own sensory input!
I read about the Far Edge Party on the Orion’s Arm site. My congratulations on an excellent site, by the way. It was you guys who turned me on to Iain Banks and Greg Egan.
I’m just trying to follow the materialist viewpoint where it leads and it seems to lead to this. If p[sub]1[/sub] … p[sub]n[/sub] combine to make a greater entity, call it P (big P), and we duplicate p[sub]1[/sub] … p[sub]n[/sub] exactly, then P should also arise from this duplicate. It makes no sense to claim that “that’s not the real P”. It is the real P, as far as science can tell.
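In programming terms (my own analogy, not anything from the thread so far): an exact duplicate is equal to the original without being the same instance, and no one would call it a fake.

[code]
from dataclasses import dataclass

@dataclass
class P:
    # the parts p1 ... pn, reduced to a single field for brevity
    parts: tuple

original = P(parts=(1, 2, 3))
duplicate = P(parts=(1, 2, 3))  # duplicated exactly, part for part

print(original == duplicate)  # True:  it really is "the real P"
print(original is duplicate)  # False: yet it is a distinct instance
[/code]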
But I would take issue with the idea that “I v2.0” is “expendable”. It is not. It is a living mind with all the rights of any other sentient being.
As I said earlier, this is how I would want to be uploaded myself. I’m not willing to bet my life on my philosophical speculations. Incidentally, I think this form of uploading will precede the destructive copying we’ve been arguing over. Getting nanobots to read the state of a few neurons and then take their place will be easier than trying to disassemble and read an entire brain in a relative instant. I don’t think there will be any need to remove the brain. You’ll get an injection of saline with the basic nanobots floating invisibly in it. You won’t feel anything initially, but over time your thoughts will get faster and you might notice subjective time slowing down.
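Here’s a toy Python model of that gradual replacement idea (purely my own sketch; Neuron and Nanobot are hypothetical names): each unit is swapped out one at a time while the “network” keeps producing exactly the same output throughout, which is the whole point of doing it non-destructively.

[code]
class Neuron:
    def __init__(self, weight):
        self.weight = weight
    def fire(self, x):
        return self.weight * x

class Nanobot(Neuron):
    """Reads a neuron's state, then takes its place, behaving identically."""
    @classmethod
    def replace(cls, neuron):
        return cls(neuron.weight)

network = [Neuron(w) for w in (0.5, 1.5, 2.0)]
before = sum(n.fire(1.0) for n in network)

for i in range(len(network)):  # one unit at a time, never all at once
    network[i] = Nanobot.replace(network[i])
    # continuity check: behavior is unchanged at every step
    assert sum(n.fire(1.0) for n in network) == before

print(all(isinstance(n, Nanobot) for n in network))  # True: fully replaced
[/code]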
And that’s not necessarily a good thing. What if you can’t control it? (This was touched on in that novel I mentioned earlier.)
Say you manage to increase your mental speed a hundredfold: you start to get all sorts of nasty side effects… that 6-hour flight over the Atlantic now takes 25 subjective days… that’s a lot of time to kill…
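(Checking the arithmetic: 6 hours × 100 = 600 subjective hours, and 600 ÷ 24 = 25 subjective days, so the figure holds up.)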
Exactly. In Egan’s books you do control it, though. He speaks of “rushing”, where you deliberately slow your mental processing to catch up with real-world events. I can’t really foresee a world where you wouldn’t be able to do this sort of thing if you got bored. To use current compsci terms, you would use the Unix “nice” program to lower the priority at which your mind runs.
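For what it’s worth, the present-day version of that trick really is this simple. A sketch using Python’s os.nice, which exists today on Unix systems (applying it to a mind is, of course, pure speculation on my part):

[code]
import os

# On Unix, raising a process's "nice" value lowers its scheduling
# priority relative to everything else: the same computation, just
# deprioritized so the rest of the world "catches up".
print("niceness before:", os.nice(0))   # os.nice(0) reports the current value
print("niceness after:", os.nice(19))   # raise by 19, the usual maximum
[/code]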