OR#2 would be a different entity, because it consists of elements removed at different times. These different elements would need to reintegrate to form a new consciousness; the later parts of it would include memories that the earlier parts do not include. It might not be possible for neurons extracted at different times to reintegrate into a whole, viable mind again, but if they managed it they would be far from identical to the original.
I agree that it would be a different entity, and the incremental reintegration of brain structure, and hence memory differences, could be a contributing factor to the entity being different, but I don’t believe it’s the primary reason (nor the one I want to focus on). I ask that you not fight the hypothetical: assume the surgeon is so talented that he is able to rebuild the original brain to its original form and that the neurons will cooperate.
My point is that I believe self-awareness does not *necessarily* travel with the original hardware (although we have no examples to the contrary…yet). It travels with the uninterrupted lower-order thought processes of the software. As long as the LO thought processes are maintained (even with incremental replacement of the hardware parts), I believe the original PI will be maintained.
In the case of OR#2 you, the LO thought processes were terminated upon disassembly of OR#1’s brain and, as a result, your PI was likewise lost—never to be found again. OR#2 you developed a new PI sometime after the original brain cells were rebuilt to their original form and function.
This is in contradistinction to the popular argument that *“you’re not the same person you were yesterday, because some of your particles have been replaced.”* I argue that you *are* the same person, because your brain processes have been maintained (even through general anesthesia, etc.).
IOW, it’s the motion of the ocean that propels Theseus’s ship, not the droplets of water.
…alright, I suppose technically it’s the wind that propels sailing ships, but that doesn’t sound nearly as clever.
How about if we could do the operation really quickly, so that the original brain ends up inside a clone body within a relatively short period?
No, that wouldn’t work, since it would give the substitution process in the original brain time to work. I like the substitution method - gradual uploading, as I generally call it - but it is worthless if you don’t give the substitute neurons time to integrate with the rest of the structure.
Everything about this thought experiment suggests to me that my Personal Identity would be better preserved in OR#1 than in OR#2.
Well this comes up a lot in discussions of this kind. The first and obvious answer to the question is that we don’t know that the mind is a program, or in other words, how far to take the brain=computer, mind=program metaphor.
But I think more important than that, even, is that we don’t make any claim about identity when it comes to programs.
I may refer to the same operation as a “move” in one breath, and then as a “copy and delete original” the next. All I care about is the effect on me. So I can also delete programs without hesitation if they are of no use to me.
But yeah, if you hand me a USB stick and tell me it has a sentient mind on it, then of course I would hesitate to delete it. And I also would not assume move = copy + delete. So it doesn’t actually tell us anything.
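Since the move/copy analogy keeps coming up, here is a minimal Python sketch of the mechanical point: a “move” across filesystems is commonly implemented as a copy followed by a delete (the function name and filenames below are made up purely for illustration):

```python
import shutil
from pathlib import Path

def move_by_copy_and_delete(src: str, dst: str) -> None:
    """A 'move' done the way many tools actually do it when source and
    destination sit on different filesystems: copy, then delete."""
    shutil.copy2(src, dst)   # byte-for-byte copy, metadata preserved
    Path(src).unlink()       # remove the original

# From a third-person standpoint the end state is indistinguishable from a
# rename-in-place, which is why we use the words interchangeably for data:
# move_by_copy_and_delete("notes.txt", "/mnt/backup/notes.txt")
```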
Yes this seems to be an excellent way of putting it.
Note that the “third option” of biting the bullet and saying that there is never continuity of consciousness could be stated in this terminology as rejecting the proposition that consciousness can exist in more than one time.
It’s both simplifying and more specific.
ISTM that we have not yet identified the main problem with your throwing in the requirement that source and destination must be data/causally connected.
It is nothing so trivial as queasiness over adding an ad hoc extra requirement.
For most of this thread you’ve been arguing that a transport follows from the fact that the two entities are identical; that being identical necessarily entails that it’s the same person.
For example:
“If they’re identical, then all of them are ‘really me.’” (Post#50)
“If the duplicate is ‘exact’ then it really is Jim Kirk.” (Post#71)
“If, at some instant, they are truly identical to me, then, yes, they “matter” to me, because they are me.” (Post#92)
(I stopped once I had 3)
Now, when I brought up the multiverse hypothetical, you did not (this time) retreat behind the excuse of saying I’m using the word “identical” in a different way, because I specified that they are as identical as they are in the transporter scenario.
No, you said that there needs to be a data connection; that spontaneously being identical is not enough.
But this leaves the rest of your argument in tatters. Being identical is somehow at once enough to say, a priori, that it’s the same person and that a transport has happened, and yet not enough to say that in hypotheticals you don’t like.
Note that any particular consciousness can only experience events that have happened in the past; because of transmission times over sensory links, all the information we receive is already out of date. Indeed, if a person were to be instantly teleported to another location, complete with their body as it exists in that instant, they would still be receiving data from the original location via their sensory nerve links, since the messages would still be arriving from their eyes, ears and so on. To them the teleportation event would occur a fraction of a second after it actually did.
We can only ever be aware of the past; if that past includes a teleportation event, we would not be aware of the ‘extinction of personal identity’ that seems to worry people in this thread so much. Why be concerned about an event that cannot ever be experienced?
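A rough back-of-the-envelope calculation of that lag, using order-of-magnitude figures I am supplying myself (fast myelinated axons conduct at roughly 100 m/s, and conscious visual processing is commonly put at around 100 ms), not numbers from the thread:

```python
# Rough, assumed figures; not measurements.
conduction_speed_m_per_s = 100.0   # fast myelinated axon, order of magnitude
eye_to_cortex_m = 0.1              # rough signal path length inside the head
processing_delay_s = 0.1           # ballpark for conscious visual processing

transmission_delay_s = eye_to_cortex_m / conduction_speed_m_per_s  # ~0.001 s
total_lag_s = transmission_delay_s + processing_delay_s

print(f"you experience 'now' about {total_lag_s:.3f} s after it happened")
```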
The particles that make up your physical body are replaced all the time as you consume food and your body perpetually repairs itself. I’ve heard that your entire skeleton is replaced six times in your life (if you live long enough). If we assume that WE are still the same people we were as children (“person” being an arbitrary concept we apply for convenience, collectively naming the organic molecules that constitute you because we attribute related properties to them, such as moving through space together or being supported by the same internal biological system), then I think we can say that the ‘thing’ that steps out of the receiving end of the machine is definitely a human. This ‘human’ has your memories and thoughts. It is identical to the point that every test performed on you would conclude that you were the same before and after you teleported. You could be speaking mid-word and finish talking at the moment you step out of the machine. The person who stepped out of the machine would have a recollection of a complete, continuous timeline from getting into the machine to getting out.

I think, then, that the question is not whether it’s still “you” but whether the flakes of dead skin that collect around your house, the sweat that evaporates off your skin, and your own biological waste still get the same classification. It all used to be a part of you (although a lot of it was once your food; dead blood cells and other cells will certainly be present in this waste). Clearly, our current definition of what and who a person is rests on their own experience and the beliefs of those around them, not their physical form, as that doesn’t stay the same anyway.
I think that classifying the person that steps out of the teleporter as someone else would require that we issue new identities to regular people every few years.
You probably also cannot ever experience a bullet entering through the back of your head, but that strikes me as an eminently reasonable thing to be concerned about.
But gradual change is something very different from an abrupt and total change, or even replacement—if I make a small tear in my origami bird’s wing, it’s still that same origami bird; but transferring the origami bird’s folding pattern across the internet, creating another one exactly like it in whatever respect you care to stipulate, nevertheless is creating a different instance of the same type of thing.
Take again the four-dimensional view: an object has temporal parts, just as it has spatial parts; that the temporal parts differ from one another is no more problematic than that your right side differs from your left side. But a teleportation event, on this conception, is the destruction of one object, and the creation of another—or rather, since ‘destruction’ and ‘creation’ don’t make much sense from a four-dimensionalist viewpoint, the thing that steps into the teleporter and the thing that exits it somewhere else are different from one another. One thing’s ‘future boundary’ abuts the entering of the teleporter, just as you may have your right boundary abutting a wall, while the other thing’s ‘past boundary’ abuts the exiting of it.
Yet here we are, passing information merrily back-and-forth between continents, creating a process called a ‘conversation’; if we view this conversation as a four-dimensional entity, it would be continuous, with no breaks - but part of it would exist as electrons or photons zipping back and forth in space, or stored as magnetic patterns in media. Seen as an informational entity a teleported consciousness would have no breaks, even if it has periods of inactivity while it waits to pass through the teleportation router. Hmm; given a big enough piece of paper you could write it down and send it by sneakernet.
But what do you think about the point I just made in post #184?
We happily copy or move data because it has no first-person viewpoint so all we are concerned with is the third-person aspects. And we’re happy to use terminology such as “move” and “copy + delete” interchangeably, because who cares.
OTOH, if I knew a program was conscious (which we have no reason to assume is possible BTW: it’s the “strong AI” proposition), then I would hesitate to do any of these operations (nor would I assume move = copy + delete) because I would know that there is a first-person perspective as well as my own.
So the argument is basically null: if I accept the implicit premises then I would already be on the “you are transported” side.
Exactly so. I am on the “you are transported” side, and I would have no problem writing an AI program out in source code on paper, then typing it all in again at the destination machine, and accepting it as the “same” intelligence, with the same first-person perspective that it had originally.
This is the “pipeline” model of communications, and is pretty standard in information theory.
It all seems to come down to a matter of personal interpretation, and words do not suffice to persuade people of one viewpoint to accept the other.
(So…is the Mission Impossible theme in 5:4 time, or not? I say yes, it is…)
But consider each message as written down first on a piece of paper, then copied onto another, and sent by mail—again, the four-dimensionalist would hold that the copy remaining in my hands would be one object, and that created by copying quite another. Similarly so with electronic communication.
This again underlines the distinction between token- and type-identity I made earlier. Take the string ‘apple apple’. When asked how many words are in that string, most people would probably answer ‘two’ (since there’s one, and then there’s another). Now, I send one to you, and keep one to myself. Do we both have the same ‘message’? Yes, in terms of type-identity—we both have a word of the same type. No, in terms of token-identity—we possess different instantiations of the same word-type. So, just as well as we consider the two instances of the word ‘apple’ to be distinct, we can consider both instances of a given message, or a mind, to be distinct.
Indeed, we probably should: again, failing to observe this distinction results in logically contradictory propositions, such as ‘I lift the origami bird and I don’t lift the origami bird’, if I lift the instance of the bird on my desk, but not that which has been constructed in some far-off place, but nevertheless insist that both are ‘the same’.
A good indicator of whether two things are one and the same is to test whether one can substitute one for the other into propositions without changing their truth (this has its limits, but those concern usually only appellations and other matters of convention)—so if Hesperus is Phosphorus, then I can say ‘Hesperus is covered by a thick cloud cover’ if and only if I can also say ‘Phosphorus is covered by a thick cloud cover’. I can’t do this with ‘my origami bird’ as opposed to ‘your origami bird’—the truth of ‘I lift my origami bird’ does not entail the truth of ‘I lift your origami bird’. Hence, the two aren’t identical.
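This type/token distinction has a direct analogue in programming: equality of content versus identity of the object. A small Python sketch of it (the variables and values here are my own, just for illustration):

```python
# Two tokens of the same type: equal in content, distinct as objects.
a = list("apple")   # one instantiation of the word
b = list("apple")   # another instantiation of the same word-type

print(a == b)   # True  -> type-identity: the same content
print(a is b)   # False -> token-identity: two different instances

# Changing one token leaves the other untouched, just as lifting one
# origami bird does not lift its far-off duplicate.
a[0] = "A"
print(a == b)   # False -> the tokens have now diverged
```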
And furthermore, even if one agrees that something is transmitted in the case of internet communication, it’s not clear that the transmitted thing is the message—if, for instance, it were to be translated into French en route, then an English speaking recipient could no longer extract its meaning, while a French speaking one could. Thus, you need the right kind of interpreter in order to transmit a message, using some pre-agreed upon code. Hence, what is transmitted is not the message itself, but rather, a set of conditions—constraints—that allow, for a mind capable of ‘decoding’ it, the message to be reconstituted by an interpretive act. In other words, the medium, the set of graphemes or electrons or magnetic polarizations, is not the message. But then, why should one accept that the medium is the mind (that was to be transmitted)?
Why does consciousness have to be tethered at the subatomic level? If it needs to be perpetually tethered to anything, I believe cellular level (neuronal) dependency would suffice. And, as I discussed and cited previously (post #146), CNS neurons, for the most part, remain intact for the long haul of your conscious life. If you insist on going more elementary, I believe the important constituent particles of your brain remain intact for your lifespan, too (although, I can’t currently find a cite for that claim).
Of course particles involved with CNS metabolism will be constantly exchanged/replaced, but unlike some people (e.g. Walmart shoppers), I don’t believe my sense of self resides in psychic poop.
And, also as mentioned before, I don’t believe self-awareness necessarily has to supervene on any permanent matter anyway—so long as lower-order sentient thought remains flowing, individual structural support cells or particles can come and go as they please (self-awareness is like an Ed Sullivan Show performer: just keep the plates spinning or they’ll drop to the floor and shatter).
Let’s try this: walk into the departure pod of a transporter. This model of transporter simply copies the exact configuration of your constituent particles at the moment you push the button marked “go” and builds you back instantaneously at the arrival pod from a slurry of similar particles. If you (correctly) believe no particle is privileged, then this is a perfectly valid type of transporter (in theory). In this case, departure pod you remains alive (IOW, there are now two of “you”). However, you were not told what type of transporter this was, or that the original you would not be disassembled.
Departure pod you (DP-U) and arrival pod you (AP-U) are then sequestered into separate rooms and questioned:
Do you feel that you were successfully transported?
DP-U: No, it failed. I’m still here, not there.
Did you feel anything strange, like maybe you were in two places at once even for a moment, when you pushed the button?
DP-U: Nope
What would you say if I told you that there’s now two of you?
DP-U: I’d say you’re nuts.
Do you feel that you were successfully transported?
AP-U: Yes, it succeeded. I’m now here, not there.
Did you feel anything strange, like maybe you were in two places at once even for a moment, when you pushed the button?
AP-U: Nope
What would you say if I told you that there’s now two of you?
AP-U: I’d say you’re nuts.
If you believe these responses are realistic and accurate, as I do, the following conclusions should be drawn:
[ul]
[li]There are now two valid instances of you, indistinguishable from each other to all outside observers.[/li]
[li]The transporter failed for DP-U, but it was successful for AP-U.[/li]
[li]DP-U has not, and never had, any real future in or investiture in AP-U. If his particles were disassembled in the DP (like a normal functioning transporter), he would simply be dead.[/li]
[li]Other than needing DP-U’s particle-array blueprint to be born, AP-U has no, and never had any, link to or investiture in DP-U.[/li]
[li]DP-U and AP-U are just two guys who look alike and have shared memories up to the point of DP-U pushing the button. They would be linked no more than if a unicorn came out of the arrival pod.[/li]
[/ul]
Conclusion: If you’re the you who comes *out* of a transporter, then you transported successfully. If you’re the you who steps into a transporter…, well, sucks to be you. It’s like booking a room at the Roach Motel: you can check in, but you can’t check out.
If you believe the above scenario is not accurate or realistic (within the boundaries of thought experiment), please explain why and how. Remember, the point of this debate is concerned only with the fate of you before you push the button and whether or not you should do it.
Is your objection to the type of transporter I described? I believe the outcome would be the same no matter what type of transporter you use (unless you’re thinking of a car or something similar). Is your objection to the fact that both copies of you diverge after transportation? If that’s the case, how can any theoretical transporter be successful for the departing person? Also, I said that at the point of pushing the button, both copies would be configured exactly alike (momentarily converged), and yet neither party would feel anything different or shared in that moment. Is your argument that the arrival pod you is no different from the you who lives in the future (because of particle impermanence, staccato reality…)? If that’s the case, then transporters would still be unsuccessful…but so would just living. In a very real sense, there would be no point in doing either one.
I think it feels contrary to our understanding of ourselves to accept the statement “there are two of you” but I think this is because such a thing is impossible at our current level of technology and has never been possible before. Identical twins are very different than EXACT copies of a person (I have an identical twin, so I’ll vouch for this).
I’d compare this phenomenon to time zones. Before we could communicate near-instantly over large distances, the fact that different people experienced different times of day simultaneously was neither an issue nor, likely, even a thought for most of the world. After the advent of telecommunications, it became incredibly important to manage the time at each longitude of the earth.
I also think that the “sucks to be you” mentality for the person stepping into the transporter is false. Obviously, yes, that person is dead, but it’s only our paradigm that tells us that this matters. AP-U would remember being DP-U. We’ve established that this fantastical teleporter has made an exact copy of DP-U, so the fact that they aren’t ACTUALLY the same person is inconsequential; it is only our brains that tell us it matters. If you didn’t tell anyone how the teleporter worked and they were unable to look inside and see how it operated, it would be completely impossible to tell the difference between DP-U and AP-U. The only living person who would have a recollection of the process would be AP-U, who would remember the entire existences of both AP-U and DP-U perfectly.
Consider an image file on your computer. You copy and paste it into another directory on your computer and permanently delete the original. Does it matter that the new photo is a copy? It is data-identical to the original but in a different location. Do you weep for the original image file?
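To make that image-file analogy concrete, here is a toy Python sketch of the copy-verify-delete sequence (the helper functions and filenames are invented purely for illustration):

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: str) -> str:
    """Fingerprint a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def copy_then_delete(src: str, dst: str) -> None:
    """Copy the file, check the copy is bit-identical, then delete the original."""
    shutil.copyfile(src, dst)
    assert sha256(src) == sha256(dst), "copy differs from original; not deleting"
    Path(src).unlink()

# copy_then_delete("photo.jpg", "backup/photo.jpg")
# The surviving file is data-identical to the one that was deleted.
```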
I specified that these people would be atomically identical because isotopic analyses can reveal information about where a person grew up and what foods they ate and I want no test to be able to tell the original and duplicate apart.
Translating a message into another language alters it considerably; look at the several different versions of War and Peace in English, for example. Since the teleportation experiment assumes that no data is altered, and that the body/brain that is transported is also identical, there is no room for a translation step in the thought experiment.
However, your ‘translation’ intuition-pump does seem to have at least some relevance - if one were to attempt to ‘upload’ a human consciousness and run it on a computer, it would be necessary to translate the consciousness program into a language that could be run on an electronic substrate. This would almost certainly mean some of the consciousness’s finer biological subtleties would be lost in translation. However that would be a different process entirely, and outwith the constraints of our magical teleportation thought experiment.
Not any more than translating the symbols you enter via the keyboard into patterns of ones and zeros, and then into patterns of lit pixels on a screen, does. In fact, the message stays the same; it’s the medium, the code, that changes—if it, for instance, contains instructions for folding an origami bird, then a French-speaking recipient will be able to follow them and fold the bird. Contrarily, somebody who speaks only French will not know what to do with the ‘original’ message. The particular set of signs used to express the message is not itself the message.
Whether or not there is any translation is irrelevant; the point is that it’s not the particular pattern of signs, or electrons, or whatever that contains the message—rather, those patterns only furnish a set of conditions such that the right kind of mind (one capable of interpreting the signs) can reconstruct the message. ‘Apple’ doesn’t mean apple because of any intrinsic quality of that particular set of signs, but because of a learned association between those signs and apples.
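The point that the pattern of signs only furnishes conditions for reconstructing the message can be shown with a tiny Python example (the sample sentence is my own placeholder): the same message stored under two encodings is carried by entirely different byte patterns, and only a reader who knows the code gets the message back.

```python
msg = "Fold corner A to corner B"          # the 'message'

utf8_bytes = msg.encode("utf-8")           # one physical pattern of signs
utf16_bytes = msg.encode("utf-16-le")      # a quite different pattern

print(utf8_bytes == utf16_bytes)           # False: the media differ
print(utf8_bytes.decode("utf-8") ==
      utf16_bytes.decode("utf-16-le"))     # True: the reconstructed message is the same

# Neither byte pattern *is* the message; each only lets an interpreter
# who knows the code reconstruct it.
```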
Certainly, some form of translation also has to go on in the teleporter setup—otherwise, you would be sending the person wholesale, as if by train, or something. That is, there needs to be some channel via which the information, the pattern of atoms, neuronal excitations, or whatever else you deem relevant, is transmitted, in the process of which what is being transported is not instantiated in the same form as the original person was—say, the original is scanned, and then the information is transmitted through a data channel, or whatever, just as in the case of a brain upload.
But again, whether or not there is any translation really is irrelevant.
Before we agree to disagree…any thoughts on Post#185?
Again, the analogy to computer files is only convincing to those who’ve already taken a “side”. If a computer file could be conscious (which again, we don’t know if this is possible), then yeah, I would not make any assumptions about identity being preserved by copy + delete.
I copy and delete files with impunity because they have no first-person perspective.
Sure: I don’t agree with you there. I think we’re talking past one another and using words differently. In my opinion, you’re completely wrong.
What in the world do you want me to say? Am I supposed to cringe like a villain in a Jack Chick comic? (“Oh! You have demolished me with your clever logic! I never thought of that! What a fool I have been!”)
(Today’s “Tom the Dancing Bug” plays on that riff.)
Dude: I do not agree with you. To me, the “coincidental identity” idea is not convincing. It doesn’t tell us anything about whether a Transporter “transports” or “kills and duplicates.” It doesn’t answer any questions.
I’ve been away from this thread for two pages. I see **Trinopus** has been holding up (something close to) my POV valiantly.
From my POV the only meaningful question is whether the transported consciousness believes it was transported. If it thinks it used to be in the other place, then from a *functional* POV it was transported. If you transport somebody to an adjacent, extremely similar room and don’t destroy the original, and interview them both, and they both honestly assert they don’t know whether they are in Room A or B, then you’ve created a *functional* duplicate. And if not, then not.
Debating whether that’s token- or type- identity sounds sensible, but just moves the goal posts into assumptions about the nature of the words “token”, “type”, and “identity”. Yes those words have precise meanings in computer science. And perhaps in general philosophy.
But not in transportation science which hasn’t been invented yet.
I *believe* that our eventual research in this area and in AI will show us that consciousness is merely an emergent property of the atoms that make up a brain. And as such, if we duplicate the atomic structure to a sufficient level of detail, we duplicate the consciousness. I agree that as of today this is an open question of science, and thoughtful people are free to believe that future research will provide a different answer.
Having carefully read the last two pages I saw exactly one novel and interesting idea. That was Half Man Half Wit’s comments about 4-dimensional continuity.
If indeed future physicists find that 4-dimensional continuity is a real property, and all existing things have a “tail” that extends into the past yet is still fully attached to the present object and is an intrinsic part of it, then that would pretty strongly say that *philosophical* duplication / transportation would not work, because the newly transported / duplicated object(s) would lack the long tail of the “original”. The original and the other(s) would be forever different in philosophically meaningful ways by the differences in their tails.
But, depending on the ultimate nature of consciousness, this may or may not impact the working of *functional* duplication / transportation. If memories and other conscious functions don’t depend on that tail, then the fact that the duplicated / transported instances don’t have the same tail would be *functionally* moot.
Overall I see some folks arguing certain things are provable in this debate. I see no evidence anything important is provable; there’s a lot of conjecture and little else. Including by me.
And as Trinopus says repeatedly, there are people asserting and assuming meanings and consequences for words that are not in fact universally held or even evident in the conversation.