Teleportation physics question

But is that really enough to claim an identity? If I have an origami bird, and transmit the instructions for making it to you, whereupon you recreate the bird, using, say, an identical piece of paper, I don’t think many people would consider this to be the same origami bird—they’d be two instances of the same kind of thing (even if my bird were destroyed in the process), but different things as such; two tokens of the same type, say (to somewhat abuse terminology). So I think such a linkage is insufficient to establish an identity.

I think what gets people tripped up is that we’re conflating different kinds of identity—say, token and type identity: the two origami birds are different tokens of the same type, that is, type-, but not token-identical. Usually, with people, both notions coincide, since there is typically only one token of any given person-type around. But once we include copying/teleportation apparatuses, the notions fail to coincide, and it’s not at all clear anymore whether it suffices to be type-identical, rather than token-identical. Prying apart the two notions of identity is, I think, what Mijin’s example with the multiple universes accomplishes: there, it is likewise the case that there are many instantiations of the Half Man Half Wit-type, just as there would be in the case of a teleporter. So if we don’t have grounds to say that they are all the same Half Man Half Wit (as we, for instance, would not with the origami birds), then what grounds do we have in the teleportation case?
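(For the programmers in the thread, a rough analogy, and only an analogy of my own, not anything from the philosophical literature: type-identity is like value equality between two objects, token-identity like object identity. A minimal Python sketch, with made-up bird attributes:)

[code]
# Two 'birds' that are equal in every observable respect (type-identical)
# are still two distinct objects (not token-identical).
bird_a = {"folds": 23, "paper": "red kami, 15 cm"}
bird_b = {"folds": 23, "paper": "red kami, 15 cm"}

print(bird_a == bird_b)  # True  -- value equality: same type
print(bird_a is bird_b)  # False -- object identity: different tokens

# Destroying one leaves the other untouched, which is the sense in which
# they were never the same thing to begin with.
bird_a.clear()
print(bird_b)  # still {'folds': 23, 'paper': 'red kami, 15 cm'}
[/code]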

So, the real question is which (if any) of the two kinds of identity we consider to be relevant for personal identity. The argument that’s most often brought to bear here is that what’s necessary is really the identity of the mind, and that that’s more like the message than the origami bird—that if it contains the same information, it’s the same mind, more or less.

But this argument is, I think, problematic, for at least two reasons: first of all, the reification of information, although common these days, is on somewhat shaky grounds. For, what really counts as ‘the same’ information? If I translate the message into French, it will no longer count as useful information to anybody not speaking the language—if the message compels you to perform a certain action, you will fail to do so if you can’t understand it. So, actually, the notion of information (as distinct from the potential information that a message may contain, as measured, e.g., by the Shannon entropy) is something mind-dependent—it needs the right kind of mind to actualize the information potentially contained in the particular pattern of graphemes that has been transmitted. What has been transmitted is not really the message itself, but rather, a set of conditions necessary to recreate the message, in conjunction with a mind interpreting it. But obviously, if it’s a mind we’re speaking of transmitting, we can’t really appeal to another mind to serve as interpreter, on pain of circularity. So it’s not clear that the information transmission achieves what we set out to do.

Additionally, even if we accept that the mind is, in some sense, informational in nature, the argument is merely question-begging: for it alleges that the information that has been transmitted is really the same as the information the sender has. But that then rests on the assumption that this is the right notion of identity for messages—i.e. that type-identity suffices for being ‘the same’ message. But that’s not, on its face, necessarily more obvious than that it’s the right notion of identity for minds—one could very well hold that the sender still has the original message, with the receiver merely obtaining a copy, just as I would still have the original origami bird even if you folded an exact replica. So this rests as much on presupposing that type-identity is the relevant notion of identity as the position that the transported person is actually the same person does, and hence, does not do any additional work.

Personally, I’m severely skeptical that in order for the transported person to be me, it suffices that he is an instance of the same type as I am. Rather, it seems that, just as it would be the original bird that got burned if I burned my bird after the successful creation of a distant copy, it would likewise be me who dropped into the tank after the teleportation, in a scenario like that of The Prestige.

Yes…but as I always bring up in these sorts of discussions, in what way are you the same person you were 20 years ago? Or a year ago? Or a second ago?

We don’t have physical continuity with that person of 20 years ago. Our atoms have been mostly replaced, our cells have divided, our memories have changed, and on and on. So it’s the classic Ship of Theseus.

So why should any of this matter?

The important part is, why do we care whether we live or die? It’s because we evolved a survival instinct. And our survival instinct evolved in an environment where matter duplicators and transporters didn’t exist. If such things had existed on the African savanna, then apes that used them to duplicate themselves would have become more and more common than those that didn’t, and eventually we would have evolved into the sort of being that didn’t care whether the original survived, as long as one or more copies with better survival potential existed.

Just like I would rather die than have one of my children die. My consciousness does not live on in my children, yet I’d rather they live than me. And the reason why is obvious: all my ancestors made similar choices, and their neighbors who opted to live at the expense of their children couldn’t become my ancestors, since their children didn’t survive.

So that’s why we care. It doesn’t matter a hill of beans whether my consciousness survives, I only want it to survive because it’s me and I evolved to want to preserve myself. But that’s not my only imperative, and I know that eventually I’ll die and there’s nothing I can do about it no matter how hard I struggle.

An origami bird doesn’t care whether it’s the same as another origami bird. And if I want an origami bird, I don’t care whether the bird is the same bird I had yesterday or an exact duplicate. Same thing with a computer program, or a data file, or a book. Which is the original and which is the exact duplicate is irrelevant.

And it would be irrelevant for human beings, except for the fact that we’ve evolved to want to survive, and our survival instincts didn’t take into account matter duplication.

Of course, the option that there’s no such thing as personal identity is a live one, no doubt about that (the ‘if any’ of my previous post)—which raises the larger issue of whether one should care at all about what one’s future experiences are. But I think it’s wrong to say that there is no physical continuity in the case of a person—after all, a gradual change is something very different from an abrupt one. So even if, eventually, all of my atoms are replaced, at any given instant, I am the thing most similar (in terms of consisting of the same atoms) to myself at the instant before, which one could well use to underwrite an identity relationship.

Besides, it’s not at all clear whether ‘being made of the same atoms’ is a good candidate for an identity relation. Another view is four-dimensionalism: just as objects can have parts in three dimensions—a left half, a right half, a top, bottom, front or back—one can view an object as something that extends across the temporal dimension, as well, having future and past parts, say. The object that you are would then be actually a four-dimensional one (defeating the objection from gradual change), and teleportation then brings into existence another four-dimensional object distinct from you.

Sure. You could say “Niagara Falls” is a thing, but the water in it is constantly changing. Same with various lakes and rivers and oceans and hurricanes and tornadoes. Here I’m trying to invoke examples other than living things, obviously.

Living things exist to make copies of themselves. Or rather, living things that make copies of themselves become over-represented compared to those that don’t, and so living things evolve copyability. Our consciousness is irrelevant to this process. And in fact we make copies of ourselves and care very deeply whether those copies survive, despite the fact that we know for sure that there is absolutely no continuity of consciousness in those copies.

So why should Captain Kirk care whether a copy of him exists in the future? We care whether we exist in the future because we evolved to care, and we care about the survival of our offspring because we evolved to care.

So evolutionarily it makes no difference whether Captain Kirk or an exact duplicate of him exists in the future. And if transporter/duplicators existed, we’d expect people who want to duplicate themselves to be over-represented in future generations, and eventually people would evolve some sort of drive to duplicate themselves, and it would seem normal and pleasurable to them to create duplicates, even if the “original” that creates the duplicates dies. The fact that they know their consciousness won’t be causally connected to the future consciousness of the duplicate will be irrelevant to them, just like it’s irrelevant to me that my consciousness won’t live on in my children.

Well, it’s really only a small fraction of the hydrogen spins that are being read out by the MRI—when you apply the magnetic field, almost exactly 50% of the hydrogen spins will align in either direction, so most spins have an oppositely-oriented partner and cancel out; but there is always a small excess that remains unmatched (a couple per million), and it’s only those spins that matter for imaging.
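(Just to put a number on that ‘couple per million’: here’s a back-of-the-envelope Boltzmann estimate. The 1.5 T field strength and 310 K body temperature are typical values I’m assuming for illustration, not numbers from upthread.)

[code]
# Net proton-spin excess in a magnetic field, from the Boltzmann distribution.
# Assumed, illustrative parameters: 1.5 T clinical scanner, body temperature.
import math

h = 6.626e-34             # Planck constant, J*s
k_B = 1.381e-23           # Boltzmann constant, J/K
gamma_over_2pi = 42.58e6  # proton gyromagnetic ratio, Hz/T

B = 1.5                   # main field strength, T
T = 310.0                 # body temperature, K

# Zeeman energy splitting between the two proton spin orientations
delta_E = h * gamma_over_2pi * B

# Fractional excess of aligned spins: tanh(dE / 2kT) ~= dE / 2kT for dE << kT
excess = math.tanh(delta_E / (2 * k_B * T))

print(f"Net spin excess: {excess:.1e} (~{excess * 1e6:.0f} per million)")
# ~5e-6 at 1.5 T, i.e. a few spins per million carry the whole MRI signal
[/code]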

Besides, even if it’s spin degrees of freedom whose coherence the brain uses, it need not be hydrogen spins—a recent proposal postulates that phosphorus nuclear spins are in fact best suited to this task.

Nevertheless, I think it’s still right to be skeptical of any purported quantum effects underlying cognition. However, in recent years, quantum phenomena have been postulated to be responsible for an increasing number of biological mechanisms—photosynthesis, avian magnetoreception, and even the human sense of smell. That’s all still quite tentative, but I think it’s an interesting avenue of research.

Well, the point of four-dimensionalism is rather that one has good reason to care about one’s future experiences—because whatever happens to me in the future, really happens to me; any pain I experience will be my pain, and so on. So trying to avoid bad experiences is not merely due to evolutionary selective pressure, but actually because they would be bad experiences occurring to you. This obviously extends to rationally caring about whether one has any experience, or not.

But even if we chuck the notion of identity wholesale, as just one of those confused constructs of language that actually make no difference, it doesn’t follow that one shouldn’t care about one’s successors. At the very least, whether one’s successor experiences pain is as much a morally relevant question as whether anybody else does.

But arguably, one may have a greater duty of care towards one’s own successors. For instance, my successors are, as far as I know, the only things that see the world from a particular point of view, and I may view that as a good thing, and wish to continue it. Or, I might view the having of successors itself as a good thing, and would neither want to deprive myself, nor any of my successors of it.

And then, it’s again not clear what counts as my successor. Should I want to end the chain of succession, and put a gun to my head, pull the trigger, and blow my brains out, I would consider myself to have succeeded in this even if there is an identical copy of me some 10[sup]10[sup]118[/sup][/sup] m away whose pistol malfunctions at the critical moment, such that he would have successors—those successors would not be my successors.

So, we haven’t really made progress. The question becomes, ‘What counts as my successor?’ rather than ‘What counts as future me?’; but this doesn’t gain us any obvious new ground. Additionally, we now also have a normative dimension—should we value our successors differently from the successors of others? And of course, those are always the hardest questions.

An appeal to evolution doesn’t really help, here—while it can explain to us why we behave the way we do in a certain situation (in some cases, at least), it can’t really tell us how we ought to behave, or whether it’s good to behave the way we do. Thinking it did is an instance of the naturalistic fallacy.

I agree with everything you say here:

But, I’m having trouble understanding what you mean here:

You acknowledge that there is no causal link to your future duplicate’s consciousness, yet you say it’s irrelevant to you whether you live or die? If you allow yourself to die because you have a duplicate, that’s suicide. Suicide may be a lot of things, but irrelevant isn’t one of those things. We’re not male octopuses who have to die after we mate, or make duplicate octopuses, or get into an octopus transporter.

I don’t believe the Ship of Theseus analogy applies to self awareness and personal identity.

While most of the cells in your body are replaced throughout your lifespan, most of the cells of your brain are not. CNS neurons are the longest-lived cells in your body (they can even outlive your body).
I think it’s significant that neurons are the longest-lived cells in your body. I don’t think they evolved that way for the purpose of developing the emergent property of conscious self-awareness/PI (evolution isn’t goal-driven that way). However, I think it’s probable that without long-lived neurons, temporal personal identity would not have developed.

Personal identity is not the “ship”, it’s more like the “crew” that lives aboard the ship, as consciousness is the supervenient property that lives aboard your brain.

Research suggests that, while most of the neurons of the brain remain intact throughout your life (think of those as the major structural planks of the ship keeping it afloat), neurogenesis also takes place throughout your life (think of that as plugging up holes that develop in the major planks).

So, your unique crew will age and some may die, but as a whole, they will remain alive and functional as long as your ship is well maintained and doesn’t sink. But, what if your ship does sink and there’s an identical Ship of Theseus close by? Can your crew swim to that ship and climb aboard? Heck no; that ship has its own crew, will see your crew as pirates, and will shoot you like fish in a barrel.

…and that, guys and gals, is why you can’t survive a Star Trek transporter.

I’m saying that we don’t want to die because of evolution. It’s not that we’re worried about our consciousness dying, we’re worried about dying. Lots of organisms with no consciousness whatsoever struggle to survive.

But organisms do purposefully engage in behaviors that result in their death, and usually these behaviors are for reproductive purposes. Parents sacrifice themselves to ensure the future survival/existence of their offspring. Even human beings do this; plenty of parents would die rather than allow their child to die.

My point is, we have an instinctive notion that dying to protect our children is a good thing. But we wouldn’t think this was a good idea: “Step into this box, we’ll scan you. Then we shoot you in the head and dump your dead body in the gutter. Then we make two copies of you.”

This is because our moral/survival instincts evolved in a context without magic duplication. If it had existed, then we would have evolved to believe that duplicating ourselves this way was a great idea, even though we knew perfectly well that we would surely die making the duplicates.

We want to live and survive and reproduce, not because preserving our particular unbroken stream of consciousness is so important, but because we’re animals who evolved a particular set of survival instincts. We’d try to preserve our lives even if we weren’t conscious. Consciousness is just a particular phenomenon that creatures of our sort have, it isn’t the purpose of our existence.

I guess what I’m trying to argue here is that it’s all an illusion. Consciousness is just a thing that happens. Of course I don’t want to get in a booth that will disintegrate me, even if an exact duplicate of me gets recreated later. That’s because I evolved to not want to be killed. My feelings about it are metaphysically irrelevant, but that doesn’t mean I’d step into a transporter, because it’s me, and I don’t want to die. But if you changed the scenario to “Use the transporter to save your child’s life,” I’d use the transporter in a second, even though it’s equivalent to dying.

Yes, and that’s exactly why I’d say the language of “worried about dying” is misleading. Certain behaviours have been selected for, but an individual organism does not need to have any understanding of why it does a particular action, or any appreciation of its existence being finite.

It’s also important to say that evolution is entirely separate from morality. There’s no reason to suppose that surviving and thriving is right and no reason why a sentient species should therefore try to follow what evolution wants.

I get your point but I don’t think it necessarily follows.

As a sentient being I can decide to live or die for whatever reason. I might die to save my children, or to promote the cause of slicing sandwiches diagonally.

In a world where magic duplicating machines existed in our evolutionary past, then sure, people with a strong affinity for using them would outnumber those who did not (depending on how easy the machines were to find and use compared to reproducing the old-fashioned way).

But still they might ponder the philosophical implications at some point in their development, and some people might become reluctant to use the machines. You might argue a “don’t use transporters” trait would get evolved out…but that’s like saying the “suicide bomber” trait will get evolved out – evolution is not going to make a significant difference on human timescales, and in any case ideas can spread; they don’t need someone to have an inbuilt predisposition.

And even if we were 100% happy to use transporters, that still would not show consciousness to be an irrelevance; merely that humans would be behaving as if that were the case.

If he was truly identical to you, in the instant before pulling the trigger, then he really is you. He has the same thoughts, same body, same sensory input, same mental reflection, same despair (in order to seek self-destruction.)

Since the person who continues, once the pistol fails to discharge, is his successor, he also has to be your successor, because, at the instant before succession, you’re both “you.”

To claim otherwise seems to require a difference of some sort, and that is contradicted by the stipulated identity. Again, the whole thing seems to come to a difference in how people view the philosophical concept of “identity.” Many seem to want to use the word, yet still retain some idealistic (Platonic?) “difference.”

“We’re exactly the same…except that he’s he and I’m me and we aren’t the same.”

(I don’t put much stock in the “coincidental identity” model, anyway, and hold it to be irrelevant to the “transporter” model. In the former, there is no causal linkage, whereas in the latter there obviously is.)

So, you’d say that the origami bird you have after following my folding instructions is the exact same one as the one I have? If I were to burn it, that same bird would nevertheless continue to exist? How many different numerals are there in the number 111232233?

See? That’s my point: you’re using the word “identical” to include “with some differences.” You can’t have it both ways.

Is 1 different than 1, or the same?

What’s the difference you see here that’s not also present in a teleportation scenario or the case of my remote doppelganger?

It’s two different tokens of the same type. The question is which of these notions of identity is relevant to that of personal identity.

An identical copy…oh, but wait, there’s a big important difference.

You have two identical folded origami figures…but you set fire to one of them. Identical? No longer.

This problem continually pervades these debates. “We stipulate the entities are identical. But one of them is real and the other just a copy.” Yeah? Identical, huh?

Is it real, or is it Memorex?

If there are two identical assemblages of you, it’s not appropriate to categorize one of them as *real* and the other a copy. They are both *real* to all outside observers.

The nomenclature only has meaning with regard to your first-person singular observation. You could be conscious in Trinopus Right, in which case, Trinopus Left is the copy, or you could be conscious in Trinopus Left, in which case, Trinopus Right is the copy. But, you can’t be conscious in both.

Conclusion: 2 Trinopuses: Both are real to outside observers. Each is real to himself. Each believes the other is the copy.

Agreed. That’s my take on the matter too.

Once the two entities begin to diverge, and are no longer identical, then there’s room for debate; but as long as they are truly identical, then they’re both identically “real.” Contrariwise (am I Tweedledee or Tweedledum?) if there is a difference that someone can observe, then they aren’t “identical.”

(And, actually, if the two are Trinopuses, then it is not true that “each believes the other is the copy.” If they’re me, they both believe that both are equally “original.” But I’m a special case.)

Oh, you’re a “special case” alright! :smiley:

Right: just like with the transporter, where they are identical and then we kill the guy in Pod A. So no, you have not given any reason why these scenarios are different.

This will now be the fourth time in this thread alone that I’ve said labeling one entity as “real” and one as “copy” is a straw man. Two identical Mijins have equal claim to being Mijin, and are equally real (although one or both may have fake memories).
However, this does not mean that one Mijin traveled into the other, as demonstrated by my two-universe scenario.

I’m honestly not sure what point you’re trying to make here with your cryptic half-statements and your refusal to answer my questions. The thing is, simply, that there’s no problem at all with having the same kind of thing multiply instantiated, and differently individuated—even if the instances are identical down to their atomic structure. I can refer to one origami bird as ‘that one over there’ and to the other as ‘this one over here’, for example, and ‘that one over there’ is different from ‘this one over here’ by virtue of being over there, and not over here. I can also talk about the bird I folded, or the one you folded (which would be relevant for a four-dimensionalist position, for instance).

So just pointing to their qualitative identity is not sufficient to establish their numerical identity; equivalently, pointing out that they are type identical doesn’t make them identical as tokens. The question with teleportation is now simply what kind of identity relationship—if any—underpins personal identity. Pointing out that both persons will be identical down to their atomic arrangements then simply begs the question.

Agreed: the “coincidental identity” model is different from the “transported” model.

One big drawback to the former is that it can never be observed; we have no way of knowing if such a thing actually happens or not. At least with a transported person or object, we can make detailed observations and experiments.

Let’s not play those games, shall we?

What’s your point, exactly? Is location a property that must be preserved under “identity?” If so, there is no such thing as “identical” objects, since they have to be in different locations. This seems like a pointless definition, as it excludes everything we’re trying to talk about.

(Also, when you first brought up the origami bird, you didn’t specify “identical to the atomic structure.” Quite the opposite: you spoke of transmitting the folding instructions, implying that someone in a different place would take a separate piece of paper and fold it the same way.)

How, exactly, does specifying a (proposed) definition of “identical” “beg the question?”

You and I appear to have a different definition of the term. We’re stuck in a disagreement over basic principles. That makes it hard to go forward, but there is nothing circular in my definition.