Teleportation would destroy the world.

I agree. This is the point of the copying thought experiment - you are instantaneously identical at the point of creation, but after that you begin to diverge. You might grow to dislike, or even hate, your copy, or the original. Especially if there is some dispute over the ownership of material wealth, or spousal relations, or interior decor, or whatever.

As I’ve suggested earlier, this might be a normal fact of life for any future AI entity; if it is a competent and stable program, it will almost certainly be duplicated an arbitrary number of times. Our society will be divided into entities that are easy to copy and those, like humans, that are not.

I would expect some people at least to try to copy and/or teleport themselves by methods that create less-than-perfect copies, just because the AIs will be forking all over the place. And in a few hundred years I’d expect those imperfect copies to be quite convincing. After all, a lot of the detail in the information that makes up the brain will just be thermal noise.

I don’t see how you can agree with my point and yet still say something has diverged. Nothing has diverged; you are two separate entities, and always have been (even at the point of cloning, when your memories were identical).

Just because you can’t tell the difference doesn’t mean there isn’t a difference. Everyone who is saying that it will still be you is just saying it without giving reasons why it must still be you. I grew up watching Star Trek, but if this shit was real, I can’t see how it isn’t killing you.

After an amoeba divides in two, which one is the original?

The one on the left

At that point, the process of computation that is your consciousness is identical with its twin. If consciousness is truly a program in process it can be copied, and can then continue to run elsewhere. Since the thought experiment we are considering posits two identical substrates, that program would start to run in two different locations.

It would begin to diverge immediately, because the inputs from the environment are different, and the two programs are not linked in any direct way. Some people seem to expect the two instances to share some sort of ‘telepathic bond’ or to share consciousness; this would not be the case.
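Just to make the ‘program being copied’ picture concrete, here is a toy sketch (nothing to do with how a real mind would actually be implemented - the MindState class below is purely illustrative): copy a piece of program state, feed the two copies different inputs, and they diverge immediately even though they were bit-for-bit identical at the moment of copying.

    import copy

    class MindState:
        """Toy stand-in for a 'mind as running program': just accumulated memories."""
        def __init__(self):
            self.memories = []

        def perceive(self, event):
            # Each input from the environment updates the state.
            self.memories.append(event)

    original = MindState()
    original.perceive("stepped onto the pad")

    # The 'duplication': a bit-for-bit copy of the current state.
    duplicate = copy.deepcopy(original)
    assert original.memories == duplicate.memories   # identical at the instant of copying

    # From here on, each instance receives different environmental inputs...
    original.perceive("still standing in the departure room")
    duplicate.perceive("materialised at the destination")

    # ...so the two diverge at once, with no link of any kind between them.
    assert original.memories != duplicate.memories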

Perhaps we should consider the AI case more closely. The first successful AI program will probably be copied and sent all round the world, presumably with suitable copyright protection measures to prevent pirating. If an intelligent AI program can be cloned and initiated on a different but identical computer, with all the same memories, would it be the same entity duplicated or not? If not, why not?

To answer my own question here, if it is a necessary property of a true AI that it can only run on a quantum computer, then it would not be possible to clone it perfectly because of the no-cloning rule. But it would in theory be possible to teleport it, so that there is only ever one entity. The AI at the receiving end would be identical to the original, and would in fact be the same entity.
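For anyone who hasn’t seen the no-cloning rule spelled out, the standard textbook argument is short (this is the general quantum result, nothing specific to minds or AI). Suppose a single operation $U$ could copy an arbitrary unknown state:

\[
U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle \quad \text{for every } |\psi\rangle .
\]

Applying this to $|0\rangle$ and $|1\rangle$ and then, by linearity, to their superposition gives

\[
U\Bigl(\tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr) \otimes |0\rangle\Bigr)
  = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle|0\rangle + |1\rangle|1\rangle\bigr),
\]

whereas a genuine copy would have to be

\[
\tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr) \otimes \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)
  = \tfrac{1}{2}\bigl(|00\rangle + |01\rangle + |10\rangle + |11\rangle\bigr).
\]

The two results differ, so no such universal copier $U$ can exist - which is why a mind that genuinely depended on its quantum state could in principle be teleported (the original state is destroyed in the process) but never perfectly duplicated.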

I suspect that human minds might follow the same rules, so that they could only be copied classically (and therefore imperfectly), but they could in theory be teleported perfectly via entanglement to a distant location - but the amount of information required would be prohibitive, as described in Learjeff’s link earlier.

This is entirely unlike what is happening in the transporter scenario however.

In the amoeba scenario, the parent goes through a physical process of splitting.

In the transporter scenario, nothing is happening to the original, and the “division” is happening purely in the mind’s eye of those who believe the original is you.

Brains are not my field (just my food of choice :D), so I may not be using ideal nomenclature. But, I believe PI is generally the same as “self awareness”, so, if it is clearer, let’s use that term. And to simplify our understanding further, for the sake of this post, let’s assume that Memories + Perception + Self Awareness = Sapient Consciousness (ipso facto, Sapient Consciousness – Memories = Self Awareness + Perception). I believe the self aware part of consciousness “works upon” memories to give a total sense of self (reflect on the past; predict into the future), but the two can be separated. Having self awareness without memories would trap you in a constant state of awakening…but, maybe that wouldn’t be such a terrible thing. In my mind, it’s better than having memories with no self awareness, or having my memories commandeered by someone else’s self awareness.

Here’s an interesting paper (PDF File) that discusses the case of a man with no autonoetic consciousness. His loss of memory isn’t complete (or else he could not answer questions), but it’s impaired to the point that he essentially lives only in the present. I believe this gives a glimpse into the qualia world of a Sapient Consciousness minus Memories, leaving only self awareness and perception. If you read his case, imagine his memories draining even more completely. Will he still have self awareness at that point? I believe he would.

“Same” vs. “Different”: Imagine yourself being asked to compare yourself to a replica of yourself who is about to be constructed out of the same type and arrangement of particles that you are made of. You are not being physically copied in any way, it’s just an extreme coincidence that he appears before you. Take this coincidence further and imagine that your particles are synced perfectly for the total time of this experiment, so there is no point of divergence. You are the “same” insofar as there is no measurable difference quantitatively or qualitatively between you two to any outside observer. The only difference they can claim is that your particles don’t share the same space—there are two sets, one here and one there. I believe, however, that when the original you is asked if you are the same person as your replica, you will reply, “no”, we are “different” (and your replica will answer the same way). The only people in the universe who know that you are different are you and he. That’s what I mean by “different” in this respect.
To put it in terms of a computer with AI, I don’t think most of you are considering the self aware part of the consciousness equation. I agree that it’s theoretically possible for AI to achieve consciousness that includes self awareness, but I think it too would have to follow the same rules as biological consciousness (i.e. if self awareness is locally subjective in one it must be locally subjective in the other). So, when you boot up the computer, its self awareness emerges and persists, even through sleep mode, until it’s rebooted and a new self awareness emerges.

Summary: memories and perception are non-local, non-subjective events. Self awareness (via biology or technology) is local and subjective. The process of self awareness persists as long as the matter upon which it emerged remains essentially (totality is not needed) intact.

I accept that most of you disagree with my premise (the multiple death model of reality could be correct, I just don’t think so), but do any of you agree?

Question (assume that you can separate self awareness from memories): would you rather have your self awareness with someone else’s memories (nice memories), or someone else’s self awareness with your memories? I’d take the first option, since I believe the second option means my death. Consider the case-study man in the linked PDF article. How would he feel if his brain were slowly imprinted with someone else’s memories? I think he’d like that. It would be an acid trip of an experience, but it’s got to be better than having no memories at all.

Qualitatively identical, yes. Not numerically identical however.

Actually, whether consciousness is a program is a big “if”. Even assuming that the brain is a machine, it doesn’t follow that the mind is a program, or that Strong AI is trivially true, contrary to popular assumption.

I am not assuming that, I am saying that is what would have to happen for me to consider two entities to actually be one and the same.

Put it this way: you’re fine with saying that two entities with different memories are not one and the same. I am not you, for example. When I die, I won’t wake up as you, right?

Now, what I’m saying is, the same is true of an entity that does have the same memories. Because it’s not the fact that your memories are different to mine that is the reason I won’t wake up as you. The reason I won’t wake up as you is because there is no physical basis for that happening.

I disagree. If minds are wholly a function of physical brains/bodies, and if in the hypothetically perfect copying process, the chain of cause and effect in all the machinery being duplicated is logically unbroken, then the fact that the original is still there is a trivial detail.

That’s what confounds this debate every time - the concepts of original and copy are no longer meaningful if a perfect copying technique exists.

Anybody seen a fly with a white head? Please catch it, and let me know. No swatting!

But what does a ‘logically unbroken chain of cause and effect’ actually mean?

The original has one chain of cause and effect that leaves him in location A (or dead). The copy has another chain of cause and effect that creates him from scratch at location B.

I don’t see why we should make this special case within Causality, that as long as you end up with two entities with the same intrinsic qualities, even if they have different extrinsic qualities, we have to consider their chains of cause and effect to be the same.

But the original is part of a chain of cause and effect that has resulted in the existence of the copy, so the copy can be said to be causally linked to the original. And since the imagined nature of that linkage is such that they are identical, then they can be considered to be part of the same process.

If you must have a continuous chain of cause and effect that links you to the past events in your timeline, that rules out of consideration your Tegmarkian double at 10^(10^28) metres - but not any multiverse copy that might exist. I wonder what your attitude is to the proliferation of Mijins that might exist in the Many Worlds interpretation? Do you pick one at random and discard the rest?

It means that the copying process not only needs to capture and reproduce all of the atoms, etc, but also needs to capture and reproduce what they were doing - thus, not only does the copy look like the original in every way, it carries on doing what the original was about to do - including the ‘doing’ of manifesting a mind in the machinery of the brain etc.

A cause playing out at the precise moment of scanning the original has an effect as normal in the original, and exactly the same effect in the copy.

An amoeba does not possess the attribute we are trying to discuss. Neither amoeba possesses the original’s consciousness because the original did not have a consciousness in the first place.

That’s irrelevant in the context of the question I asked which was solely about the concept of identity.

But you’re asking the question about something which does not have identity.

Of course it does - it’s an object. If I loan you a fiver, we could definitely talk meaningfully about whether you paid back ‘the original’ note (unspent) or a different one - that’s the concept of identity I’m talking about.

The question was intended to highlight the inadequacy of certain concepts in certain circumstances - the amoeba is an object - we could stain it purple to be able to reliably distinguish it from other similar objects - we can look away and look back, and be confident that we’re looking at the same, original object.

But when the amoeba divides, where does the original one go?