The Clone Paradox

This is interesting because (not sure you agree, but let’s see) consciousness and personal identity are functions of our bodies. Consciousness is to body as writing is to pen, or spending is to money.

So in your pen analogy - yes, this pen is not that pen, but both are capable of the same function. This coin is not that coin, but both of them are capable of being spent on a stick of gum.

So this body that steps out of the duplicator is not that body that stepped into the scanner - but both are capable of performing the function that is called ‘me’ (independently of one another, of course).

Consciousness needs to be differentiated from self-awareness (SA). SA is a higher-order thought process, and that is what’s important in this debate. One test for SA is the Mirror Test (put a dot on the forehead of an animal and see what it does when it sees its reflection in a mirror; if it scratches its own forehead, it passes the test, which suggests that it’s self-aware).

Many animals are conscious, but only a few appear to be self-aware (most great apes, bottle-nosed dolphins, elephants…and probably more to be discovered).

I read someone elsewhere describing consciousness as “digital” and self-consciousness as “analog”. That’s an oversimplification, but it suggests that consciousness is subatomic and may be replicated instant by instant as a series of static configurations, whereas SA is a molecular process that can only remain intact by being unbroken throughout your conscious existence. That’s the way I see it, anyway.

For simplicity’s sake, let’s again consider the transporter (the type that converts your particles into energy in the departure pod, streams it to the arrival pod, then converts it back into “you” in, let’s say, 15 seconds).

This debate has to do with your “departure pod self”’s relationship with your “arrival pod self”, compared to your “departure pod self”’s relationship with “you in 15 seconds” without traveling in a transporter (let’s say you just walk to the arrival pod, to highlight that you change location in both cases).

Is there a fundamental difference with regard to your SA between those two scenarios?

In this thread, we appear to have two answers to that question. Mijin and I say that there is a difference; **Mangetout** and **Trinopus** appear to say there is no difference.

However, **Mangetout** and **Trinopus** appear to give different reasons for there being no difference:

Correct me if I’m wrong, but **Mangetout** is saying that although both 15-second instances of you have consciousness and SA, departure-pod you doesn’t really have an aware future in either one (i.e., they are independent). While I don’t agree with this assessment, it is logically consistent (it’s a matter of faith in one’s philosophy).

However, **Trinopus**, believing that “we don’t ‘die’ every instant”, appears to be saying that “departure pod you” does have an aware future in both 15-second versions of you. I’d like this to be true (it opens the possibility that I may live again after I die), but it’s logically inconsistent—it’s a paradox, because if there’s one perfect clone, there could be many, and you can’t be self-aware in more than one location.

As I learned toward the end of this short :smiley: thread from 10 years ago, **Mangetout** has as little chance of changing my philosophy of mind as I have of changing his (until science finds evidence that one philosophy is the correct one, both are valid because both are at least logically consistent). All we can do is try to convert the *undecideds* one way or the other, or at least ferret out logical inconsistencies.

In the meantime, the only transporter I’ll step into is a **Futurama**-type, where my body gets sucked to a different location intact.

I’m not sure that’s my position. I would say that I have an aware future in any number of future instances of me - either just one instance as per usual, or more than one instance, in the unusual case of transporters or any other magical-exact-copy techniques.

I won’t be both of those people at the same time, but they will both have as much claim to the notion that they are me, as I would if life carried on as normal.

I’m not sure if analogies are useful, but let’s try one:

I work in IT. Yesterday we decided we needed a new print server. We cloned one of the virtual machines in our data centre. The VM host system kept track of which was the ‘original’ and which was the copy (e.g. as an extrinsic property), but the viewpoint of the machines themselves was that they each were the original, carrying on as normal.
The cloned machine even carried on trying to print the documents that the original had been queuing immediately before the cloning.
In every sense that matters to the machines themselves, both were as ‘original’ as the one they descended from, especially as the ‘original’ is just a collection of processes that is never the same from one moment to the next anyway.
What we as end users perceive as a ‘print server’ has continuity of process, but is running on different bits of different host servers at different times. We consider it to have persistent identity (and the machine itself thinks it has persistent identity too!), but when we clone it, both machines now think they had persistent identity - and they sort of did.
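The print-server story can be sketched in a few lines of toy code. Everything here (the `PrintServer` class, the job names, the host-assigned labels) is invented for illustration; the point is just that after a deep copy, the ‘original’ and the clone carry identical histories and identical queues, and only an extrinsic label distinguishes them.

```python
import copy

class PrintServer:
    """Toy model of the VM: identity is nothing but state plus a queue."""
    def __init__(self, name, queue):
        self.name = name          # extrinsic label, assigned by the host
        self.queue = list(queue)  # intrinsic state: pending print jobs
        self.history = []         # jobs already printed ("remembered past")

    def print_next(self):
        if self.queue:
            self.history.append(self.queue.pop(0))

original = PrintServer("srv-a", ["doc1", "doc2", "doc3"])
original.print_next()            # srv-a prints doc1 before the cloning

clone = copy.deepcopy(original)  # the "cloning" step
clone.name = "srv-b"             # only the extrinsic label differs

# Both machines carry the same past and keep working the same queue:
original.print_next()
clone.print_next()
print(original.history)  # ['doc1', 'doc2']
print(clone.history)     # ['doc1', 'doc2'] (same remembered past; they diverge from here)
```

From the inside, each instance’s `history` makes exactly the same claim on the shared past, which is the intuition the analogy is driving at.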

I don’t see any way around it. It’s Plato’s “What is a Man?” paradox. How much of a person can you chop away, surgically or through accident, war, etc., and still have a “man” left? It isn’t really answerable, but there has to be a line somewhere.

Same for “How Long Ago Were You Really You?” Obviously, we grown-ups aren’t infants any more, so, over the entirety of our lifetimes, we’ve changed. I’m definitely not the “same person” I was when I was 20. On the other hand, I’m the same person I was twelve seconds ago. So, somewhere within those painfully broad zones, it must be possible to draw a line.

Existence proofs are always easier than construction proofs!

Far from it: I admired Mangetout’s post, and agree with it completely. I think he’s completely correct, and that consciousness/self-awareness are functions of the body. Identical bodies would have identical self-awareness.

I swear to you I am not just being difficult, but…I don’t really comprehend what you say in this post. I’m sorry. I’m not just being an obstructionist jerk.

I don’t think my views are logically inconsistent. I believe you can be self-aware in more than just one location. Or two. Or twenty thousand. Just keep the replicator running all the time: where’s the limit?

I don’t think this has anything to do with living after death, because I don’t believe there are any identical replications of myself out there in the universe. (The “infinite universe where everything happens an infinite number of times” idea is interesting, but, at present, I do not believe in it.)

Or, perhaps, the wormhole in either Star Trek: Deep Space Nine or Stargate?

What if James Blish was correct in “Spock Must Die” and the Transporter doesn’t disassemble anything, but causes every particle in the target to undergo a quantum jump to the destination?

(If an electron tunnels through a potential barrier, is it the “same” electron? There are actually different ways of interpreting the event! To the best of my knowledge, no specific interpretation has ever been proven correct.)

I was trying to work out a similar analogy based on “mirrored” hard drives. As long as the two are mirrored, there is no meaningful difference between “original” and “duplicate.” All read/write functions are performed by both. Incoming data is split; outgoing data is re-combined.

Only once the mirroring is broken is there any difference. But since that means they’re no longer identical, there is no paradox.
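The mirrored-drive analogy can also be made concrete with a toy model (the class and block contents below are invented for the sketch, not real RAID code). While mirroring is on, every write lands on both drives, so there is no fact of the matter about which is the “original”; only after the mirror is broken do the two diverge.

```python
class MirroredPair:
    """Toy RAID-1: while mirrored, every write lands on both drives."""
    def __init__(self):
        self.a, self.b = {}, {}   # two drives, modeled as block -> data maps
        self.mirrored = True

    def write(self, block, data):
        self.a[block] = data
        if self.mirrored:
            self.b[block] = data  # mirrored write: both drives stay identical

pair = MirroredPair()
pair.write(0, "boot sector")
pair.write(1, "journal")
assert pair.a == pair.b      # no meaningful 'original' vs 'duplicate' yet

pair.mirrored = False        # break the mirror...
pair.write(2, "new data")    # ...and only now is there any difference
assert pair.a != pair.b
```

Up to the moment of the split the two drives are interchangeable by construction, which is exactly the “no paradox while identical” point.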

I’m not even worried about that - the point is that from every moment downstream of the split, both instances legitimately considered their common history to be their own.

Yes, I broadly agree. I think the analogy of brain=computer, mind=program can be taken too far, but that’s an aside; you of course have not mentioned such an analogy.

It’s a good point, you’re right about the mind as a process.
However, the argument seems to be: “The mind is a process, therefore if Mind A is the same as Mind B, Mind A is Mind B”, and this is very much a disputed claim.

In our daily lives, sure, we say this financial transaction is interchangeable with that financial transaction but that’s not because we’re making the philosophical claim that for processes, numerical identity = qualitative identity. It’s because the difference is of no consequence here, so we only care about whether they look the same from a third-person point of view.
But the difference between those two kinds of identity is absolutely critical when it comes to anything with a first-person perspective; it gives different answers to the transporter problem.

Here, in fact, the analogy with software is actually quite telling. Because, when it comes to programs, we use whatever terminology is most useful to us, since it makes no difference either way. The same operation I might refer to as a “move” in one sentence, and a “copy, then delete the original” in the next. But I don’t think anyone here is trying to argue that all three solutions to the transporter problem are equally correct (what would that even mean?).

Yes, but these “heap” problems (“How many grains of sand before you have a heap?”) involve drawing a very different kind of line from the one that you need to draw here.

Heap problems are just classification problems, and it’s fine for the line to be fuzzy, and for it to be somewhat subjective.

But the line between “Neil is transported, but with errors” and “Neil is not transported” is literally life on one side of the line, and death on the other. And the position of the line would be a fact about our universe; it cannot be subjective.
(People may disagree whether moon_neil is similar enough to earth_neil in personality or whatever. But whether earth_neil lives on, from his own perspective, cannot be subjective; either he does or he does not.)

A very similar question can be asked about people who suffer brain damage. How do we handle those cases?

I thought of another fun (and impossible) hypothetical: What if we had a Rejuvenation Machine?

It would return your body to any younger age you want. You can be twenty again! Wow, peak of health! (…if you were actually unhealthy at 20, that’s a bummer; it returns you to your body at an earlier age, not to an ideal body.)

The price? You lose all the memories since that age. You can be twenty…but you lose all memory of your 30s, 40s, etc.

(Now, you still have your external memories… Photo albums, etc. I’d still have all the stories I’d written, all the computer programs I’d created, etc. The past wouldn’t be erased, just not personally remembered…)

Is it a “murder” machine? Is the individual “killed?” It’s like the joke about “being beaten half to death.” If I go back to age 30, I’d be half erased. Half… Killed? Is “killed” the right word?

What if I just go back four minutes? I lose the last four minutes of my memories. BFD, right? I’d accept that in return for a nice hot fresh cup of coffee right now!

Somewhere in between, the price becomes unsupportable. But, even if I do go all the way back thirty years… Is it being “killed?”

In my opinion, yes; I’m not nearly the same person now as I was when I was twenty.

But that is an interesting scenario, I’m not sure at which point it goes from ‘definitely not’ to ‘maybe’.

And I’m one of the people who doesn’t believe restoring someone from backed-up memories provides any useful benefit for the person who continues to exist after the memories are recorded.

So basically…hhhhhhmmmmm…

I am not a biology expert, but I doubt a person can be cloned even down to memories, as memories are not encoded in DNA (hence, cloning DNA won’t reproduce them) but are built up by chemical stimulation over time. Unless, of course, we are not talking about DNA cloning.

Grin! I think, honestly, that “hhhhmmmm” is the real take-away from this thread (and all “Star Trek Transporter” threads in general.)

No, certainly, some sort of speculative, futuristic, perhaps even “magical” technology is needed. Scientists have succeeded in “teleporting” groups of atoms and even molecules – or, since many disagree with that terminology, transporting the “defining information” of matter – so it isn’t absolutely beyond the imaginable reach of science. But, by and large, we have to just speculate on Transporters, Wormholes, Replicators, and the like.

Yet another hypothetical:

If very small differences in molecules between an original and a duplicate define a “life and death” difference… Jack comes home from work, watches TV a while, gets angry (Dodgers lose the playoffs!) and gets drunk. Stinking, puking, falling-down, roaring, weeping drunk.

Is he really “Jack” any longer? There’s a huge chemical difference. If Jack teleported, and the result was as different as the difference between sober Jack and drunken Jack, we’d hold the teleportation to be a major failure. Is Jack “dead” and replaced by something that mistakenly thinks he’s Jack?

There’s no arbitrary line when it comes to (non-fatal) brain damage. We agree the consciousness continues, but with damage, regardless of the severity of the injury.

The question I’m asking is at what point you draw the line between “transported, but with damage” and “not transported”. Clearly there must be a point where you just say Neil is not transported. Otherwise, when I die, we could say I will live on as some random baby or whatever: any person could be considered to be a transport_with_errors of me if we’re allowing any number of errors to still be a successful transport.

The more I think about this, the more I think it’s a serious problem for the “you are transported” position. Because not only must such a line exist, the location of the line (the value of N errors at which the person is no longer transported at all) is unknowable, even in principle. Positing extra, unknowable facts about the universe makes the hypothesis Occam-vulnerable at the very least.

Why? As noted above, it’s the same problem with all of the “heap” paradoxes. How many hairs on his head can a man have and still be a “bald man”? One single lonely hair? That’s bald. Twenty straggling little hairs? A fringe or tonsure? Just a little bald patch at the very top?

Just because there isn’t a bright line doesn’t mean there isn’t a “yes” and a “no.” Fuzzy logic is still logic, and topology works just fine with vaguely-defined boundaries.

I don’t take it as a problem at all for the “you are transported” viewpoint. If anything, I see it as “the fallacy of drawing the line.” (I recognize, of course, the counterpart, “the fallacy of not drawing the line.” But I hope I am not engaging in it.)

Actually it was me that brought up the heap problems, in the context of explaining why they were not the same kind of problem at all.

Here, I’ll slightly rephrase what I said then:

Whether moon_neil is similar enough to earth_neil from the POV of his personality, memories etc is absolutely a subjective question, which can have a fuzzy line, and be a heap-style problem, sure.

But what I’m talking about is whether earth_neil, from his own perspective, is going to continue to have conscious experiences following the transport.
It’s a problem for the “you are transported” position, because proponents agree that there’s a point where the answer is “yes” (e.g. it’s a perfect copy: moon_neil is identical to earth_neil), and a point where the answer is “no” (e.g. a newborn baby walks out of the destination pod, having nothing in common with earth_neil other than species).
And in between, somewhere, there must be a discrete line, because a fuzzy line would require, at some point, “earth_neil continues to have experiences” and “earth_neil does not continue to have experiences” to both be true at the same time. Unlike heap-style classification problems, this is simply P and !P, and must be rejected.

Not at all unexpectedly, I disagree. It is very much like “heap” problems, such as “Who is a bald man?”

I agree there is a line somewhere in the middle. I don’t have any way to draw the line exactly.

That simply isn’t a problem. Some things can be decreed arbitrarily, like letting 18 year old people vote. Others have to fall into the category of “I know it when I see it.”

(I just started to concede that “fuzzy” is wrong, and that I shouldn’t have used that model of set-belonging. But, on second thought, I’m not sure I’m ready to give it up. The “fuzzy set” idea doesn’t actually argue that an element both is and is not a member of the set. It’s a little closer to saying, “There’s a 30% chance it is in the set and a 70% chance it isn’t.” In some fields, you can say that fuzziness entails proportionality of set membership – I’m 96% Democratic, but only 93% Liberal. Even with absolute binaries, like “alive or dead” you can get Schroedinger’s Catnap – is he awake or asleep? – where a wave-form describes probabilities.)
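The fuzzy-set idea in the parenthetical above can be made concrete with a toy membership function. The function and the 100,000-hair threshold are arbitrary assumptions made up for the sketch; the point is only that fuzzy membership assigns a *degree* between 0 and 1, rather than asserting both P and not-P.

```python
def baldness(hairs, full_head=100_000):
    """Fuzzy membership in the set 'bald': 1.0 = fully in, 0.0 = fully out.

    The linear ramp and the 100,000-hair 'full head' are arbitrary choices;
    any monotone function from hair count to [0, 1] makes the same point.
    """
    return max(0.0, min(1.0, 1 - hairs / full_head))

print(baldness(0))        # 1.0  (clearly bald)
print(baldness(20))       # 0.9998 (twenty straggling hairs: still essentially bald)
print(baldness(50_000))   # 0.5  (genuinely borderline, with no contradiction involved)
print(baldness(100_000))  # 0.0  (clearly not bald)
```

A borderline case gets membership 0.5, not a simultaneous “yes” and “no”, which is the distinction this paragraph is defending.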

Anyway, sorry… And, I will repeat, I am not disagreeing solely to disagree. I don’t play those kinds of games. I’m disagreeing, 'cause it’s what I happen to believe. After watching as much Star Trek as I have, I’d get into the Transporter without a qualm.

(The disintegration chambers on Eminiar, no way!)

ETA: at a science fiction convention, I once heard a very talented lawyer give a spirited defense of the pseudo-war between Eminiar and Vendikar. It was really haunting: she made it all perfectly reasonable! “You know you’re going to have wars anyway. You always do. We’re just avoiding the destruction of our culture because of it.”

Why must the line exist? I am only more or less the ‘me’ I was yesterday. Even in one body, my consciousness is subject to change; whether severe change can or cannot be thought of as a discontinuation of ‘me’ is:
[ul]
[li]Not something I would be able to judge (because the ‘me’ doing the judging would be the changed one)[/li][li]Perhaps quantifiable in some way by external observers, but only subjectively.[/li][/ul]

There only needs to be a sharp line of definition as to when a consciousness is or is not transported if you believe that consciousness is like some sort of solid rod, extruded from the past into the future self and persisting from the present back into the past, and that this rod can either be continuous or be cut.

I believe that rod only exists as a concept - in reality, only one thin slice of it actually exists at any given moment, in the present.

Let me try again to describe this, because this really does look like we’re talking past each other.

A person, John, walks into teleporter pod A. That body is obliterated, but at the same time a person walks out of teleporter pod B.

Now the question is, does John have a future after this process? The “you are transported” position is that the answer is “Yes”, assuming the person walking out of pod B is identical to John.

However, we would all agree there’s a point at which the answer to the question is “no”. For example, if the person who walks out the pod is as different from John as just any random human.

So the problem for the “you are transported” position, is that at some arbitrary point we have to say John does not have a future. And we have no way of setting where that line is, or of knowing whether we’re right.

What Trinopus has suggested is that there does not need to be a discrete line, or it can be subjective or whatever. But this doesn’t make sense here. A fuzzy line means that in the fuzzy area of differences both the statements “John has a future after entering the transporter” and “John does not have a future after entering the transporter” are true at the same time. What would that even mean?
And similarly, saying it’s subjective means essentially “John, whether you will still see, hear, experience anything at all after the transport…is subjective”. Again, what would that mean?