Ah but now we get right to the crux of it, because there is no basis for setting such a minimum. If I say, “No, as long as the guy on the moon is > 98% the same, then John now gets to see the moon”, how can you prove me wrong?
Remember we’re not talking about the question of how similar a person must be to still be considered Mijin; that’s “Question_A” as I labelled it a moment ago.
We’re talking about Question_B. We’re asking “Assuming consciousnesses can be transported, how similar do two brains have to be for such a transport to happen?” and not only do we have no basis for setting such a point, we have no way to ever tell if we were right.
And I reiterate that, wherever it is, it must be a hard line. So say the line is at 99.99%. If the transporter creates a person on the moon 99.99% the same as you, you now are seeing the moon. But if the person on the moon were 99.9899999% the same; that’s it, you’re dead, that’s a completely separate person.
Otherwise, what are we saying? That at 99.9899999% you both do and do not transport? Wherever you set the line, there must be a binary flip either side of it.
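The "hard line" point can be put as a toy sketch (the 99.99% figure and the `transported` predicate are purely illustrative assumptions from the argument above, not anything real):

```python
# Illustrative only: any sharp threshold produces a discontinuous
# yes/no flip, however close two similarity values are to each other.

THRESHOLD = 0.9999  # the posited "hard line"

def transported(similarity: float) -> bool:
    """True iff the copy counts as a successful transport under a hard line."""
    return similarity >= THRESHOLD

print(transported(0.9999))      # True:  "you are now seeing the moon"
print(transported(0.99989999))  # False: "you died"
```

However finely the inputs are sliced, there is no third output between the two.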
Only partly: that is the point I am making. Only partly alive, in the sense of continuity of self.
Again, the notion of ‘fully alive but impaired’ implies that the homunculus is fine, just in a damaged vehicle, and you said you weren’t making that argument.
In a scenario where a transporter/duplicator exists, then both can be true, because it posits the unique facility for one process to continue as two processes; one, both or neither of these can then subsequently cease.
OK, how does that work? How does a non-material, non-measurable property of ‘having been’, exert any influence on what things are currently? How could it?
Into what material property of an individual is the past encoded?
But in that case, you run into Cartesian doubt and solipsism. How do I know you’re really conscious? We have no way ever to tell. Everyone you meet could be a philosophical zombie.
What we do have, in most Transporter scenarios, is the ability to ask the subject, to quiz him, to engage him in, essentially, the Turing Test of transported identity. That’s the most we’re ever going to have.
If the guy says, “I’m Val Kilmer, dammit,” what basis do you have for doubting it?
(Assuming it was not, say, Ernest Borgnine who went into the apparatus.)
(Frankly, I don’t see the difference between Question A and Question B. The two, as far as I can see, are the same question.)
You keep saying this…and I don’t disagree, as it makes no difference to my viewpoint. But… Why? Why must it be a “hard line?” Exactly how fine do you insist this cheese be sliced?
99.99 is okay, but 99.989999999 is absolutely fatal, and the person is dead as Oliver Hardy? Why? Again, yes, this runs the risk of the fallacy of not drawing the line, but I think you are committing the fallacy of drawing the line.
You keep insisting this is not a “heap” problem, and, for the life of me (and in a Transporter situation this may be literally the case) I can’t see why.
Even if I agree… Okay. So what? Let’s hope the machine is fully 99.99% accurate. We’ll probably demand it be even more accurate, to leave a margin for safety. Now… Great! We’ve got us a Transporter, and going to the Moon is as easy as taking an elevator to the third floor of an office building. Win/Win!
(That is the only point I came into this thread with. Is it now granted?)
Let’s say that, later, we find, to our horror, that one of our units has fallen to 99.987% accuracy, and there are a lot of victims out there who are not “really” the people they think they are. We have to hold that their originals are “dead” and that they are “other people entirely.”
What do you do? Line 'em up against a wall and shoot 'em? Imprison 'em? Put them in long term care, so they can be re-trained to be “new” people? Remember, as we get very, very close to the “bright line” you define, the resemblance can be very, very close to the original person. At 99.989999, the subject is certainly going to think he’s the original, and will have almost all of his memories. He’ll have a personal identity that is remarkably close to the original.
What’s lost in saying, “We’ll let you continue in the original’s place, with some safeguards, and, also, the Transporter company owes you a huge settlement for damages?”
I’m saying you can come close to transporting – arbitrarily close – even in failure. Ultimately, so close that no difference can be measured.
And… None of this is in any way important to me! My original point, which Tibby or not Tibby says he now groks, even though he doesn’t personally believe in it, is simply that in a successful Transportation/Duplication, both subjects are the “real person.”
Conjecture on the nature of failed Transportation/Duplication, while fascinating, is not what I’m here for. I’m engaging with you on it…but it ain’t my deal.
I don’t know what this means. Either I’m still experiencing something, or I am not.
How does it imply that? Without hesitation we say we’re fully alive right now even though our experience changes moment to moment, even though all of our brains become damaged through age. Does that imply a homunculus?
I’m asking for the terms under which this process continues, and I feel you are avoiding such questions.
Well, like I say, there’s a difference in extrinsic qualities for one; I am the entity at x=1, y=1, z=1. Make an entity at x=2000, y=1, z=1, that’s not me. Swap our positions and I’m now the one at x=2000. Again, this is perfectly consistent with how we normally enumerate and separate objects. To do otherwise is to treat consciousness as a special case.
Sure, but all of that is answering Question_A and telling us nothing about Question_B.
This is why I said the position of this line is unknowable.
In the post where I stated Question_A and Question_B I very clearly stated why those questions were not the same. Indeed, that was the whole point of that post.
Because what is in between “You died” and “You transported, and are now seeing the destination”?
Sure both statements can be false, but how can they both be true?
Once again:
Heap problems are classification problems. In the case of the balding man, maybe 10,000 hairs looks like balding to you, but not to me. But it’s entirely arbitrary, so there is no right or wrong (although more people might agree with me or you).
The statements “You died” and “You transported, and are now seeing the moon surface” are mutually exclusive. At most one of these is true. And which of these is true is not a matter of someone drawing an arbitrary line.
Like I said, we don’t normally like to posit extra rules about the universe just to defend a hypothesis. It’s the kind of thing that leans us away from a hypothesis.
First the strawman: I don’t say kill the other person any more than I say kill Val Kilmer because he isn’t me.
Secondly, on the basis of someone continuing on my life: sure, whatever.
Obviously, if we’re only discussing successful transportations then we have nothing to talk about, since under the bodily-continuity hypothesis there is never a successful transport, even if N errors = 0.
Although I should say at this point that my position is not quite BC. I still say the “consciousness never continues” position is currently the least problematic, but I’d put BC over “you are transported” because at least it doesn’t have problems such as the arbitrary line.
I’m not sure why. What makes me think I’m me is the continuity of memory. If I suddenly disappear and an identical copy is recreated one billion years later, this copy will feel he’s me in exactly the same way, and for the same reasons, that I think I’m the same me I was one second ago. I’m not sure the hiatus matters.
Besides, it seems to me that there might be a “Planck time”, an indivisible minimal unit of time. If this is the case and time is discrete then, if you’re right, there’s no continuity, and you “die” every extremely small fraction of a second, to be replaced by an identical copy that isn’t you.
What makes you think there’s an objective reality (the copy is or isn’t you) independent of your opinion and the opinion of other observers?
Unless you’re positing a soul or something equivalent, the objective reality is that the copy is identical (or 99.99999% identical, whatever…), deciding whether it’s really you or not is purely a matter of opinion.
It’s not that there’s no way to tell as in “there’s a blue ball in this box and I can’t tell whether the ball is light blue or deep blue because the box is closed”, but as in “there’s a ball in this box reflecting some wavelengths of light, and deciding whether this light qualifies as light blue or as deep blue is a matter of opinion”.
Well it matters to me if I get to open my eyes in a billion years, or I’m dead and that’s just another person, as separate from me as I am to you.
Like I say, there’s a third-person question here and a first-person question. Your position may be that those two things are automatically the same. I don’t see any reason to assume that however.
That’s essentially the “no continuity of consciousness” position, yes.
I don’t personally subscribe to that position, but feel it’s the one that best stands up to scrutiny at this time.
Normal consciousness makes me think that. It’s a weird phenomenon, but we’re comfortable with the fact that I will die one day, that my consciousness is separate from yours, etc.
If you were to make a copy of me, unless we have some kind of shared consciousness, I would take him to be a separate entity to me, and if I were to die, I’d no more wake up as him than I would as clairobscur.
With your answers above, you appear to be saying that identical memories alone do not make the other being subjectively “you.” If 2 or more computers (or people) share the exact same memories, that alone does not mean you will continue in the existence of the other. On this we agree, explicitly.
But, then you go on to say (paraphrase): if the 2 are absolutely identical, for all you know, you will continue to exist in the other.
So, if it’s not the exact same memories that allow you to continue to exist in another, what is it in the absolutely identical machines that does allow you to exist in the other? It must be something besides memories, correct?
I agree that there is something besides memories involved in “stream of consciousness” / “your inner movie.” That something is, I believe, the processing and integration of data, including memories. In the computer it’s the function of the processor and operating system. In a brain, it’s the function of qualia and self-awareness.
Where we appear to differ is in our interpretation of the locality of memories, processing and integration. I believe memories are an objective non-local phenomenon, while processing and integration (or qualia and self-awareness) are subjective local phenomena (tied to one brain or machine).
The Turing Test will be passed by both the transported person and the perfect clone, whether they are subjectively continuous with the original, or not. The T Test is meaningless in this experiment (it will show only that a conscious mind or machine is referencing [non-local] human memories); only the Time Machine Test will tell us the answer.
Even if the subatomic continuity of your brain exists as discrete Planck seconds (which I don’t necessarily agree with), why do you believe this would break the subjective electrochemical experience of consciousness? What turns this into a non-local phenomenon? Do you believe local phenomena simply don’t exist in our universe?
I do believe consciousness is more complicated than simply reducing it to a function of our known laws of physics. I’m not implying something of a supernatural phenomenon, just something science has not discovered yet.
I think David Chalmers is on the right track when he posits that consciousness may be a fundamental law, not to be explained by other fundamental laws (space, time, mass…). He also extends this to universality and posits the possibility of panpsychism, and I agree with him that that is less likely than consciousness simply being fundamental. TED video (18 minutes, but worth watching).
I get that you think this important, but I am not sure why.
I have a thought experiment I would like you to try:
You step into a booth that instantaneously disassembles you at the atomic level.
I think we all agree that if we leave the story there, you’re dead by all of our definitions.
However, this machine is capable of reassembling you, seconds or hours later, using the same atoms, into the exact same configuration. The whole material system of you is restored, and whatever ongoing activities were interrupted (physical movement, e.g. heartbeat, limbs, blood, peristalsis; electrochemical transmission of neural signals; chemical processes within your cells, etc.) all resume as if nothing had ever happened.
Is the resumed thing you? It’s the same matter, in the same place, doing the same thing as it would have been doing if the booth had been a simple cardboard box. It’s not ‘extrinsically’ a duplicate at all.
There is some sort of property of ‘having been suspended’ - but I don’t think it has any persistent, non-imaginary effect. What say you?
This is a good point.
The hypothetical of taking the same atoms, very briefly separating them, and bringing them back together is a known difficulty with the bodily-continuity hypothesis.
Just as the imperfect copying machine presents problems for the psychological-continuity hypothesis, as I’ve been trying to explain.
But yeah, this is exactly what I had in mind for why the “no continuity of consciousness” position is the strongest hypothesis right now; neither problem applies to it.
Both instances are “you.” They both have the same sensations, beliefs, Personal Identity, Qualia, etc. If I swap 'em behind a curtain, you can’t tell which is which. The “extrinsic property” is evanescent. You’re talking magic, like a homeopathic chemist with water that once had salt dissolved in it.
I don’t believe they are separate questions. I can’t possibly address them that way.
I never claimed they’re both true. You’ve said that a heap of times, but I haven’t.
And this is simply a belief you hold, for whatever reason, that you cannot defend logically, without appealing to magical thinking regarding extrinsic (not real) properties of objects. The BC hypothesis is something people have made up for the purposes of argument; it has no reality in physics.
One key difference is that I believe both the memories and the processing and integration are Transportable/Duplicable. I see no reason why, if you duplicate both the memories and the brain/body that houses and activates them, you don’t have two identical persons.
I still don’t understand your Time Machine Test, I’m very sorry to say.
If one can object to the identity of objects at (0,0,0) and (2000,0,0) then, in four dimensions, I suppose one can as easily object to the identity of objects at (0,0,0,0) and (0,0,0,2000) (using X,Y,Z and Time coordinates.)
If “bodily continuity” is the key, then continuity across time could be violated just the way continuity across space is.
(Most of us don’t worry too much about that, having gone to sleep last night and awakened this morning.)
Or you are not entirely you anymore, and are experiencing something.
If you are still entirely you, ‘inside’, even after losing parts of your identity or personality, then it starts to sound like the ‘inside’ thing is the real you, and is somehow remaining intact and whole regardless of the damage.
I don’t feel I could have been any clearer. I am a process of my brain that has a single thread, a single future and past, but this is a constraint of the simple fact that my body isn’t capable of diverging into two threads.
Once we posit the existence of a perfect duplicator, the possibility of being a single entity with two independent, parallel, divergent futures arises. Thus I could rightly say that I am alive now, and that (say) one of my futures ends in imminent death; from my current perspective I will expect either to live or to die, when in actuality each of those outcomes will happen to one of my futures.
I as a person will only experience the sensation of staying in a single thread, but there will be two instances of this event in the universe - each of those individuals will separately, but with equal validity, experience the sense of continuing to be me.
Re-reading this post, I actually realised that I can’t tell whether you’re conceding any part of the argument, or declaring yourself the winner, or why.
Please could you explain your view of the disassembly/reassembly thought experiment in straightforward terms of what, how and why?