Transporters and the destruction of the self: What am I missing here?

B. The idea being that I believe the self is the continuous maintenance of the mind, and to me the atomisation and reconfiguration of your physical body, even if body and brain were identically and faithfully restored, would destroy the mind (as separate from the brain). I don’t think the transporter inherently ‘destroys’ your original body so to speak, but reducing your body to energy would destroy your original mind. I can theoretically see how, with a pattern buffer, you could reconfigure your physical self, but transporting the mind is inconceivable to me.

(Tibby, just a request: Could you please not mess with the typeface settings?)

I don’t want to mess with it, but for some reason, when I click to post a reply, particularly when it includes a quote, the input window on my browser defaults to a large font size (3 or above), and the font type of the quote is different from what I write, so I try to match the fonts and make it all size 2. So, what’s the default font here, Arial, Trebuchet, or something else?

Well, just for the moment, let’s take a simpler case of a highly active chemical reaction: open flame. Let’s take a stack of branches and some tinder, and get a nice vigorous campfire going, right on the transporter pad. Now let’s transport it to another pad 1,000 miles away.

Is the fire still burning? Are the flames still leaping? If I film both the departure and the arrival, can I see a significant discontinuity in the pattern of the flames?

I would posit that the “Star Trek style transporter” would successfully transport not only the static chemical objects – the logs – but also the energetic reactions – the flames.

And if this is true, then it is my opinion that the transporter would also successfully transport a slower-moving, less energetic chemical reaction, specifically, the human mind.

But…this is because I believe that human consciousness is a chemical reaction, and does not depend on any extra-physical process. If someone else believes in a soul, or spirit, or other non-physical idea, then… Well, all bets are off, because we have no way of knowing.

It seems to be entirely a case of “YMMV.”

I hadn’t thought about it like that, in terms of the energetic reaction of a fire, but I’m still not convinced. It’s definitely a perfectly viable position, but I suppose it comes down to YMMV, as you say.

I mean, as well as chemical reactions in the brain, there are also electrical impulses and neurons firing and things tied up with all that (not an expert on neuroscience, so I’m hazy there), but are these chemical reactions and electrical impulses not acting on something else separate from the physical brain itself? I understand they act within the context of the physical material of the brain, but does the brain not manifest something quite distinct - the mind or self?

I’m not into anything mystical like spirits or souls or anything like that, but I feel as if there is more to the issue of human experience and consciousness than just the physically observable elements. Maybe advances in knowledge about the fundamental quantum physics underlying our observable universe may shed some light on this - quantum indeterminacy and quantum states and so forth (though I’m not an expert, and perhaps these are already provably invalid applications of theory).

Something seems to tell me, mostly on a non-scientific hunch, that transformation of the body into energy and back to matter, even if identical right down to what you were thinking at the moment of transport, is essentially like cloning. Person B would be absolutely indistinguishable from Person A who went in, but nonetheless, Person A and his individual consciousness would have ceased to exist. I imagine this would be impossible to confirm either way, since Person B would likely not notice any difference.

I would see it the opposite way.

I think that one needs to believe in a spirit / soul / whatever to believe that we could transport consciousness (apart from the trivial case of walking a brain around).

From a Physicalist POV, what exactly is being transported?
And if there is a difference in the copy, would that count as a successful transport or not?

Why muddy the water by bringing conversion from matter to energy, transport, then conversion from energy to matter into the equation? That just confuses the main issue.

The issue is really about the applicability and absolutism of physicalism and how it relates to the mind. If two brains are exactly the same in form and structure, they should have exactly the same consciousnesses, whether the particles are disassembled, transported and reassembled, or the form and structure are simply replicated somewhere else from similar, but not physically the same, constituent building blocks, right?

Neuron activity is, in part, electrical, but, ultimately, all chemical reactions involve the electrons surrounding atoms. The making and breaking of electron bonds is chemistry. Some chemical reactions actually free up enough electrons to produce electrical current – that’s how lead-acid batteries work.

(I don’t want to get too definite here, as I honestly don’t know beans about chem. It’s my second-weakest science – only at biochem am I worse! If I’m wrong, correct me, I crave you!)

But the deeper philosophical question is unanswered, and possibly unanswerable. Is the mind an emergent physical property, like flame or electrical current, or is it something else? And if so…what?

I’m a good old-fashioned materialist, and hold that there are only three things in the cosmos: matter, energy, and information. Since the idealized Star Trek transporter appears to move all three without changing them…I don’t know what “else” there could be for anyone to point to in argument against this.

I have no real objections to an argument from ignorance in this regard; sure, there might be some fourth kind of component to human consciousness. But no one knows what it might be, and thus the speculation remains uncomfortably barren.

I personally believe these are invalid applications of the theory…but they’re held by such persons as Roger Penrose, whose shoes, metaphorically, none of us here are worthy to lace! I think he’s wrong…but that’s a little like a Sicilian assassin calling Plato and Aristotle “Morons.”

Grin! That’s one of the things that’s the most fun about this discussion!

Wouldn’t this apply to the very existence of consciousness in the first place? I.e., wouldn’t the same chain of logic suggest that, without a soul (or whatever), self-awareness itself cannot exist? Why is transportation a specific challenge?

The matter, the energy, and the information. The entire physical hoo-hah. Every cell, every organ, every neuron, and all the excitation levels of every electron.

And…it depends on how big a difference. As has been noted earlier, there are little differences in your body right “now” from just a microsecond ago.

A Star Trek transporter would have to be remarkably good, to preserve every single chemical bond in every single molecule. Even an error rate of one thousandth of one per cent would result in a…um…god, it would be horrible! It might look like a person, but would probably just be a bubbling, burnt-smelling, very dead mound of protein fragments!
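Purely as a back-of-envelope illustration of why even a tiny error rate would be horrible (the atom count below is a commonly quoted rough estimate, not a figure from this thread):

```python
# Rough sketch: how many misplaced atoms would a "tiny" per-atom error rate produce?
# Assumption: roughly 7e27 atoms in an average adult human body (a commonly
# quoted ballpark figure).
atoms_in_body = 7e27
error_rate = 0.001 / 100  # one thousandth of one per cent, as a fraction

errors = atoms_in_body * error_rate
print(f"Misplaced atoms per transport: {errors:.1e}")  # ~7.0e+22
```

Tens of sextillions of scrambled atoms: hence the bubbling, burnt-smelling mound.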

It’s “too much information,” surely, but harmless. Until anyone actually invents such a system, it’s all “handwavium.”

Well, anyway, that’s the viewpoint a handful of us, here, hold. The person who went in is the person who comes out…even in the complicated case when there are two of “him.” But all we can do is assert this, on philosophical grounds.

(It is my opinion that nearly any idea can be defended on philosophical grounds, since this form of reasoning depends largely upon metaphor. If I can find a metaphor that supports materialism, and someone else finds a metaphor that undermines it, how can we go forward from that impasse?)

P.S. I removed your font changes; if that was bad etiquette, I apologize.

But, why can’t the issue of self-awareness be resolved without undermining physicalism/materialism? I think it can. If you have two identical brains, you have two identical minds. That’s not a problem.
And, it’s understood that from the moment of transportation or replication the two brains receive different sensory inputs and therefore continue to diverge into separate individuals. That’s not a problem either, but it’s unnecessary to bring that into the equation while we simply analyze what’s going on between two brains/minds that are exactly alike in form and structure (i.e. before being changed and diverging). We, quite simply, want to know what dynamics exist between the two identical brains—how they relate to each other.

Side note: This is typically a stumbling block in these types of discussions, and this is where it gets complicated (and our goal is to keep things simple so we can more easily analyze the situation): some will declare that there is no time at which the two brains share exactly the same sensory input. With this in mind (no pun intended), the two brains can never really be the same (different sensory inputs create structurally different brains)…the inference is that it is this that makes them separate consciousnesses. I don’t buy that argument. They are separate consciousnesses not because they’ve diverged with different sensory input, but simply because they are separate from each other in location. They don’t share the same space. One is here and one is there. If it’s theoretically possible to replicate the form and structure of a brain, then it’s equally possible to have two brains change in exactly the same way (as they would via similar sensory input) over a period of time. It’s just playing with building blocks.

So, with two identical brains that shuffle their building blocks exactly the same over a period of time, what relationship is likely to exist between the two? Will they share one consciousness (some type of neural network) or will they simply have two separate but equal consciousnesses? I believe it’s the latter, and that’s why I won’t travel by way of transporter.

(No, it’s not bad etiquette to change my font. As I mentioned up-thread, since I got a new computer my browser has been doing strange things with the fonts I type here…I just want it to look like everyone else’s. Is the font in this post correct?)

Yes. I do think that most people on thinking about this problem (and ones like it) will come to the conclusion that either consciousness is not continuous ever, or there’s something profound here that we’re missing.

And I would go with the latter. If it were just the intuition that I am the same person as the guy who started this post, I’d be happy to say it’s an illusion and move on. But there are also phenomena like qualia. I cannot be dismissive of such phenomena while a physicalist account of them appears intractable.

The whole thing is a hypothetical, so nothing should stop us talking about what if the transporter successfully transports my brain but with X differences.

The issue here is that my existence appears either-or: either I’m dead or I’m alive (in whatever form). But the physics appear continuous: there are trillions of errors the transporter could make and still have a functioning brain.
But presumably, if it just made Barack Obama’s brain, you wouldn’t say that I would persist in that brain. So what decides whether I’ve survived or not? Where’s the line drawn?

Also, theoretically, you could postpone divergence to some degree, by putting both persons into an identical environment. With no difference in sensory input, it would require some internal fluctuation for the minds to diverge. I think this would happen – one of them just happens to think about a ham sandwich, while the other just happens to think about a chicken sandwich.

Agreed. The problem is that they are (initially) also the “same” consciousness, at least in my opinion and the opinion of several here. Our language doesn’t work well to describe this.

I can’t imagine any means or mechanism by which they would be in communication. A mind-meld? Simply by dint of being similar to the degree of identity, they would “share” thoughts?

Have you ever said the same thing that someone else said, completely by coincidence? After a long silence, you say, “How about some Mu Shu Pork?” and the one you’re with, at exactly the same time, says, “I could really go for some Mu Shu Pork!”

Even if you really are both having the same thought…how could it possibly be said to be “the same identical thought?”

(See? The language sucks for this stuff!)

Sameness, in the latter sense, requires a pathway of communication. A pipeline.

If, using a duplicator/transporter, you create two identical Jasper Oliveros, and prick one with a pin, would the other one say “Ow!”? That would be a shared consciousness!

Alas, when I quoted it and started responding, it contained a couple of font change formats.

Grin! I’m dismissive of qualia because a physicalist account seems intractable! I simply set it aside as operational nonsense.

(“Nonsense” in the non-pejorative scientific sense, I hasten to add. The question, “Is there life elsewhere in the galaxy” is nonsense…without being in any way absurd, foolish, or inane. It’s a damn fine question…just not addressable at this time.)

I’d say it would be an operational line. You take the guy before, and, say, have him memorize a list of names. Afterward, see how many he can remember. When any significant detectable loss of functionality is observed, you know that the copy is degraded.

It might work like a xerox machine: the first copy is pretty good, but, after making a copy-of-a-copy fifty times, you see very significant degradation. In that case, one might still choose to use a transporter of that type in an absolute emergency. Either transport with less-than-perfect accuracy…or burn alive in a forest fire? Oh, hades: okay, I’ll transport out!
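The xerox analogy can be made concrete with a toy model (entirely my own illustration, not anything proposed in the thread): if each copy pass preserves some fraction p of the information, fidelity after n generations falls off as p to the power n.

```python
# Toy model of copy-of-a-copy degradation: assume each pass preserves
# a fraction p of the information; after n generations, p**n survives.
def fidelity(p: float, n: int) -> float:
    """Fraction of original information surviving n copy generations."""
    return p ** n

print(fidelity(0.99, 1))   # first copy: 0.99
print(fidelity(0.99, 50))  # fiftieth copy-of-a-copy: ~0.605
```

Even a 99%-faithful copier loses almost 40% of the pattern by the fiftieth generation, which is why a first-generation transport might be acceptable in an emergency while repeated use would not.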

Anyway, that question can be addressed scientifically!

I said that I think that qualia is a major unsolved problem of consciousness, and one which does not appear to be tractable from a purely physicalist perspective.

Then you say you can dismiss this problem because it’s nonsense, where “nonsense” is being defined to mean unsolvable at this time.

It seems to me you’re saying the same thing as I am, but just with the addition of a pejorative label that lets you handwave the problem away.

The question isn’t about knowing whether the copy is degraded. It was about whether my consciousness persists or not.

I get into the transporter and I’m killed. At the destination, an identical copy of me is made. Identical apart from X differences. But still a healthy human.
The question is, if my consciousness is really transferred, how does the value of X affect that?

I agree with your agreement. The language to describe self-awareness is a major stumbling block in coming to any sort of either consensus or disagreement on this subject. I still can’t tell whether most of you agree or disagree with my point of view, simply because we may be using different language to describe the same thing, or the same language to describe different things.

That’s why I think it’s much simpler for this type of discussion to separate the term “self-awareness” from the term “consciousness”, even though most people group them together. We can call “consciousness” every mental process that is not self-awareness (e.g. memories, psychology). I take neither credit nor blame for this nomenclature; I’ve observed other theorists of the mind use it similarly. Observing that self-awareness is a higher-order mental process that supervenes on the lower-order processes is valid from the materialist viewpoint as I understand it. The reason for the separation is to justify this: there’s no reason memories can’t theoretically be replicated multiple times in multiple places, and there would be absolutely no measurable difference between them, nor any relational difference between them. However, this may or may not be true of self-awareness, depending on which subset of materialist/physicalist philosophy you adhere to. Yes, I think we can agree that self-awareness can be replicated along with (i.e. emerge from) memories with absolutely no measurable difference (they are all the same valid people with valid self-awareness), but there is a difference relationally. Again, the language fails, but it’s like this: replicated memories are identical and the same; self-awareness can be identical, but not the same. And that’s really the crux of the transporter question.

So, if we can agree on a common language (like the self-awareness/consciousness duality I described above), then we just have to distill the transporter question down to its bare essentials. Questions about qualia and what it’s like to be a bat are very interesting and worthy of their own thread, but I really don’t think they apply to the transporter question. The transporter question is really quite simple, but it can’t be answered with a simple “yes” or “no”…either reply tells us little about your philosophical stance. Example: answer “yes”: does that mean you believe you have a future in both the transported you and the non-transported you; or does it mean that you have a future in neither the transported you nor the non-transported you? (Or maybe you’re just weird and believe you have a future in the transported you but not the non-transported you.) Answer “no”: does that mean you believe you have a future in the non-transported you but not the transported you; or that you have a future in neither; or that you have a future in both, but just don’t like transporters?

That’s why I think the transporter question has to be asked in a multiple-choice answer format to be meaningful, like the one I asked at the bottom of post # 240. Semjaazah answered the question, so I (and hopefully we) know his stance pretty clearly now (he chose “B”, like me, so that tells me he’s a pretty smart guy :wink:)
Does anyone else care to answer that question?

I took pains to explain why, in science writing, “nonsense” is not pejorative.

Setting a problem aside because it is intractable is not “handwaving.” It’s common sense, and science has been doing it forever. I’m not declaring the problem cannot be solved; only that it doesn’t have any approach right now.

(Unlike, for instance, the poor bloke who declared that the chemical composition of distant stars could never be known. Still, until the advent of spectrography, he was essentially correct.)

This wasn’t clear; you asked where to draw the line as to how much degradation was acceptable. I put forward some ideas on how to test this.

The opinion held by many of us here is that your consciousness is not only persistent after transportation, but also subject to duplication.

I really don’t comprehend how this question differs from the one I thought I answered. We test by inquiry, in a kind of Turing Test approach. The same way I would establish your identity if I thought that someone might be impersonating you. I’d ask a bunch of questions, and see what the answers imply.

I think that consciousness is a “sliding scale” affair, not an “on/off” sort of thing. So, your consciousness could be degraded by copying error, but you might still be a conscious person.

Again, we all lose brain cells, all the time. We lose mental capacity, gradually, as we age.

If X is large enough, the copy will behave erratically, or fail to behave in expected ways. It might fail to recognize faces, or not remember data. It might still be fully conscious! The amount of damage needed to destroy consciousness is probably much larger than the amount needed to produce visible behavioral degradation.

(Very, very drunk persons are still conscious!)

I really don’t get what your question is, if these answers are failing to address it.

Fair enough… In general, a guy could spend hours at a time having very little or no “self-awareness” at all, as, for instance, when rapt in a book or watching a movie. We are pretty much always “conscious” – aware of our environment – but we don’t spend a whole lot of time examining our own mental state.

To reprint the question, which I appear to have overlooked…

My answer is A, although I also sort of agree with C. I’m of the opinion that I’m not the same “me” that I was an hour ago anyway, so “It doesn’t matter” has validity in my world-view. But my practical answer is, yes! Definitely! If I could zap myself to Paris or Rome, do the tourist thing, and come home again, at least as safely as flying in a passenger airliner, and at about the same cost, you bet I would! I’d love it!

(As a confirmed coward, I would also wait for a long, long time before doing it, to make sure that it is safe, that transportees pass the various tests I’ve described, etc. My position on the “trailing edge” of technology is very firm indeed!)

But there’s no reason to apply the label “nonsense” to this at all. What’s it adding?

I was taking issue with you earlier saying we should be dismissive about qualia.
But now, it basically seems like your position on this is the same as mine. You aren’t doing the typical “If we can’t understand it, it must be an illusion” which often happens around consciousness.

Yeah, you haven’t addressed my point, but I haven’t had a chance to really state it clearly.
Also, it doesn’t seem as though you actually agree with some of my implicit premises, so I’m going to take a step back from the “copying errors” scenario.

There are two forms of philosophical identity:

  1. Numerical identity means “one and the same entity”. So Clark Kent is numerically identical to Superman.
  2. Qualitative identity means “all the same properties as”. So two brand new pennies are qualitatively identical (obviously within a tolerance).

Now we can take it as read that the transporter makes an entity that is qualitatively identical to you. It looks like you, it has your memories, etc.
The important questions are: Is the duplicate numerically the same identity as you? and Is that necessary… is my consciousness bound specifically to one (numerical) identity?

My answers to these questions would be “No” and “Probably”. What would your thoughts be? I hope this phrasing is clearer.

It doesn’t add anything, surely; it was an observation. It’s a technical term in the specialized jargon of the philosophy of science. It is the correct word for the point (and, even then, I took care to explain this.)

Well, I hope not… I can be as dim as anyone else, and it’s always entirely possible that I’m missing the point entirely.

I would have to say “no” to the first question, but only because of the way it is phrased. If the definition is “one and the same,” then it can’t be, because there is more than one entity.

I want to say, “yes,” they are the same; I want to challenge the definition. I think it fails to serve our needs in this case. But if the definition contains the word “one,” then, shrug, it can’t be.

(“What’s the difference between a duck’s beak?” “None: it’s both the same!”)

This is why I objected, earlier in the thread, to people arguing from dictionary definitions. I think the definitions, themselves, need to be reworked when “sameness” can be mass-produced.

Because the key argument, to my mind, is this: there is no possible experiment, no observation, no test, no means by which the duplicate can be told apart from the original. They are “the same” because, if I shuffle them around behind a curtain, there is no way you can ever know which is which.

The degree of identity goes beyond mere qualitative sameness. As someone said earlier in the thread, it isn’t just another model of the same car, it’s the same car down to the mileage, the pattern of engine-wear, even the scratches on the paint. It isn’t like two pennies from the same stamp, but, rather, more like two dollars from the same printing press bearing the same serial number.

(Would be worth a lot to a collector!)

My answer to the second question is: there are now two copies of “your consciousness.” Each is bound to one body, but there are two of them. They do not communicate; if you strike one, the other doesn’t say “ow.” But, at least for the first few moments, until divergence begins to occur, they both have the exact same thoughts in their heads. They both have “your” thoughts, but completely unbound and uncoupled from each other. Each of them thinks he is “the original,” and there is no possible falsification of this belief.

I disagree. I think describing an unsolved problem as “nonsense” is at best misleading. The term nonsense normally implies incoherency (and as a formal term, I can only find cites that define it like that).

I doubt anyone would call dark matter nonsense, but I bet it happens often with unsolved problems of the mind, simply because some philosophers and scientists wish to sweep these problems under the carpet.

ok

So you’ve given the same response to the first question, but a different response to the second.

I’ll try to give my reasoning.

If someone were to ask me who or what I am, I’d say that I am Mijin; an adult male, blah blah. But really, “me” is something more specific than that – it’s a particular instance of consciousness. That’s what I identify with. I could lose or change everything else and still I would say I live on.

And, since it’s a particular instance of consciousness we’re talking about, by definition there can’t be multiple copies of it. If you were to make 10 perfect duplicates of me, you’d have 11 instances of consciousness. And if you were then to kill me, it would be irrelevant to me that there are 10 copies out there – this consciousness instance would have come to an end.
And that’s exactly what’s happening in the transporter hypothetical.

I believe it’s from Popper.

It’s often used with regard to String Theory, because so many of the ideas in ST are inherently untestable at this time. (How do you measure a “rolled up” dimension; it’s rolled up! It’s out of our reach, almost by definition!)

I may have to stop using it, which is a shame, as, in context, it is (or I was taught it is) a valid technical term.

Well, only because I disagree – or, more properly, have reservations with – the definitions.

I might propose a third definition of sameness, “Operational Sameness,” defined as “Indistinguishable from another thing by any objective or empirical test.” One water molecule is operationally the same as any other. (Or, if you wish, “…when at the same temperature,” or “at the same level of electron excitation,” etc.)

It’s the phrase “by definition” that’s killing me. I don’t accept it as sufficiently “definitional.” I think that a duplicating/transporting machine requires us to transcend this definition.

I think this is one of those classical impasses. I can’t think of any way to go forward; I don’t have anything new to offer. I hope it is obvious that I respect your position; I just don’t agree with it, and yet I am powerless to rebut it. Your geometry is Euclidean, and mine is non-Euclidean. It would be absurd for either of us to say, “You’re wrong,” but we may say, with full moral integrity, “I don’t agree.”

Anyway…I’ve had fun! This is, at very least, a great “Great Debate!”

In another debate on another site but with the same subject, someone has come up with a thought experiment that has made me rethink my position on this question somewhat.

If the original is ‘killed’ in order to create the copy, and the copy has the same pattern of data as the original, then it seems reasonable enough (to me) to assume that the copy inherits the consciousness of the original, as there is a direct causal link between the two entities.

Others seem to think that there needs to be a continuity of identity in order for the consciousness of the copy to be the same as the consciousness of the original. The copy may have the same pattern as the original, but this is no better than a coincidence, and has no significance for the consciousness of the original.

So far I have been unable to appreciate this position, but the following thought experiment has given me some insight into how some people might prefer it. Imagine an infinite universe, extending far beyond our current Hubble Volume. In such an infinite universe, it is entirely possible (some might say inevitable) that an exact copy of any particular person would exist at some non-infinite distance away from our current location. Such a copy would be identical down to the level of sub-atomic particles to the original, and would experience exactly the same events in the same order.

I note that Max Tegmark estimates that an identical volume to ours should be about 10[sup]10[sup]115[/sup][/sup] metres away from our location; any inhabitant of that volume would be identical to one in our volume. Now imagine that an event occurs in our own volume that does not occur in the distant copy volume; an individual dies (perhaps in some bizarre quantum catbox accident) but continues to live on in that distant location.

Does the consciousness of the individual that dies transfer across such an unimaginably vast gulf of space to continue in that location, a volume that has no causal relation to our own volume but is nevertheless identical (entirely by coincidence)? I find it very hard to imagine how that can be the case, although as a patternist I should find this hypothetical transfer just as acceptable as any other transfer process.

Of course there is no causal link between events within the two volumes, so if (like me) you are looking for causal linkage as a substitute for continuity then you would be disappointed.

However, the possibility of the existence of an undeceased twin far across the universe might offer some solace to the victim of such a localised accident; and of course it may be the case that we are all survivors of peculiar accidents that occurred in galaxies far, far away, but did not occur here.