I Gotta Split. Paradox?

Correction: one’s memories. In senile dementia, the atomic exchange between atoms of different isotopes or elements is such that the memories and cognitive modules are corrupted just as I might achieve by knocking a magnet against your PC’s hard drive. The arrangement is changing (just as the arrangement changes when you die and decay).

But metabolic and respiration processes mean that the vast majority of atoms within the volume of the skull are exchanged.

These being inactive molecules which just sit there.

[quote]
I have my reasons for believing that if there is a vessel for identity, neuronal DNA makes a good candidate and I have shared some of those reasons with you.
[/quote]
No, you haven’t. You have kept repeating that you like this idea more, but I can’t understand why, since it violates Ockham’s Razor so.

WHY? Why not simply discard such an unnecessary entity, like garden faeries?

All of these entities are individuals who think they’re me. I am an individual which thinks it’s me. My position is that the ‘thinking it’s me’ is all that matters. If, somewhere, an individual exists which thinks it’s me because it shares all of my memories, personality and preferences, then I am that individual.

[quote]
Basically, my position is: it makes no difference to me on earth if my duplicate is assembled on another planet or not. I will not be aware of him,
[/quote]

Agreed

Agreed

Disagreed.

WHY NOT? “It just doesn’t”, rather like your least favourite food doesn’t, or what?

I honestly, genuinely, do not know how else to say to you what I am saying to you. “I” am not an object but a process, an arrangement. Arrangements sometimes seem to violate physical laws, but that is only due to a myopic and erroneous interpretation of them.

I would like to consider this, TibbyCat, but I simply don’t believe you are listening, or reading things fully. Why do you seek to introduce extra entities which cognitive science considers unnecessary? What, in your view, is the difference between a conscious and unconscious brain given that they are the same atoms? And just to set my mind at rest, please give the post number on which I’d clearly changed my mind in the Schwarzenegger thread.

I’ve just read the thread again, and try as I might I can’t think of how else I might set forth my position. I’ll just repeat the same links in the hope that you might find time to actually read them.

Ockham’s Razor.
A thread in which I changed my mind, from your current position to my current position.
A thread explaining memory as a physical arrangement.
The Computational Theory of Mind.
Physicalism (my position).
Panpsychism (the IMO ludicrous position which yours sounds like.)
The Church Turing thesis.
Cognitive science.

Again I say: [ul][li]Your explanation of the conscious mind still needs our entity (ie. that of a temporal arrangement, which explains the difference between a live/conscious set of atoms and a dead/unconscious set of atoms which are the same atoms). Our explanation does not need your entity (ie. a set of atoms which must stay permanently unexchanged if we are not to become automata).[/li][/ul]Our position is more parsimonious than yours. You are proposing an additional entity which ours doesn’t need.
Please.

Why?

What extra entity? The atoms in neuronal DNA are already there; I’m not including anything extra. Something that is already in the brain, and permanent for the duration of consciousness, and apparently of some special evolutionary significance, is a good candidate to be the anchor of your POV (conscious identity). The anchor is needed IMHO because it disallows what I believe to be an illogicality – the POV of one sentient being having multiple futures. If any position is more parsimonious it is mine: despite protestations to the contrary, you are adding some sort of a link from the original to the duplicate at some time around the period before the duplicate’s future diverges. There is no other way that “you” can have a future in the duplicate (although I would still like to hear your explanation of how you can have a future in something without a link). My question to you is: why not remove that link and accept that you don’t have a future in your duplicate? I believe that my position is the one that survives the cut of Ockham’s Razor.

No need to shout, Mangetout, I’m not hard of hearing. Sure, you can attribute any loss to a “breakdown of function”. But analyze it a little more closely. Why specifically does identity degenerate in lockstep with the degeneration of neurons? Many other conscious processes are not lost in mirrored fashion to neuronal degeneration. If I am oriented x 3 right now, I can assure you that with enough neuronal loss I will be oriented x 0 in the future. And yet many of my other conscious processes will still be functional, some hardly degenerating at all. Doesn’t it strike you as a tad coincidental that my hypothesis (brain minus matter = loss of identity) is one thing that is demonstrably evident in a real-life situation? You can lose a lot of brain matter and still function enough to sustain life – robot-like – but you just can’t hold on to your identity.

I went back to the physics forum that I cited a couple of days ago and found another fine reply to my posted question(s). I e-mailed the author, Anssi Hyytiäinen, and asked if I could post his reply in this thread. He wrote back:

I hope that Anssi will join this forum; his insights strike me as being quite profound and his opinions will be welcome here. I believe that he is a cognitive scientist, but we may have to ask him that question if and when he pays us a visit. His reply in its entirety is as follows:

Anssi seems to take the same position that you adhere to and I agree with up to the point of our main concern (does your POV get transferred to your duplicate). Anssi is obviously in my camp on that issue. I hope that Anssi joins us because my fingers are getting tired explaining my position all by myself (at least you guys get to split the effort). :slight_smile:

You are saying that their permanence is necessary in addition to the neuronal arrangement. I still don’t know why you believe there to be an illogicality any more than in the initiation of other arrangements.

Yes: the “link” is the physical process of duplication, however that occurs.

The “link” is the physical process of duplication, however that occurs.

Because then there would be no duplicate.

I disagree. He is trying to tell you precisely what we have for 6 pages. Please, read the links.

I think that we may be using different interpretations of the word, “link”. With regard to this debate, I am using it to mean an actual nuts and bolts physical link, the kind that does not exist between two objects separated by deep space as in the alien planet reassembly scenario.

Why not? It would simply be a duplicate with its own unique POV – the kind that you would not kill yourself for.

This seems to be a major sticking point, so allow me to elaborate on my point. First of all, it is better to consider the percentage of original atoms that remain throughout the life of the brain as opposed to the raw number of atoms that are exchanged. Why? Because the great majority of atoms that exchange during metabolism as food/waste are the **same** atoms exchanging over and over again. (Example: at time=x, 1,000,000 units have been exchanged vs. 1,000 units remaining unexchanged; however, 70% of the original units remain unexchanged). Secondly, the atoms that do get exchanged are not the type of atoms that one would expect to be significant. Stand next to a pile of all the food and waste that you exchange during a lifetime and you will seem rather small by comparison, but you are arguably more significant than that messy pile. My point: the percentage of original brain atoms that remain unexchanged is significant, and those atoms that do remain unexchanged are those that are of greater importance (i.e. DNA).
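Tibby’s totals-versus-percentages point can be sketched as a toy calculation. All numbers here are invented purely for illustration (they are not physiology): a fixed “structural” core that never exchanges, plus a small set of metabolic slots whose occupant atoms swap at every step.

```python
# Toy model (hypothetical numbers, not physiology): counting total exchange
# events makes turnover look enormous, while tracking the fraction of
# original atoms that survive tells a different story.

STRUCTURAL = 1000   # atoms that never exchange (the claimed DNA-like core)
METABOLIC = 100     # slots whose occupant atom is replaced at every step
STEPS = 10_000      # duration of the simulation

# Every swap is counted as an event, even though it is the same 100 slots
# cycling over and over again.
total_exchange_events = METABOLIC * STEPS

# After the first step, all 100 metabolic originals are gone, but the
# structural core persists for the whole run.
fraction_original_remaining = STRUCTURAL / (STRUCTURAL + METABOLIC)

print(total_exchange_events)                   # 1000000 -- "exchanged" looks huge
print(round(fraction_original_remaining, 3))   # 0.909 -- ~91% of originals remain
```

The same hundred slots generate a million exchange events, which is the sense in which event-counting overstates turnover relative to tracking the surviving fraction.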

I strongly disagree with your disagreement. If you have indeed re-read this thread, you should recall that I agreed with the majority of your and MT’s philosophy of consciousness - I said that many times in many ways. Most of what you said seemed perfectly logical and, with a few tweaks here and there, as good a candidate as any for the correct model of consciousness. (Of course, I don’t really see any illogicality in the solipsistic point of view either, but it does not strike me as being as believable a model candidate. Besides, that means that I’ve been arguing against myself during this whole thread – that’s counterproductive). I believe that the cognitive scientist community at large embraces most of your stated position, and that’s just fine. It’s actually been quite a few years since I’ve indulged in musings and reading about theories of consciousness, philosophy, theoretical physics and other topics in that vein (I’m more into history, anthropology and classical music at the moment), so having you bring me up to speed on the cognitive science scene was not unwelcome. BUT, it was that one narrow focused aspect of your position that I did not and do not agree with – the transfer of POV into a duplicate. I believe, despite your not liking my choice of the term, “POV”, we all now understand it as I mean it to be understood. To recapitulate:
What we agree on: I have my POV today and I will have my POV when I wake up tomorrow. No one else in the real world can ever have my POV, not my parents, not my kids, not even my conjoined twin.
What we do not agree on: In the hypothetical world of splones and duplicates, my position is that they too will never have my POV – each brain has a unique POV. Your position is that your splone or duplicate will at some point at least have “your” POV. This is the reason that you believe that “you” have a future in your duplicate. It is the reason that you and Mangetout answered the “black socks” and all the other thought experiment questions the way that you did. If you believed as I do (that POV is non-transferable under any circumstance), you would have answered those questions as I did (i.e. “I would never kill myself so those other guys can get rich”). With regard to POV (my only real concern), it is apparent to me that Anssi is in my camp. The following snippets clearly demonstrate that fact:

I’m not saying that Anssi’s take on the matter should be taken as gospel any more than yours, mine or anyone else’s. You’ve never met him, and for all you know he could be a 13yo reform school dropout. I can only judge him by his reply to me, and my judgment is that he has a philosophy of consciousness that is worth listening to. I’m not sure, but I surmise that he is a cognitive scientist and that is why what he is saying should ring true to you, since you embrace the cognitive science philosophy of mind. It rings mostly true to me too. The point of departure concerns the transferability of POV: you and Mangetout take one viewpoint; Anssi and I take the other viewpoint. It is the one main thing that we have been arguing about for many days.
I hypothesize that your position on the transferability of POV may not be the same position taken by the cognitive scientific community at large. It would be interesting to learn if that is the case. I would also like to learn the reasons **why** Anssi believes that POV is non-transferable (his reasons may be completely different from mine). I would like to pick his mind (figuratively, not literally :smiley: ). Hopefully this thread will continue on until we learn even more about consciousness.

All this demonstrates is that ‘identity’ is not a particularly durable thing. Either way, you’re still avoiding my primary objection to your thesis, which is this:
-Components of the brain that are performing any active role are subject to exchange of their component atoms.
-Components of the brain that are not subject to exchange of atoms are thus because they are not performing any active role.

So the bits that aren’t changing, aren’t changing because at that time, they’re not doing anything.

Please describe the interface between your alleged ‘identity stored in atoms of neuronal DNA’ and the normal electrochemical processes of the brain - how does the identity actually move the brain, in your model?

I really do not see a problem with the neuronal DNA atoms being inactive. I consider POV to be associated with, but separate from the process of consciousness. While the electro-chemical processes of consciousness necessitate active cells/molecules/atoms, the static nature of POV (think of it as that which anchors the process of consciousness to the brain) does not necessitate active atoms.

I have seen “teeth” mentioned a couple of times in this thread, so let’s use that to illustrate a non-regenerating entity in another active biological process. The process of digestion commences within your alimentary canal. It is an organ that consists mostly of fast regenerating cells within which complex processes take place. At the proximal end of the alimentary canal are your teeth. Though inactive and non-regenerating themselves, they are involved in the very first active step in the process of digestion: mastication.
Digestion : teeth :: Consciousness : DNA

In my model, identity (POV) is not the process of consciousness, and it is not something that moves the process of consciousness, it is simply the personalization of the process of consciousness. It is that which holds your consciousness to your brain and makes it a personal affair – allowing you to perceive the world through the five senses of your unique brain and giving you a unique POV.
As far as what the interface looks like, I hold the appearance to be self-evident. The electro-chemical processes of the mind simply proceed normally within the physical matrix of the brain, part of which consists of non-regenerating DNA molecules. It simply is as it is, nothing more.

Here is a quite flawed, but perhaps helpful analogy: Imagine the process of consciousness as being a cloud of swirling metallic particles that floats in the air, not unlike a baitball of sardines corralled by yellow-fin tuna. The cloud is transferable and able to be duplicated. Extra-corporeally the cloud has no POV; it has no sensory input; it has no sense of self. Now, puff that cloud over toward your duplicate’s skull, and the hypothetical magnetic atomic balls of his neuronal DNA draw the cloud in through the ears and anchor it in the brain. Now the cloud of consciousness is personalized. It perceives the outside world and the duplicate’s orientation to that world through five unique senses. Draw that same cloud in through the ears of a different duplicate and it will be anchored to, perceived by and oriented toward this similar but unique individual.
Fish is brain food :slight_smile:

Then please describe the process by which POV makes any difference to the living person - how does it interface with the consciousness if it is inactive?

Silly comparison; although tooth enamel is more or less biologically inert, the teeth perform a conspicuous mechanical role in the processing of food. Please describe how DNA that is packed away in the nucleus and not interacting with the rest of the brain can perform the function of consciousness.

Which is the downfall of the whole idea; you’re just not thinking this through - you’ve already acknowledged that the neuronal DNA is not involved in these electrochemical processes, because it is inert and unchanging.

That’s lovely, another tangential analogy. There’s a HUGE and significant flaw in the analogy though; the components of the bait ball are ACTIVE in the system. This is simply not true for nuclear DNA in your neurons; or in those cases where it is true that the DNA is active, then it is subject to change in its chemical composition.

You can’t escape from this; if a component of the brain is electrochemically inactive, then it can stay the same, but do nothing except occupy space; if it is electrochemically active, then it can perform a function, but it cannot remain unchanged.

If there was not at some point some physical process which “read the arrangement”, how did the alien planet know what to duplicate? There must have been some physical ‘reading’ followed by a physical transmission of that arrangement: that’s what I call an actual nuts and bolts physical link since “physical” describes both particles and waves.

Because no duplicating process would have occurred.

That makes no sense - you are arbitrarily clutching at straw statistics here.

But the waste atoms are the ones involved in consciousness, while the DNA atoms were merely the ones involved in building the original apparatus. Your duplicating machine takes the place of the DNA, and in fact renders the DNA utterly redundant.

Because they have/are different memories.

I agree that each conscious brain has a unique POV, because it will comprise different memories.

Not quite: I am defining the word “you” to mean “the person having X memories”. If two people have the same X memories, they are both “you”. After accruing some different memories, the original and duplicate are different people but each just as much “you”. Indeed, as your friend says, “the old you” arguably disappears whenever a new memory is stored in that arrangement of largely ever-changing atoms.
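SentientMeat’s definitional move can be rendered as a toy predicate (the memory strings and function name below are invented purely for illustration, not anyone’s formal theory): “you” is whoever carries memory-set X, a test over memories rather than over particular atoms or bodies.

```python
# Sketch of the definition: "you" means "the person having memories X".
# Identity is then a predicate over memory contents, not over atoms.

def is_you(memories: set, x: set) -> bool:
    """True if this person carries all of the reference memories X."""
    return x <= memories  # X is a subset of this person's memories

x = {"childhood home", "first kiss", "this debate"}

original = set(x)
duplicate = set(x)                 # at the moment of duplication: identical

print(is_you(original, x), is_you(duplicate, x))   # True True

original.add("watched the duplicate beam away")    # futures diverge
duplicate.add("woke on the alien planet")

# Each still contains X, so each remains "you" under this definition,
# even though they are now different people.
print(is_you(original, x), is_you(duplicate, x))   # True True
```

On this reading the question “which one is really you?” dissolves: the predicate is satisfied by both individuals before and after divergence.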

Anssi is chopping and changing his working definition of “you” throughout his reply - some confusion is bound to result on your part. That quote is in direct contradiction with this one:

How can “you” split and stay? This is the key to my position. Under my definition of “you”, there are two “yous”. Under your definition of “you”, there are arguably no “yous” after new memories have been formed and new atoms largely exchanged in both of the separate individuals.

And it is a new one which gets created every morning, in both individuals. See?

I think this has come down to a simple diagram and the choice of descriptors for it. If the “original you” is a horizontal line (-----------), then at the point of duplication (*), your position is that the “new you” forms a T-junction:



                  |
                  |
------------------*----

…whereas Mange and I suggest an equal bifurcation:



                      /
                    /
------------------*
                    \
                      \

I think it’s pretty much become a semantic merry-go-round over the word “you”. I’ll be stepping off soon, I think.

I don’t know why you and MT can’t conceptualize that something active can rely on something inactive in order to express itself fully. I’ve given examples and analogies that you simply do not equate to consciousness. Evolution for some reason deems DNA to be important. Evolution is ultimately responsible for the development of consciousness. I think that it is not illogical to assume that the bottom line reason for the development of consciousness is to protect that which is deemed important – DNA. Consciousness should revolve around DNA; it should be identified by DNA. One is the protector, the other the protected. One is active, the other inactive (for the time being)…so what?

That’s simply ludicrous. Just take it one step further. If your argument is going to stand up it has to do so under all hypothetically possible conditions. Let’s say that the omniscient reassembler used a blueprint, or any variation thereof, to create or recreate 2 beings with accrued memories at a particular point in time separated in space. (If you still maintain that a physical link must be present during any act of duplication, then at least consider that it must be a link hypothetically able to transfer POV). Are those two beings at any time (except in the past) going to have the same POV? If they don’t, there is no possible way for either to have a future in the other. If you say that they will, then you either do not understand POV the way that I do (and other outsiders that I’ve found do), or you are not thinking it through properly.

Do you not understand “example” or the point? I thought that I was making a simple point as simply as I could, but apparently not simple enough. “Example”: meaning that the numbers/units do not apply to the brain whatsoever, it’s just an…example. The point: if you concentrate on total numbers, “exchanged” looks more significant than “unexchanged”; if you concentrate on percentage changed over time, the “unexchanged” is clearly more significant. Percentage changed is the better way to gauge our original brain atoms. This was to address your earlier claim that the unexchanged atoms in the brain are insignificant.

You imply otherwise. If the duplicate has a unique POV and, from the moment of its creation, always had a unique POV, then the original will not have a future in the duplicate (and your answers would be different in the thought experiments). If you instead mean that the duplicate now has a unique POV, but at some point (most likely at the moment of creation) had the original’s POV, then I have a problem with that on two fronts:

  1. It implies, at least for the moment in question, that two beings had the same POV. I find that to be an illogical proposition.
  2. It implies that the duplicate had one POV and then switched to a different POV. That proposition is not only illogical, but certainly less “parsimonious” than my position: the duplicate always had a unique POV.

What I don’t really understand is why you resist considering that duplicates always have unique POVs and that you may not have two or more futures. It is just a matter of moving the marker a little bit to the left. I do not see how that would violate the essence of the physicalist/cognitive-scientific view of consciousness, and it really is the less bizarre and less complicated theory.

I think analyzing this sentence of Anssi’s is in order. The two most important words in this sentence are “having been”. You could argue that the duplicate at one time had your POV (I still view it as illusory, because the senses that perceived the world and formed the original memories came not from the biology or POV of the duplicate, but from the original – but this is merely a semantic difference), but only in the past tense. At any point in time that a duplicate is made, the original POV is only in the memories/past tense/history/before now, not in the present, and not one nanosecond into the future. How can you have a future in something that had your POV only in the past? Think about it. You would be staking your future life on someone who only remembers being you. Resolve this by moving the marker a hair to the left.

The question still boils down to whether or not the original “you” has a future in any other “you”. I presented my question to Anssi in the context of my original “splone” model (one original undergoing mitosis-splitting-and forming 2 splones). I don’t believe that Anssi was using the term, “split” to mean anything other than the physical split of the original and I do not believe that he would have used that term when referring to the duplicating machine or the Star Trek transporter. Remember, my position on the splones is that the original POV was destroyed when the original was killed by being ripped in two during the act of mitosis. The two halves of his “dead body”, in essence, were used in the formation of the 2 splones, both of which have their own unique POVs and in neither does the original have a future.

‘Inactive’ means ‘not doing anything’ - what am I missing?

The problem is that you appear to think your endless analogies prove something - they don’t - for the most part, they don’t even make very much sense.

Well, duh. DNA is the chief mechanism by which heredity occurs; evolution requires heredity, absolutely, otherwise it isn’t evolution, it’s something else - by definition.

Evolution is also responsible for the development of, say, spoken language, religion and (less directly) electric scooters. So what? None of this means that the platonic essence of these results is somehow embodied in the DNA itself.

DNA is the mechanism by which organisms build themselves; this is true both of sapient organisms and true of the bacteria living between the toes of the sapient organisms. DNA happens to be the mechanism by which sapient organisms build the apparatus by which they are sapient, but as you say… so what? DNA ultimately does all of these things, at a very low level; why do you specifically pick out consciousness as a process in which it has high-level involvement?

That’s like saying that the functioning of your house depends on the builder staying alive. Once he’s built it, he’s irrelevant.

How did the blueprint get made without employing some physical process to do so?

And that is what I am saying is an arbitrary and irrelevant focus.

No, I disagree. Read my answers again.

WHY? “It just is”???

Agreed, just like I am a duplicate which had “my” POV in the past tense.

Because that is what I, and your cognitive scientist friend, think that consciousness and POV are: a sorting of sensory input into memory, from present to past.

Like, say, me, this morning?

I could just as easily agree with this, so long as we agree that there was not really a future in the original either, since new memories were being formed in the atoms which were largely continuously exchanging.

Ok, I browsed through the thread now. Didn’t read quite all of it, but I think I have a decent overview of the discussion.

First of all Tibby, don’t feel bad for people acting the way they did in the physics forum. I don’t frequent there because it’s infested by trolls and morons :slight_smile:

Yes, my reasons are completely different. But before we go there, looks like some words about self-consciousness are in order.

Looks like your intuition drove you to explain consciousness as a very low-level physical process, somehow tied to our brain processes, which feed the sensory data to this tangible “self-being”. There are people who have a similar view and try to explain consciousness through quantum mechanics.

However, there is no indication apart from our intuition to tell us this is the case. And furthermore, as we peel down to the lower abstraction levels of physical phenomena, the “components of nature” get increasingly simpler. The actual method by which this low-level “matter” gives rise to mind always goes completely unexplained. In fact, “consciousness is a property of matter!” is the closest thing to an “explanation” I’ve ever heard from a proponent of this idea :slight_smile:

But then there are some very good reasons to believe just the opposite is true; that it is the very highest level of our learning processes that supports our self-consciousness. Or “POV”, or feeling of existence, or “qualia” as some people like to call it (usually people who try to use it to prove our “self” is a tangible thing, not realizing qualia is just a fancy word for any conscious experience)

So, what is mind?

One way to put it is that this is a problem of “predicting the motion of the mind from the underlying substrate”. If we make an assumption that the laws of nature are indeed explicit, there is nothing there that would explain how we are in control of ourselves and our thoughts. Except that there is!

It is this ostensible conflict between explicit laws of nature and our subjective experience of self-control that is the key point in understanding just what we are. After all the blood-shedding over semantical issues has subsided, it is here that the views finally separate. This is why I find it imperative to explain how an explicit process becomes self-conscious.

A short answer:

Because of evolution, the animal branch of survival machines adopted such a survival tactic as to use a nervous system to react rapidly to dangers in a dynamic environment. A sophistication of such nervous systems enables animals to observe the surroundings, and draw simple assumptions as to how the world around them works, and thus predict the unfolding of a potentially lethal situation (by interpreting the environment “correctly”), and dodge dangers before it is too late. Brains are basically learning machines, because evolutionary learning(/adaptation) within an organism is much more efficient than the “learning” which happens through DNA.

In humans then, this has gone so far that as we live and learn for a few years, finally we simply make a semantical assumption “I exist”, and thus we come to interpret the information of the world around us (through our senses) as something that is “happening to me”.

Mind you, all learning like this is completely semantical. For a baby, all the information that is coming through the sensory systems is completely alien at first. It bears no meaning as there is no information to associate it with. We are forced to build a web of association - a worldview - which is not sitting on any solid base at all. Meanings, associations, self-supporting circles of beliefs is all it is.

A better (longer) answer I wrote some time ago is here:
http://tinyurl.com/9mjrc

I would write some things a little bit differently now, but I hope you can get over it :slight_smile: (One thing that people usually find false is the example of a rock rolling down the hill and an animal avoiding it. They think it is instinctive to avoid falling rocks. But the fact of the matter is that we first need to actually learn something about physics and the world before we know falling rocks are dangerous to us. And in any case, any example where you know you are consciously avoiding danger would apply there)

After you’ve read it, note that such semantical learning is likely to happen at a much lower level than we intuitively think. When we try to tackle any new task, we use our conscious abstraction level at first, but eventually the task falls down below our conscious level (and we perform it much more effectively). This is true for walking, talking, seeing…

Babies most likely see the world upside down at first - if it is fair to say that their brain deciphers visual information from the eyes as “visual information” at all. There is nothing to base this strange stream of information on as no worldview has been built yet, and our brain is forced to build such a “circle of beliefs” that seems to make sense to base our learning on. One such assumption that “just seems to make sense” is how we see the world upright even though it is upside down on our retina. It just falls better in line with information from other senses that way. If you wear glasses with mirrors so that you see everything upside down, then after a couple of weeks you will see upright again. You have become so used to this task of inverting the image in your mind that you don’t even perform it consciously anymore. (And when you remove the glasses, say hello to upside-down world again)

So, we are learning machines, and our brain just has drawn a logical assumption that “I” exist. We have come to possess a semantical concept of “existence” and “Self”. The concept of “self” is just a token in our worldview, and our worldview is what we use as the base to interpret our sensory information.

The learning process itself IS EXPLICIT, but the worldview that gets built is COMPLETELY subject to our experiences. We are different people because we have had different experiences of life. We feel in control of ourselves and our thoughts as we have come to interpret the world as something that happens to us, but strictly speaking, we are slaves of our knowledgebase(/worldview) as we make our decisions. Even the decision to postpone a decision is something we make according to what we have learned and what we assume will work.

What we are, is basically our memories, or our worldview, as was mentioned in this thread. If our worldview was reset and built again from scratch we would not be the same person anymore in any real sense of the word. If our worldview was replaced by someone else’s, our body would basically come to host a copy of this other person. Our own self would just disappear, or die if you will.

I am struggling to keep this short, but the fact of the matter is that this idea is hugely interconnected with almost everything I see around me. It explains all the phenomena of the mind I can think of (split personality disorders, autistic savants…), and if you ask me, the “hard problem” is sufficiently solved.

I think that should get you started in seeing a different view of what we are than an unexplained sub-atomic layer.

What about all that duplication and killing stuff then?

Well, it is true that we are just our memories, but this does not mean I’d be happy to get duplicated and have “the original” killed.

If I agreed to that, I should also agree to a scenario where I get duplicated first and then killed the next day. “Hi, we duplicated you yesterday. Here’s another you. We will give this other guy a million dollars if you let us kill you.”

It doesn’t change the scenario if the killing and duplication occur at the same time, or even if the killing happens first. If I get hit by a bus and my backup copy is then awakened back at home, I, as the original learning machine, do not gain the experience of shifting into a brand-new body. Even if my brain state from the moment of death were restored, the original learning machine would not be affected.

I wouldn’t actually mind being copied in this manner, but I would not choose to get hit by a bus just to get home faster either. The physical brain that is smeared all over the pavement is not going to have an experience of getting home.

In other words, as a learning machine/process, I find it important to actually have the experience of “shifting” onto someone else’s “hardware” before I’d agree to the killing stuff. The original lacks that experience. The original is a completely different learning machine from the copy. The copy is not eligible to decide the fate of the original.

And furthermore, the mitosis example reveals something more about just WHAT we are. Suppose evolution had given us the capability of splitting into two just like that; then at the moment of splitting you would simply become two different learning machines. Both of them are you, a new you. No information would pass between them anymore, but killing either one would be just as sad as killing any person on this planet.

Neither one would see the splitting event as if someone had been killed or had disappeared. They would find it perfectly natural to split a worldview into two copies, go their own ways, and learn new things in life. If they were to return to being one later, it would simply involve merging their worldviews back into one (resolving potentially conflicting ideas). You’d find it natural to explain to your friends, “I was in the Caribbean as the Left-one, and in Antarctica as the Right-one. Turns out we both ate lobster at the same time, what are the odds?!”

(I’m not sure whether the above is as plain to see for you as it is for me, after having dealt with my idea of consciousness for as long as I have… In any case, the only indication of my idea being “the truth” is, as always, its lack of paradoxical problems.)

Although I must add that the closest thing to a paradoxical problem arises when you consider whether it is possible to simulate a mind by simulating the physical matter of a brain in discrete timesteps. In a completely virtual environment the rate of timesteps doesn’t make any difference either; we could update the system every 10 minutes if we wanted to.

The simulation doesn’t flow forwards like physical matter, but if the simulation is accurate enough, a process/behaviour should arise that would basically insist it is experiencing qualia, just as we insist we do. Its behaviour would be the same as that of its real physical counterpart, and it would be capable of arguing on the internet to absolutely no end about whether it experiences qualia or not. And if it knew it was functioning through discrete timesteps, it might become convinced, on this ground, that its qualia must in fact be an illusion. You might too.
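The rate-invariance point can be illustrated with a toy sketch (this is nothing like a brain simulation; the `step` rule is an arbitrary stand-in I made up purely to show that a discrete-time process depends only on its previous state, never on how fast the steps are executed in wall-clock time):

```python
# Toy discrete-time simulation: the trajectory depends only on the
# update rule and the initial state, never on the pacing of the steps.

def step(state):
    # Arbitrary deterministic update rule (a stand-in, not neuroscience).
    return (state * 31 + 7) % 1000

def run(initial, n_steps):
    state = initial
    history = [state]
    for _ in range(n_steps):
        state = step(state)
        history.append(state)
    return history

fast = run(42, 10)  # stepped back-to-back

slow = [42]
state = 42
for _ in range(10):
    # Imagine a 10-minute pause here; the simulated state can't tell.
    state = step(state)
    slow.append(state)

assert fast == slow  # identical trajectories regardless of pacing
```

Whether you update every nanosecond or every 10 minutes, the simulated process passes through exactly the same sequence of states, which is the sense in which the update rate "doesn't make any difference" from the inside.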

Well, I have a lot to say about this, but I’ve been going on for too long already… :slight_smile:

Welcome, Anssi, and many thanks for putting in the time and effort for that summary. For what it’s worth, I agree with your entire post (although Tibby mightn’t believe so!).

Of course I’d have grave reservations about stepping into the device or transporter or whatever. After all, it could malfunction and leave me nonexistent, or with an inconvenient identical twin (in either booth) who owns half my house, or with the memories and predilections of whoever used it last.

But ultimately, if I could be convinced that it really would feel to me merely as “paradoxical” as passing out and waking up wherever someone carried me, I’d take that million dollars (roughly five million markkaa, I believe :)).

Welcome to the Straight Dope boards, Anssi. I’m glad you finally got here to even the teams up :wink: . Actually, I believe that all four of us are in agreement with more than 95% of this consciousness theory business (although SentientMeat mightn’t believe so!). Our primary point of departure concerned one small detail: staking your life to live on in your duplicate. I stated my position as the original POV not flowing into the duplicate. I think this is synonymous with your saying the original would not have the experience of “shifting” onto someone else’s “hardware”. I went down the road of atoms not really to explain a mechanism of action for consciousness, but only to find some difference between an original brain (expressed temporally) and a duplicate brain. That road was too long and admittedly, perhaps the wrong road to take (Mangetout and SentientMeat made me do it!). :smiley: And, take no offence about that 13yo reform school dropout crack…just a joke…I even forget who said it…(it was Mangetout).

Your post was most illuminating - it makes sense, and I too agree with it (I plan to read your linked long version soon). Although, I’m going to have to chew on that split/rejoin scenario you mentioned a little longer. My inclination is to think that the original would not feel shifted into the copies (in the same way that you describe the original not feeling shifted into a non-mitotic duplicate). I would not agree to undergo mitosis because I feel that it would entail my “death” (not feeling shifted forward), even though two valid copies of me were the result. If I were one of the copies, I would not agree to be rejoined because I feel that too would entail my death (no shift), even though a merged “me” would result. Net result: 4 identical people, 3 brain deaths, 0 dead bodies, 0 shifts. The re-merged copy would still rightfully be able to say, “I was in the Caribbean as the Left-one, and in Antarctica as the Right-one. Turns out we both ate lobster at the same time, what are the odds?!”…but he would say it as a non-shifted duplicate. Is this the wrong way to view that scenario?

One more question for today: I agree that no paradox exists in your interpretation of consciousness and duplicates not shifting.
…but…
Would it be a paradox if the original did experience a shift into the duplicate? My feeling is “yes”. If the original shifted into 1 duplicate, it must also shift into multiple duplicates (as well as experiencing shift in the original brain over time). The only way that I can interpret multiple shifts would be to invoke an exchange of information/networked minds, and to me that does not seem logical. Am I interpreting this incorrectly?
Anyway, welcome aboard and we hope you will stay afloat with us a while.

It’s important to note that the copy fully believes itself to be you (and for exactly the same reasons that you believe yourself to be you… in fact, I’ll call it you). The copy might, on exiting the duplicator booth, be screaming “no, I’m the original, I was transferred over here! I really was! That thing over there isn’t me any more, it’s just pretending to be me!”

Yeah, something like that, although we have moved to euros years ago :wink:

Anyway, if someone asks you whether it’s OK to duplicate you and kill the original, I suggest you demand that they do all the duplication stuff first, and only then give them your answer about whether to kill you (the original) or not :wink:

I suppose the problem lies in the fact that we don’t think of ourselves strictly as learning machines in everyday thinking, and it is pretty cumbersome to really twist our brains into that kind of thinking. Even if we know that we experience life consciously while not existing as anything but a token in our brain processes, it might not be trivial to imagine all this shifting-between-bodies stuff, because our intuitive interpretation of the world makes us feel so much like a self. I sometimes call it the case of “placing unwarranted importance on self”.

Basically what we do agree on is that if you are scanned & duplicated, you, as the original learning machine, don’t feel any different. You might not even know a duplication has taken place, even though the duplicate is absolutely convinced he is you, as Mangetout mentioned. (And it really would be just as wrong to kill the duplicate as it would be to kill the original.)

The mitosis example has one slight difference which beautifully demonstrates something more about ourselves. The important difference is that there actually occurs a physical event (splitting) which enters into the history of BOTH copies. They both remember the splitting event having taken place.

It is impossible to draw a line that would correctly show where the original went in the case of mitosis, or whether “the original would feel shifted”; that is where everyday thinking breaks down. It is merely mystifying if you think about yourself as something “original” shifting from the past into the future, or if you think about the mitosis event in terms of “where do I go if such an event took place”. Our semantic concept of “I” is not sufficient to deal with that.

Because you are just a “semantic interpretation of a learning machine”, there’s nothing to really “go anywhere through time” in any case. Every moment you think about something consciously is your own interpretation of an event that happened to you, yet it cannot be said that anything “essentially you” ever existed. You just learn new things and remember your history.

That should reveal to us how the mitosis would work. Basically, right after the split, both halves would start interpreting the world on their own, and both would remember the splitting event having taken place. They would both feel that they have always been “you”, and that another person had just jumped out of them (and they would both know the other half feels this way too). And they are both just as right.

So essentially they are both “you” just as much as you are you while reading this, but since no information would pass between them anymore, they would not know what the other is sensing or learning. Neither one would agree to get killed, obviously.

And if they returned back together through a physical event of merging into one, then the key point again is that they really do gain a physical experience of merging into one. If all the experiences of the split parts were merged into one, the resulting learning machine could actually remember his history as “having been split into two places and then having been merged into one”, without being able to recognize “which one” he was while his body was split. (Or alternatively the merging could be performed so that the two full worldviews would exist in the same physical brain as separate entities, which is basically sort of like a split personality disorder; the two different “persons” would be switching places, only one being active at a time. :slight_smile: )
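The merge can be sketched as combining two belief stores while keeping both versions of any belief that diverged after the split. To be clear, the dict-of-beliefs representation and the conflict rule below are my own illustrative assumptions, not anything from cognitive science:

```python
# Toy "worldview" merge: shared pre-split memories stay single entries;
# beliefs that diverged after the split are kept as a pair, so the
# merged machine remembers having been in both places at once.

def merge_worldviews(left, right):
    merged = {}
    for topic in left.keys() | right.keys():
        l, r = left.get(topic), right.get(topic)
        if l == r:
            merged[topic] = l       # shared pre-split memory
        elif l is None:
            merged[topic] = r       # learned only by the right half
        elif r is None:
            merged[topic] = l       # learned only by the left half
        else:
            merged[topic] = (l, r)  # conflicting: remember both halves
    return merged

left  = {"ate": "lobster", "location": "Caribbean"}
right = {"ate": "lobster", "location": "Antarctica"}

m = merge_worldviews(left, right)
# m["ate"] == "lobster"
# m["location"] == ("Caribbean", "Antarctica")
```

The merged machine remembers eating lobster once (both halves agree) but remembers two locations, mirroring the “I was split and existed in both bodies” recollection described above.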

Although the split halves might hesitate to merge beforehand, since they might feel they would perhaps lose control of themselves. They could have this thought only BEFORE the merging, since AFTERWARDS the learning machine would feel like it had just been split for a while and had existed in both bodies.

In other words: it is likely that if we actually had such a splitting capability, our semantic interpretation of “what” we are would be very much different from what it is now. Whether these splits and mergings entail the creation and destruction of different persons is a semantic issue. It can be seen that way, but then it is also correct to see every moment of your life as the destruction and creation of a person.

I hope I don’t sound too self-contradictory up there; there’s just no good vocabulary for talking about these things :slight_smile:

Heh, oh, this reminds me: I may not be a 13-year-old school dropout, but I am not actually a professional scientist either, as you speculated. I suppose I’d call myself more of a hobbyist philosopher :wink:

Yes.

I assume “duplicate” here means the duplicate in such case as the teleportation example where the source pod fails.

Of course the problem is that, whether or not such a speculative “shifting of the original” took place, the duplicate would report having been shifted in any case. We would need to observe something happening to the original body; in this case I suppose it would need to drop dead before we really knew. If this happened, it would require us to revise quite a few things in our scientific view of the world :slight_smile:

Oh, and I should still mention that I know my view of consciousness is simply too much for many people. They find it far too uncomfortable to think that perhaps they don’t exist as anything tangible at all. Or they find the consequence that an AI could be self-conscious just like we are to be “too much”. So they refuse such an idea and throw out an argument along the lines of “consciousness is a property of matter” or something like that, which supports their view of a tangible self without actually explaining what it is in this mysterious property that causes consciousness.

But this is a somewhat expected reaction, I guess, so that’s just fine.

I liked the schematics that SM posted earlier, so I thought that I would have a go with my own. I wish that I had posted them earlier, since it’s easier to diagram this argument than to put it into words (although doing the diagrams in HTML was a bit of an aggravation). This is how I view this whole business of POV in the various experiments we’ve discussed.
+
+
KEY to SCHEMATICS:
Similar numerals: the same being with a single POV, who feels the experience of being shifted forward anywhere along the repeating chain.
Dissimilar numerals: a unique POV; a different person.
Bifurcation points: points of birth and/or death.
*Bifurcation birth: the duplicate will feel shifted from before the point of bifurcation (numerals to the left). He will be convinced that he is the person(s) to the left, and in a sense, he really is.
*Bifurcation death: the numerical chain ends. No shifting forward.
+
+
You should not agree to any scenario that leads to your numerical chain ending because it will kill you.
+
No paradox exists in these models (and, I believe they are time-invariant).
+
I believe that replacing all the digits in the schematics with the number “1” equates to SM’s and MT’s position. In my opinion, that constitutes a paradox.


Mitosis:
..........................2
.......................2
....................2
1 1 1 1 1 1 1 1 1 1
....................3
.......................3
..........................3



Duplication:
..........................2
.......................2
....................2
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
....................3
.......................3
..........................3



Mitosis > Merge
..........................2..2
.......................2..........2
....................2................2
1 1 1 1 1 1 1 1 1 1...................4 4 4 4 4 4
....................3................3
.......................3..........3
..........................3..3

The above schematic illustrates how I conceptualize Anssi’s merging scenario. Re-converging adds an interesting element (although I can’t figure out the physical corpse count :smack: ). Duplicate #4 would have total recall and feel shifted from #1, #2 and #3. Cool.
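The numeral-chain schematics can also be expressed as a small program, where each POV is a labelled chain of states and bifurcation starts fresh chains that inherit the pre-split history. This encoding is my own toy rendering of the diagrams above, nothing more:

```python
# Toy encoding of the POV-chain schematics: a person is a labelled chain
# of moments; mitosis/duplication branches new chains with fresh labels
# ("dissimilar numerals = different person") that share the old history.

_next_label = 1

def new_person(history=()):
    global _next_label
    label = _next_label
    _next_label += 1
    return {"label": label, "history": list(history)}

def live(person, moment):
    # One more numeral along the same chain: same label, same POV.
    person["history"].append(moment)

def bifurcate(person, n=2):
    # Bifurcation point: n new chains branch off the shared history.
    return [new_person(person["history"]) for _ in range(n)]

me = new_person()            # chain "1"
for t in range(10):
    live(me, t)

left, right = bifurcate(me)  # mitosis: chains "2" and "3"
assert left["label"] != right["label"] != me["label"]   # three POVs
assert left["history"] == right["history"] == me["history"]  # shared past
```

Both branches remember the full pre-split history (so each is convinced it is the person to the left of the bifurcation), yet each carries a distinct label, matching the schematic's claim of a new POV at every branch.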


Transporter A
1111111111>>>>2222222222
+
Transporter B
1111111111>>>>1111111111



The transporter model (shown above) does add an interesting dimension to the thought experiment. Not being a Star Trek fanatic, I’m not sure how the machine is actually meant to work. If it disassembles the person in the departing pod, then reassembles him in the arriving pod with new particles (view Transporter A), then #1 dies and is not shifted into #2 (although #2 feels shifted from #1). If #1 shifted forward into #2, this would be a paradox.
However…
If the particles are disassembled in the departure pod and those same particles are transported to the arrival pod, such that the reassembly takes place with the exact same particles in the exact same configuration, perhaps the person in the departing pod will not die – he may shift (view Transporter B). In this situation, only one person will be around at any one time, therefore no paradox will exist (no multiple POVs). So, theoretically, the departing person may not have to die, but must he necessarily live? My guess is no; however, it depends on the level at which disassembly takes place. Two extremes:

  1. Disassemble down to the level of fundamental particles.
  2. Place the person on ice, cut him in half, carry the pieces to the arrival pod, join the halves together using nano-surgical techniques.

I believe that #1 is too traumatic an event for the original consciousness (POV) to survive, but #2 will result in survival (shift forward). So, between these two extremes, perhaps a viable transporter model exists that does not kill (e.g. disassembling the biological cells and vacuum/suctioning them to the arrival pod). The problem: I don’t believe there is any possible way to tell whether the transporter was a safe and fun mode of transportation or a veritable killing machine. The guy in the arrival pod will always say it was safe, but the guy in the departing pod may soon be a little too dead to tell you the truth. Any volunteers?

 
1 2 3 4 5 6 7 8 9 (…Brain Death)


My re-interpretation of the changing brain problem:
It was brought to my attention by SM and MT that you are a different person today than you will be at any point in the future, or were in the past (due to the impermanent nature of particles in life). Therefore, if a person feels the experience of being shifted in his own brain over time, then he should also feel the experience of being shifted into his duplicate(s). In an extreme example, let’s say that all of the original atoms in your brain get replaced in 24 hours. Their argument (paraphrased) is: the original brain in 24 hours is as different to the original brain today as a duplicate brain is to the original brain, ergo, what’s good for the goose is good for the gander. Since we know that we feel the experience of shifting forward in our own brains over time, we must also feel shifted into our duplicate, unless a valid difference exists between the two.

Believing that shifting forward into a duplicate constituted a paradox, a valid difference must exist to resolve the paradox. I satisfied myself (but no one else) that there was enough permanence (cellularly and atomically) in the brain over a lifetime to constitute a valid difference from the duplicate brain. (In other words: a particular physical link exists within a brain over time that does not exist between a brain and its duplicate. That link is broken at the point of bifurcation, resulting in birth and/or death.) I still think this is a valid difference, but I now believe that we don’t have to look even that far to find one (view the schematic above).

At first, I thought that I had to find permanence in a brain from point #1 to point #9 in order to be assured of the valid physical continuum necessary to shift consciousness forward. Now, I believe that it is necessary only to find permanence between #1-#2, #2-#3… etc. The increments between numerals can be as small as a nanosecond or smaller. It doesn’t really matter what the time increment is; you just have to adjust the number of digits accordingly. String enough nanoseconds together and eventually you will equal a lifetime. The link between #1-#2, #2-#3… is greater than any link between the original brain and the duplicate.
If I have misrepresented anyone’s assertions, let me know. If you disagree with any or all of the above, let’s discuss it. If you think of some more mind-boggling offshoots or side arguments, post them.
Mr. Tibbs

I disagree with your basic - as yet unsupported - assumptions and assertions*. Your drafted-in expert guest (who is to be welcomed - nice to meet you, Anssi!) doesn’t appear to be singing from your hymn-sheet.

*That there is any difference at all between being Mangetout and being someone who is utterly convinced he’s Mangetout
*That the original arrangement of original atoms matters more than an identical arrangement of identical atoms.
*That personality/identity somehow becomes mystically embossed upon certain specific atoms.
*That there is any paradox here at all; perhaps you should look up the term.
*That emotional judgments of which experiments one might personally be prepared to try are in any way objectively useful.

I’m not sure I can be bothered to argue with you any more.