Mind transfer question: preservation of personal identity vs. divergence between versions

I’m sure other people have thought of this dilemma before, but it is the unique abilities of the planarian flatworm and the mind uploading concept that got me musing about this particular personal identity problem.

The planarian flatworm is an exceptional living being: if you break it into smithereens, each tiny bit can reform the entire body of the original worm. A fragment can be as little as 1/300th of the planarian’s initial body size. A study published in the July 2013 issue of the Journal of Experimental Biology showed that once a fragment regenerates a new head, the new worm can retain at least some of the memories learned by the original animal.

Mind transfer refers to the hypothetical process of scanning a brain’s mental state and copying it into a computer. Once the transfer is complete, the computer would contain a conscious mind with the same long-term memories and sense of self as the original person, able to function in the same way as the original brain.

My question is: what happens if the same mind is uploaded into many different machines? I mean in the long run, over time. Will every computer bearing the same uploaded mind continue to show the personal identity of the original mind, or will each develop a new personal identity of its own?

As far as I can tell, we won’t know the answer to this question until we try it.

Personally, I would expect them to wind up like identical twins: they share the same set of base traits, but experience, both mental and physical, would shape them into separate entities. If you were to take a copy of yourself and throw it into a torturous dungeon of horrors for a decade, it would probably come out a messed-up individual, similar to you, but distinctly different.

On a related note, if you like sci-fi, this recent bestseller deals with this topic: We Are Legion (We Are Bob). Not hard sci-fi, but it was an amusing read.

Echoes of the Star Trek transporter debate…

I’ll say they’re all “really” Jack Jones. If the copies are identical…then the “person” or “identity” or “self” is identical also. If you can’t tell them apart, they’re “the same.”

Now, what’s fun is that they’ll diverge over time – until, after twenty years, they might hold some very serious differences of opinion. But the same might well be true for us singletons: isn’t it at least possible that, if you were to rewind the reel of your life, and live it over again, you’d have some different opinions than you do today?

If so…are you still “the same person?”

I enjoy sci-fi, indeed, but I’m not a heavy consumer anymore. “We Are Legion” sounds like a fun piece of escapism, perhaps (at least in part) because it is so fantastic that it verges on the miraculous.

To access this forum board, we operate computers, that is, machines able to store and process information to create new information. These machines lack any personal identity or self-awareness. The software running on our computers is essentially the same from machine to machine and does not develop into something else over time.

Even if the processing speed and memory of computers increase in the future, they will remain machines able to store and process information to create new information, but nothing more: no personal identity and no self-awareness. I think these are features that stem from more than just what computers consist of. But I may be wrong, and I’m curious how knowledgeable people envision mind uploading as a practical procedure, not as a science-fiction cliché.

Man, you lost me at the nasty ‘worms’.

We (the collective forum “we”) have had similar conversations before, and my position is always that “identity” and “self-awareness” are emergent properties of the right kind of structured complexity and can exist as easily in silicon and software as they can in the biological brain. Some disagree, although there’s little disagreement about the fundamental nature of these kinds of emergent properties, and the software-equivalence argument is a central precept in the computational theory of mind in cognitive science. The dissenters in this argument, like Hubert Dreyfus and John Searle, tend to have extremely limited and myopic views on the issue. Searle’s “Chinese room” argument seems to me a pretty trite fallacy, and Dreyfus was the guy who claimed decades ago that computers were inherently incapable of playing anything better than mediocre chess – until one of the early chess programs beat him decisively back in the 1960s.

Regarding the issue of divergence between versions, if I understand your question correctly, I think you have to distinguish between state, process, and environment – that is, between memory, brain function, and the entity’s ongoing experiences – to determine whether and how much the different versions will diverge. The brain upload itself is just a snapshot. It’s as if I gave you the hard drive out of my computer: you could theoretically recreate essentially the same functional computer I have here, and depending on circumstances it might follow a similar future evolutionary path. But you would have to support that disk snapshot with the appropriate processing and environment, or you’d have something non-equivalent, or more likely nothing at all. Now, whether a computer with an uploaded image of a human brain could continue to evolve as a human is a more nuanced question, but I don’t see a fundamental reason why not, provided it was offered the equivalent senses and experiences, whether real or simulated.
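To make that state/process/environment distinction concrete, here is a toy sketch in Python (everything in it is invented for illustration – it makes no claim about how real brains or uploads would work): two copies start from the same snapshot and run the same update rule, but different input streams drive them apart immediately.

import copy

# State: a snapshot of "memory" at upload time.
snapshot = {"memories": ["childhood", "first job"], "mood": 0.0}

# Process: the same update rule runs on every copy.
def live_one_day(state, experience, valence):
    """Fold one day's experience into the state."""
    state["memories"].append(experience)
    state["mood"] += valence

# Environment: each instance gets its own stream of experiences.
copy_a = copy.deepcopy(snapshot)
copy_b = copy.deepcopy(snapshot)

live_one_day(copy_a, "won a chess game", +0.5)
live_one_day(copy_b, "a decade in the dungeon", -9.0)

print(copy_a == copy_b)  # False: identical snapshots, divergent histories

Same snapshot, same process; only the inputs differ, and the states diverge from the very first “day.”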

Nasty? They’re miracles of nature. When I see a machine replicate what they can do, I’ll almost believe in mind uploading.

I can see two approaches to the issue. The skeptical approach doubts that a man-made machine can show traits that only living things manifest, such as will. (A computer chip’s rigidity prevents it from ever matching the versatility and complexity of a neuron.) The optimistic approach holds that people will be able to manufacture sentient computers and transfer into them all the information existing in a human being’s brain (once they have figured out how to do it). But I wonder how that can work, because it will be quite problematic (if not impossible) to upload a person’s mind onto a computer already endowed with identity and self-awareness.

Yes, but there’s a twist here: a computer able to house someone’s personal identity and be aware of it must already have a personality of its own. How do you upload a human mind onto a machine already endowed with self-awareness?

The SF novel Voyager in Night by C.J. Cherryh explored this concept way back in 1984.

Different virtual ‘copies’ of the same individual coexisting, ‘backups’ of consciousness at different points of time with different memories and experiences, etc.

It reminds me that (way) back in university, a fellow student of mine was shocked when I told him that the works of fiction he kept quoting could not count as arguments for the existence of god. He kept ignoring the contradictions I pointed out in his argument by quoting still more fictional stories. One of the most fruitless conversations I have ever had.

Right, and where the skeptics’ argument goes off the rails is in making the unwarranted assumption that there is something fundamentally unique and magical in the most basic low-level switching elements. In fact, profound properties emerge from the way billions of them are interconnected; the elements themselves don’t matter in any qualitative way, because fundamentally all information processing is computational.

It’s not endowed with anything until it has the necessary information base and processing algorithms. Think of a fully functional adult human brain with absolutely no memories, no learning, no experience of anything. Just an absolute blank. What “identity” or “self-awareness” would it have? It would have nothing beyond low-level reflexive keep-alive functions like breathing.

?? I’m not totally sure I follow you here. Couldn’t the machine be wholly dedicated to the one task, without any extra processing power to operate a personality of its own? That was what I thought was being asked: we make a “brain emulating” machine and initialize it with data from someone’s actual brain/mind.

Now…there’s no reason your idea can’t be done also. It adds complexity, and it certainly makes me less willing to be dogmatic. I guess it all depends on how many tests the system passes. Do friends and family say, “Yes, that’s Jack, all right, don’t you think I know my own brother?” Or do they say, “I dunno, there’s something wrong, although I can’t put my finger on it…?”

If the system passes every test, at least it’s a lot harder for anyone to aver, emphatically, “No, that is not really Jack.” If all they have is a philosophical objection – well, great: we have a difference of opinion that is (essentially) faith-based. (Like the Star Trek transporter problem: it depends on people using language differently. One guy’s use of the word “identical” is not exactly identical to another guy’s use of the word!)

That is like saying that a computer that is able to run Grand Theft Auto must already have a copy of Grand Theft Auto inside it somewhere before you upload it. It doesn’t.

The different instances will start to diverge the instant they are downloaded. How could they possibly do anything else? They will have different experiences and encounter different environments. After a year or two they will each be entirely different people.

If your original question was a matter of wild fantasy and speculation - uploading minds to computers - then explorations of the question in fiction seem to be entirely appropriate. It’s not like we’re talking about actual facts or real possibilities.

I agree. After all, identical twins (usually) start with the same DNA and the same maternal environment, but eventually develop different personalities as a result of accumulating external environmental effects; the same would happen with duplicates, I expect.

One comment that is crucially important: your brain is already just a legion of independent components working together, communicating between components at a couple thousand pulses a second (glacially slow – digital electronics are much faster).

So this means it is possible, in an information sense, to have an invasive neural implant in your brain, connecting it to a machine that emulates additional brain tissue.

So if the implant were surgically expanded in a series of hundreds of steps, it would be possible to convert your consciousness to a fully digital being without any “break” in the chain.

Basically, imagine you go in for surgery 1. One percent of your brain is destroyed in the process, and this very high-bandwidth implant is installed.

You have a little trouble, missing 1% of your brain, but the implant acts as a prosthesis. It is connected to a cloud of computers. Soon, you relearn the functions that were destroyed in the surgery, and realistically you are now more capable than you were originally. You go in for surgery 2. The same argument applies there.

So if you went through 100 individual surgeries, near the end your body is so damaged from all the trauma that you’re permanently wired into life-support systems, and your body is probably dying away, but your mind is as capable as it ever was.

Realistically, being digital, you’d probably be more capable than you are now. I suspect your memories would be much clearer and more exact, and you’d of course think far faster. Especially once the final step is taken and you’re completely disconnected from your original organic self. I can’t imagine what it would feel like, but gaining the capability to do almost anything, go almost anywhere, understand anything, be more skilled at any task than any human on earth…

Note to the above: this hypothetical procedure would work better if the surgeons, who are of course using robotics, not human hands, were to carefully slice off pieces of your brain as each implant is installed. Those pieces are flash-frozen, taken to a lab, and the actual neural connections are deciphered using automated scanning equipment. The prosthesis – the neural implant – is attached to a digital machine loaded with the pattern taken from that part of your brain. So when you wake up, only a tiny amount of information has been lost, and your thoughts flow much as they did before, since the signals coming from the part where the implant is installed are roughly the same as they were.

Quite surprisingly, this technology was demoed in rats over 10 years ago.
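For what it’s worth, the arithmetic of the staged replacement is easy to sketch as a toy model in Python (the step count and percentages are invented for illustration, not a claim about real neurosurgery):

# Toy model of the staged-replacement procedure described above.
TOTAL_STEPS = 100  # one surgery per step, replacing 1% each time

biological_pct = 100  # share of the mind running on neurons
digital_pct = 0       # share running on the implant/cloud

for step in range(TOTAL_STEPS):
    # Each surgery: slice out ~1% of the brain, scan it, and load its
    # pattern into the prosthesis before the patient wakes up.
    biological_pct -= 1
    digital_pct += 1
    # At no single step is there a discontinuity in function: the
    # signals crossing the implant boundary stay roughly the same.

print(f"biological: {biological_pct}%, digital: {digital_pct}%")
# -> biological: 0%, digital: 100%

No single surgery replaces more than 1%, yet after the last step the mind runs entirely on the digital substrate, with no single moment at which the original person stopped and a copy began.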

And you know this … how? We can deem certain things impossible based on our theoretical understanding, but this appears to be fundamentally a matter of information and its processing, nothing more – unless you’re proposing that the mind involves magic, souls, daemons, etc. Advocates of transhumanism like Ray Kurzweil believe it will be possible within decades, though he may be optimistic. Others think it will take longer but that it will likely be possible before the end of the century. What is it about uploading the mind that you believe to be inconsistent with our understanding of physics, in a scientifically supportable way?