BMIs, Replicants, and the Individual

Hi Dopers! Long time lurker here finally deciding to add something (hopefully something interesting) to this community.

I imagine that sometime in the (near) future, people will begin to get more brain-machine interfaces (BMIs). I also imagine that these BMIs will eventually become seamlessly integrated into the people who use them. In addition, I believe (and wish you to take it as a given for this argument) that we will eventually understand, computationally, exactly how the mind works.

Now imagine that one can make a machine version of me; since we completely understand how the mind works, we can replicate in machinery a being that is, in effect, me. But in some ways it’s not: I would still care if the biological me died even if the machine version lived on, for in my view, the real me, the me writing this, would no longer exist.

But what if, instead, we just replaced a piece of my brain matter with a BMI, and integrated the removed hunk of brain matter with a larger machine that was a replica of me? I imagine I would be the me that is still largely biological. But what if we cut me in half and attached each half to a machine version of me? Which would be the real me?

I guess the real question is: what part of me – in the brain, no soul stuff, etc. – makes the me I experience as an individual? And how would it – if at all – be degraded by splicing parts of me into replicated versions of me?

There isn’t an answer to this question (I don’t think), which is why I placed this in Great Debates. Thanks in advance! :)

I would say the you that typed that post is an illusion created each day when you wake. In the first long second of consciousness, there is a moment when a blank you doesn’t know who or where it is. Memories quickly flood in and give you the impression that the being known as A.Selene has continuity of existence.

When in fact, every night when you go to sleep, you’re obliterated and filed away, and your body conjures up a new version in the morning.

If that’s the case, then either body is the real you. As real, anyway, as the A.Selene one second after waking up. That said, it’s understandable to have a *nostalgic* love for your body, since after all it is the machine you formed all those memories in.

As far as we know, consciousness is a product of the processes in our brains. If you somehow duplicate the exact state of your brain (which might well be physically or technically impossible), you duplicate yourself. From a scientific standpoint there is no meta-you, no soul or whatever. Every copy would be you. They would probably diverge immediately as they have different experiences, though.

[hijack]

I foresee that giving rise to major religious schisms.

[/hijack]

^^This.

Although it is important to note that if you copied your mind into something else, each entity, while essentially “you”, would be distinct.

For instance, if I copied my consciousness into a robot (or replicant), I would still be very concerned about my continued existence. In no way would I all of a sudden be cool with getting killed because another “me” existed. The other me is now a separate consciousness (as noted above, experiences would begin to diverge immediately and become more and more distinct as time passes). A loose example: if you had an identical twin brother or sister, would you consider them “you”? The analogy is not perfect, but I think the essence of the notion is there: you would not consider the twin “you” no matter how closely they resembled you.

And yeah, as also noted above, there would be profound religious questions to all of this.

No system can ever fully understand itself. It’s just barely plausible that we’ll someday be able to understand the full workings of a dog’s brain, or that a suitably augmented human will be able to understand how an unaugmented human works. But even then, the augmented human would still not be able to understand the full workings of an equally-augmented human mind.

Elaborate, please?

How big of a picture are you thinking of here? In our lifetimes? Very doubtful. Ditto our grandkids’ lifetimes, probably. But say we (as a species) survive another 250,000 years. Anything’s possible.

Even if that were true, no one person would have to understand the whole thing. Neurologists could understand all the processes without knowing every detail – just the way they mesh together. Specialists would know one section or another in fine detail.

No one human knows every street, alley, and cul-de-sac in Manhattan, but we have maps of it. And given time, we could easily reconstruct it on a computer.

I’m not convinced that there is even any way to break up a mind into mostly-discrete subsystems. But to truly understand a mind is to know how it will respond to any given input, which would require understanding all of the subsystems (if there be such) and their interactions. And if you have any system that can reach that level of understanding of itself, you pretty quickly run into paradoxes.

What paradoxes would those be? Admittedly I am not all that educated in these matters, but the only argument I have ever heard was one of optimism about understanding the brain: it might take years – probably well past any of our lifetimes – but I fail to see why we won’t eventually understand the human brain as well as any other system.

I agree with Chronos. The human mind is a unified whole. When we are conscious, we experience one stream of consciousness, which defines our mind. The mind cannot be separated into distinct parts. For example, some epileptics have had half their brain removed surgically, yet they certainly don’t have half their mind removed. Their entire mind is still there. And then there’s the case of the French guy with almost no brain who was still capable of normal mental function. You might also look into research by neuroscientists such as Wilder Penfield, who have provided evidence of a second level of mental existence beyond the physical.

Balls they do.

If it’s not physical, they’re not measuring it. If they’re not measuring it, it’s not evidence.

Bolding mine – I have to assume that you’re alluding to issues of logical self-reference (e.g., Russell’s barber, Gödel’s incompleteness, etc.). But I’d point out that “understanding a mind” is not equivalent to “a mind understanding itself”.

Assuming, of course, that a “mind” is, or can be represented by, a formal system.