If a person was cryo-frozen and then reanimated...

I’ve said no such thing. I am not talking about souls.
What I’m trying to describe to you is the difference between qualitative and numerical identity. Basically, it’s the difference between two entities possessing the same attributes and two entities actually being one and the same (if you have any programming background, it’s similar to the difference between value equality and reference equality).
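For anyone who wants the programming analogy spelled out, here is a minimal Python sketch (my own toy example, with made-up class and field names, not anything from this thread): value equality asks whether two objects have the same attributes; reference equality asks whether they are literally one and the same object.

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    """Toy stand-in for a 'mind': just a bag of attributes."""
    name: str
    memories: list = field(default_factory=list)

original = Mind("Mijin", ["first day of school", "learned to ride a bike"])
duplicate = Mind("Mijin", ["first day of school", "learned to ride a bike"])

print(original == duplicate)  # True  -- value equality: same attributes (qualitative identity)
print(original is duplicate)  # False -- reference equality: two distinct objects (no numerical identity)

# Poking one object does nothing to the other, just like the pin-in-the-clone example below.
duplicate.memories.append("was duplicated in a lab")
print(original.memories == duplicate.memories)  # False -- the two diverge immediately
```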

I think your position here is a common one, and it is that if you could duplicate my brain exactly, that other person is me, such that if I died and my entire brain had been copied elsewhere it’s meaningful to talk of the same “me” waking up in the other body.

This position doesn’t stand up well to philosophical examination though because actually when people talk about “me” they mean a specific instance of consciousness. It’s just that in our current world, this “instance of consciousness”, Mijin, and this particular brain, all mean the same thing.

But the distinction between these things is important in a hypothetical reality where we can duplicate minds.

If I were to make a copy of you but keep you alive, we would not expect the consciousnesses to be linked; if I stick a pin in one entity, the other will not feel pain. There is no more association between the two entities than there is between me and Barack Obama.
When I die, I don’t expect to wake up as Obama, and similarly I would not expect to wake up as my clone. It’s irrelevant that it is identical to me: there are two instances of consciousness here.

And taking up the point about divergence: at what point do my clone and I diverge? If I am kept alive for 1 second, does that now preclude me from waking up in the other body?

Another example: what if I make an imperfect copy of you? Is that you? Does your consciousness partially continue, and what does that actually mean?

The issue here is not about which is the “real” one.
e.g. In a hypothetical where there are 10 clones of Mijin, I might be Clone7. In which case, that is me. I don’t care that I am not the original Mijin.

We’re talking at cross purposes. Can you give us a definition of “me” as you used it in these sentences:

And if my position is “common”, perhaps it is because it is correct. It’s not clear to me that any particular “philosophical examination” has demonstrated otherwise.

I’ve said it several times: a particular instance of consciousness.

I mean common among the general public, who in most cases are not aware of the philosophical arguments. Not common among philosophers.

So do you want to have a stab at responding to some of the points raised?

The only thing in the brain is a bunch of switches. No memory there, no thoughts, no nothing. It is possible to know yourself and where you are at, but it is not in the brain.

Which is what, exactly, if not the result of a particular (if highly complex) network of brain cells?

Well, good for them, but let them distill and present a philosophical argument that has some basis in reality.

Which particular ones? One or two post numbers would help.

I agree. And that’s why it’s irrelevant whether someone makes a duplicate of my brain. I am the result of this particular network of brain cells.

They are not going to come to you, but you should care about whether your opinion is built on ignorance or not.
I recommend you start with the wiki page on personal identity for a brief summary of some of the main points.

Post #21. However, I recommend reading the wiki first so that you are aware of some of the concepts.

Well, I’ve gone over post 21 and the wiki page and as far as I can tell, they assume that which they are trying to prove, i.e. the existence of a “me” which can be independently defined. I don’t think my further participation will be productive.

They don’t know that they don’t know.

Well, it isn’t true that the positions mentioned on that wiki, or my position, assume (or conclude) an immaterial soul.

It’s a fascinating topic, and it’s a shame that you are unwilling to engage in any of the hypotheticals and challenge the model you have of what the mind is.

To summarize briefly again: when we snapshot a computer state and duplicate that state elsewhere, it really doesn’t matter whether the new state is numerically or merely qualitatively identical to the original. All that matters is whether it behaves the same to the external world.
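To make the computer half of that concrete, here is a rough Python sketch (my own illustration, with an invented Counter class standing in for “computer state”): once we snapshot and restore a program’s state, an external caller gets identical behaviour from either copy, and the question of numerical versus merely qualitative identity never needs to come up.

```python
import copy

class Counter:
    """A trivial 'computer state': one integer and one behaviour."""
    def __init__(self, value=0):
        self.value = value

    def step(self):
        self.value += 1
        return self.value

original = Counter()
original.step()
original.step()

# "Snapshot" the state and bring it up elsewhere.
duplicate = copy.deepcopy(original)

# To the external world the two are indistinguishable: same inputs, same outputs.
print(original.step() == duplicate.step())  # True
# Whether they are one object or merely qualitatively identical objects
# makes no observable difference to an outside caller.
print(original is duplicate)                # False
```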

With minds however, since they have subjective states, it matters. Whether a copy of Mijin is one and the same Mijin, or merely qualitatively identical, is literally the difference between life and death for me.

I don’t claim to have the answers on this, and I’m certainly not trying to claim there are souls. Indeed, I have a postgrad degree in neuroscience and have worked for many years in AI: I know brains are machines, and I see nothing magical about what they do (though I am happy to concede we have no model of phenomena such as qualia yet).

If we consider consciousness as an emergent property of the cortical brain that evolved a while back on the biological tree of life, toward the trunk, then I believe self-awareness/personal identity (PI) evolved as a higher order mental process on top of the lower order consciousness more recently on that tree…pretty close to the fruit. The octopus was the first creature to evolve a personal identity. I have no scientific data to back up that claim, but octopuses seem pretty self-assured; oysters? Not so much. But, I digress.

Self-awareness is awareness of awareness. It has a supervenient relationship to consciousness, like a cherry atop a hot fudge sundae, whereby the cherry is the “YOU” who blushes when you fart accidentally in front of Queen Elizabeth II, and the hot fudge sundae is the zombie “you” who could not care less what you do in front of anyone. Remove your cherry and you’re a real boob.

Furthermore, I propose that lower order “consciousness” is nonlocal, while higher order PI is local. By this, I mean, while there may be multiple, identical “you’s” spread throughout the universe, each indistinguishable from the other “you’s”, in the eyes of any non-”you” observer; there may be only one “YOU”, and only “YOU” know that “YOU” are unique. Of course, all the other “you’s” can make that same claim…so, “YOU’re” not all that special—get over “YOUR”self, boob.

YOU are unique in the universe? How can this be? It flies in the face of the materialist mindset. Let’s just remember that materialists are generally hypersensitive panty-wipes that suckle their mother’s teats longer than is psychologically healthy. Let’s instead proceed down the road of common sense. There is only one you; there can be only one you; you are unique. You may be an asshole, but you’re a special asshole.

Let’s illustrate this with computers. I’m not a computer geek, so bear with me. Let’s say Dell achieves strong AI with its AI Inspiron line—not just computers with consciousness, but self-aware to boot (heh). Line up 5 laptops in a row: 2 identical Inspiron AI-1000s, 1 economy Inspiron AI-100, 1 gamer Alienware AI-2000 and 1 Inspiron AI-1000 that was momentarily dropped in the toilet (we’ve all done this with our iPhones, yes?). All 5 computers are loaded with the same OS (Windows 33.1) and software (Adobe Human 1.0) and share the same input devices (video-cam, microphone, touch pad, Taste-eBuds, Smell-eReceptors, etc.). Turn 4 of them on (all except the second Inspiron AI-1000) and let life unfold before them for 10 years. At year 10, clone one of the active AI hard drives with Norton Ghost onto the dormant AI-1000 drive, then boot it up too. Then wait another 10 years. Let’s call them Timmy.

I would expect 2 normal Timmies (one just a little late to the party), one mentally challenged Timmy, one gifted Timmy and one insane Timmy. But, for all intents and purposes, they would all be valid Timmies.

Scenario 1: At year 20, a beautiful, buxom, scantily-clad lab assistant asks the laptops, “I have to destroy all but one of you, do you care which one I choose to live?”

Scenario 2: At year 20, a beautiful, buxom, scantily-clad lab assistant detaches each laptop from its shared input devices, then attaches separate input devices to each, in close proximity to their chassis. She does this while they are in hibernation mode. She then asks each laptop, “I have to destroy all but one of you, do you care which one I choose to live?”

I have no point to this thought experiment, I just thought it would be fun to think about…
…Alright, I do have a point: In scenario 1, does the question even make sense to the laptops with shared input devices? With identical memories and shared perceptions, can one laptop distinguish itself from the others? My guess is that they would all say, “it doesn’t matter.” In scenario 2, with new-found unique and separate input devices, the game changes. My guess is that each one of them would now answer, “choose me.”

Different answers from the same group of consciousnesses, the only difference being that in scenario 2, they were given separate input devices at the last moment.

Now, after the fact, ask the remaining laptop in each scenario whether the beautiful, buxom, scantily-clad lab assistant chose the right one to keep alive. Both will certainly answer, “yes, I’m glad she chose me”…(unless, possibly, they are hetero-females, or gay males).

How can this jibe with the materialist/physicalist view of the universe? Two or more groups of identical elemental particles in identical arrays should produce identical results, correct? From toasters to human beings, replicate the elemental particles exactly, and the replicants are indistinguishable to any outside observer in any way. This must apply to self-awareness too, right? Yes. But are they really the same? Are they the same to the individuals involved?

No. I don’t think so. You can’t literally be in two places at the same time.

Observe two functional brains before you. They are identical in every physical way. To you they are the same and it doesn’t matter which one you choose to be your fishing buddy or your wife…or both. But, to each of them they are different. How?

They are identical, but they are not the same. This one <look to your left> is over here; that one <look to your right> is over there. They occupy separate coordinates in space. They are not exactly the same. I believe lower order consciousness is nonlocal and is inherent to the software, no matter how many copies are made or where it is booted up. I believe higher order self-awareness is unique and is tied to one and only one continuous electro-chemical current. It’s born when booted up for the first time. It sleeps in hibernation mode, but remains intact. Changes and degeneration can occur over time, but as long as the original current continues unabated, the self remains intact. It dies when re-booted, and a remarkably accurate impostor is born to take its place.
Far be it from me to tell you how to live your lives, but heed my warning: don’t get in a Star Trek Transporter; it will kill you.

As far as I can tell, self-awareness is essentially an affect arising out of the survival instinct. Because of that, our sense of self is directly and absolutely tied to our physical presence. There is no self that can be transferred from one person to another because self is literally physical. Your sense of you is a “what” rephrased as “who”.

Personality is a combination of physiology (e.g., I am pretty good at math, I like to chew aspirin, I get off on – well, nevermind about that) and experience. In a very real way, I am not the same person that received my HS diploma on the graduation stage because of the things I have learned and the things that have affected me in the intervening years. But this self is still the same self, and after 600 years of cryosleep on the trip over from Golgafrincham, this will still be the same self, even if some of those other aspects change or some of those memories are lost, because this self is this physical being.

I tend to believe that the self-awareness thing just might not be programmable. It is directly tied to the struggle for survival; you might be able to effect an incredibly convincing simulation of it in a machine, but it is not quite obvious that it would be more than just a simulation.

If we were to build a computer which has true artificial intelligence, a thinking being inside a computer, then copy that computer and all its internal states elsewhere, the personality of that thinking being would exist in two places at once. All that matters is whether it behaves the same to the external world.

Of course the two copies would not share experiences, so they would start to diverge immediately.
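Here is a small sketch of that divergence, under the (big) assumption that an AI’s “personality” can be treated as copyable state; the class and the inputs are invented purely for illustration:

```python
import copy

class AgentState:
    """Toy 'thinking being': its state is just everything it has experienced."""
    def __init__(self):
        self.experiences = []

    def perceive(self, event):
        self.experiences.append(event)

alice = AgentState()
alice.perceive("booted up in the lab")

# Duplicate the internal state elsewhere: the "same personality" now exists in two places.
alice_prime = copy.deepcopy(alice)
print(alice.experiences == alice_prime.experiences)  # True -- identical at the moment of copying

# But the copies sit in different places and receive different inputs,
# so they begin to diverge immediately.
alice.perceive("saw a red light")
alice_prime.perceive("heard a loud noise")
print(alice.experiences == alice_prime.experiences)  # False
```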

Why is a qualitatively identical Mijin different from an actually identical one? Is the concept ‘me’ truly indivisible?

If and when we start building truly intelligent computers, the programs and hardware might be in great demand. Is there a reason why we should not ‘clone’ any particularly competent AI as many times as convenient, and what do you think the AI will experience subjectively in that case? If computer-based artificially intelligent entities can be easily cloned, uploaded and downloaded, and transmitted around the world (or around the Solar System) we would be creating multiple instances of the same personality. Does each new ‘me’ come into existence with no previous life experience, or is it rather just another me in another place?

The main problem I can see with the cloning of personalities in this way is that it reduces the psychological diversity of the population. If there are lots of mind-cloned AIs, or lots of mind-cloned Mijins, what would that mean for diversity and democracy? A political party which created its own supporters might soon have a following of literally like-minded people.

This is an interesting point of view, but when the ‘impostor’ wakes up with exactly the same memories and patterns of behaviour as before, there is no reason at all to call it any different to the original.

In a society which includes some sort of technology that is equivalent to a Star Trek transporter, there might be two distinct classes of people; those who think that transporting kills you, and those who do not. The people who do not think they are killed by transporting would have a lot more opportunities for travel, at much faster speeds.

Fear of transporters would be at least as limiting as a fear of flying is today.

First of all, let me say again that there is a distinction between a computer and a machine.
The brain is undoubtedly a machine, but whether it is a computer, and whether Strong AI is possible, is a deeply disputed topic.

Secondly, even if strong AI is possible, I would not assume that a duplicate computer state is the same entity as the original. As I said in my previous post, since computers don’t currently have subjective states, we don’t care whether they are qualitatively or numerically identical. That’s not the same thing as knowing that they are numerically identical.

Qualitatively identical is identical. It’s just not the same entity.

Let me put it this way. I make a copy of the Mona Lisa. It’s identical to the original. But is it the actual Mona Lisa? No.
That’s not to imply the Mona Lisa has a “soul”, or that the copy is any way inferior to the original. It’s just to say this picture is this picture, and that picture is that picture.

There are lots of moral considerations if we could actually create subjective states.

Heck, I could write a program that experiences the physical pain of pulling teeth and then run it millions of times.

I don’t think many people are aware of how incredible it is that brains can create such subjective states, and what a difference it would make to our world if we understood how they do this.

You are right. It would be a different world if our machines had experiences, qualia and opinions of their own. Imagine getting into an argument with the TV remote about which channel to watch.

I’m pretty sure this will happen, sooner or later - but hopefully later.

Going back to the OP;

You might be interested in this page, written by a ‘cryonicist’, about the on-going war between ‘cryonicists’ and the cryobiologists.
COLD WAR: The Conflict Between Cryonicists and Cryobiologists
Most cryobiologists doubt that we’ll be able to freeze humans and revive them, at least not until our freezing technology is a hundred years (or so) more advanced.

In the meantime, a person frozen today might be thawed at some time in the future, and with fantastically advanced extrapolation technology some small fraction of their personality might be salvageable. Is that a desirable outcome? Would you be content with surviving with 5% of your personality intact?

Except to the original. If you met your replicant would you say, “hello you”, or “hello me”?

I don’t believe anyone would actually travel on the transporter, however. You have the killing departure pod and the birthing arrival pod. When I come out of the arrival pod I’d live only till I go on my next trip…to hell (yeah, I knew I was going to catch heat from looking up Becky’s skirt in 5th grade). Making matters worse, I’d look up to see my look-alike boinking my wife. :mad:

In a society that has access to copying and teleportation technology, those individuals who do not think that they will die if they teleport or copy themselves will soon outnumber those who do. Think of it as evolution in action.


However, it is unreasonable to expect any real system to be 100% accurate; the copies will have errors, so each time you copy or teleport yourself you will introduce random errors into your new incarnation(s). The question is - how much error are you prepared to allow in any new version of yourself? No more than 0.5%, or no more than 95%?
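To put rough numbers on that question, here is a toy Python simulation (mine, with an arbitrary per-bit error rate, not a claim about any real technology): if each copy or teleport flips a small fraction of the bits, fidelity to the original decays generation by generation.

```python
import random

def copy_with_errors(state, error_rate):
    """Copy a sequence of bits, flipping each one with probability error_rate."""
    return [bit ^ 1 if random.random() < error_rate else bit for bit in state]

random.seed(0)
original = [random.randint(0, 1) for _ in range(10_000)]  # crude stand-in for "you"

state = original
error_rate = 0.005  # hypothetical 0.5% per-bit error per teleport
for generation in range(1, 6):
    state = copy_with_errors(state, error_rate)
    fidelity = sum(a == b for a, b in zip(original, state)) / len(original)
    print(f"after teleport {generation}: {fidelity:.1%} identical to the original")
```

Each new incarnation is copied from the previous one, not from the original, so the errors accumulate rather than reset.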

Going back (once again) to the original post - how much random damage are you prepared to put up with if you freeze your brain? Of course, if you prefer to avoid cryonics you can still damage your personality in random ways using alcohol or other drugs, or by other deleterious behaviours (such as getting old).

Maybe it is better to mess up your mind while you are awake, having fun, instead of doing it all at once in a freezer or a faulty teleport machine.

But that is only if your teleporter is the synthesizer type. If you are using a space-folder, there becomes here and you are never deconstructed or reassembled. Not sure how well a folder would work across the surface of a planet with the transverse gravitational gradient, but if you are just crossing the planet, I would tend to favor the Puppeteers’ “stepping stones” described in Ringworld Engineers.

And anyway, day-to-day life introduces errors; I’m not sure teleportation would be that big a deal.

Well, as I pointed out upthread, the whole concept of copying errors illustrates the problem with the notion that the other person is you.

Presumably, you do not expect when you die to wake up as some other random human, say, Prince William. So, if our copying machine made so many errors in duplicating your brain that it instead made Prince William’s brain, then, for consistency’s sake, that cannot be you at all; you did not survive that copying process.

But there is a problem now, because a line must be drawn between the level of error at which I survive the process, but in some modified form, and the level at which the new brain is a new person.
And note that it is no good to say that it is a sliding scale: the level of “Mijin-ness” is something that can have a sliding scale. But whether I am experiencing something, or do not exist at all, is binary.

I think it’s simpler to say that error or no error, it’s simply a new person.