That’s really the only reason I bother reading threads like this: to watch each side accuse the other of being the ones who believe in magic/the soul/god/whatever.
How about people like me? I don’t think the copy will be the same “me” that’s sitting here typing, but I’d undergo the procedure because I know the copy will enjoy everlasting(ish) life, just as I know I’d enjoy it if it were possible to live forever.
It’s akin to people wanting to have kids so that “some part of them lives on”. In fact, I expect that will be the most likely outcome if this ever really happens: legally, the copy will be my child, and stands to inherit my estate, as any biological child would. This also explains why it would usually be done at the end of life. I wouldn’t want to give that brat half my stuff right now, but if I were near to death, yeah, why not? It all goes to taxes otherwise.
Consciousness, personality, identity, self, call it what you will, is a chemical and slightly electrical phenomenon that does not translate into the digital world of IF>THEN. There could be, and already are, algorithms that can mimic human responses, but they are not actual thought. Not actual Self. How would you translate chemical action into a digital, storable personality?
Even our memories may not be what actually happened. Each time a memory is recalled, it is rewritten one more time. That favorite green truck toy you had when you were 5 years old may not have been green at all. You have remembered it as green so many times since you were 5 that, for all practical purposes, it is now green, though it may never have been. Unless we are postulating an organic, chemical, emotional storage system, there is no way to store a human mind.
The storage and response systems are completely different.
The problem with “consciousness” is, how could you tell? Do the memories go with it?
Also, if you can do this, then, theoretically, you can make a copy of it; would both of “you” think you are the “real” you? (Never mind “Schizoid Man”; how about “Second Chances”? They’re both “the” William Riker.)
I figure this is in the same category of topics as self-driving cars. There are people working on it as we speak, and there are folks who will argue that it can or cannot work for some reason.
But the argument is kind of pointless. If it can’t possibly work, then no one will ever do it, no matter how much effort they put into it. But, if it can work, and we eventually figure it out, then the answers to all these “Is it me, or a copy?” type questions will probably become quite obvious.
This is the foundational premise of the Bobiverse series of books by Dennis E. Taylor: the first in the series is We Are Legion (We Are Bob).
Very good read as a hard sci-fi series. Bob keeps making copies of his own consciousness to create a supply of individuals needed to perform various functions. Each new Bob has a separate identity, but shares a history with the other Bobs up to the moment he was created.
This is more-or-less exactly what I think. Even though the copy would be physically different from me, and would (no doubt) have many minor differences in data, it might be the closest I could get to immortality, unless a better alternative came along.
With a sufficiently powerful computer system one could theoretically emulate reality at the molecular level, allowing a physical brain to be emulated with perfectly replicated behavior and functionality. And it probably wouldn’t even take that much - much of the physical body’s function is tangential or irrelevant to cognition and could be simplified out without impacting the accuracy of the emulation.
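The simplification point above — that much of the physical detail can be abstracted away without losing the behavior you care about — is how computational neuroscience already works. A leaky integrate-and-fire neuron, a textbook coarse-grained model, reproduces spiking behavior without simulating a single molecule. A minimal Python sketch (the parameters are illustrative conventions, not anything from this thread):

```python
# Leaky integrate-and-fire neuron: a standard coarse-grained model that
# abstracts away all molecular detail yet still reproduces spiking.
def simulate_lif(current, steps=1000, dt=0.1, tau=10.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the number of spikes fired under a constant input current."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v + current) / tau  # membrane leaks toward rest, driven by input
        if v >= v_thresh:               # threshold crossed: emit a spike and reset
            v = v_reset
            spikes += 1
    return spikes

# Stronger input current -> more spikes; weak input -> none at all.
print(simulate_lif(2.0), simulate_lif(1.2), simulate_lif(0.5))
```

The design choice is the one the post describes: everything tangential to the behavior of interest (ion channels, metabolism, individual molecules) is simplified out, and only the input/output dynamics are kept.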
I don’t see how having it happen in front of you would make anything more obvious - unless the soul people are right and all clones turn up dead or something. In the materialist view the only difference between a person and a copy is that one would have continuity of existence and the other wouldn’t - though the one that didn’t would think that it did have continuity of existence thanks to its inherited memory and would only notice a discontinuity of location.
It seems obvious to me that continuity of existence is part of identity, so a copy of you isn’t you, by virtue of the fact that you’ve been over here the whole time and they haven’t been. However, in a world of Star Trek transporters, where the original of you doesn’t hang around to dispute the copy’s claim to your identity, there’s no reason a copy can’t step into your shoes and carry on where you left off.
IMO, there is no way to do this other than a complete brain transplant (which will be affected by age).
If it’s done by any computerized method it will be a copy, not a transfer. Meaning you die, and there’s a robot that thinks it’s you, but you’re still dead: lights out.
This is an example of Mijin’s statement that both sides accuse each other of believing in souls. If there is anything non-computational in the human mind, what is that something? A soul? Something else? Perhaps we could call it wibble. So a human brain is a computer with wibble. How do you know that we can’t make wibble and add it to the uploaded computational representation of a human mind?
It eludes me why anybody would even want their minds to be “non-computational” - doesn’t that just mean that it doesn’t work in a rational or coherent manner? That it’s totally random? My thoughts happen for reasons, thanks very much. And even if brains do include some small amount of randomness, computers can simulate randomness, so no problems there. Whatever a wibble does, however a wibble works, it works somehow, and that “somehow” is a process, and that process can be imitated and simulated. It doesn’t matter if the wibble is material or supernatural; that’s still the case.
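The claim that computers can simulate randomness is easy to demonstrate: a seeded pseudorandom generator is fully deterministic, yet its output passes simple statistical checks. A minimal Python sketch (the seed values are arbitrary):

```python
import random

# Two generators with the same seed produce identical "random" sequences:
# the randomness is simulated, not genuine, yet statistically convincing.
a = random.Random(123)
b = random.Random(123)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# The simulated draws still look uniform: the mean of many draws on
# [0, 1) lands near 0.5.
sample = [a.random() for _ in range(10_000)]
mean = sum(sample) / len(sample)
assert abs(mean - 0.5) < 0.02
```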
So yeah, brains can be emulated, given sufficient understanding of how they work and sufficient processor power. That’s still a pretty weaksauce approach to immortality, though, because the person being copied isn’t going to live any longer as a result. They’ll still age and die, and experience aging and dying. Their copy may be off having fun in a virtual amusement park forever, but that’s not going to help them any.
Well, I gave an argument demonstrating that computation is subjective, and hence, only fixed by interpreting a certain system as computing a certain function. If whatever does this interpreting is itself computational, then its computation needs another interpretive agency to be fixed, and so on, in an infinite regress; hence, whatever fixes computation can’t itself be computational.
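The interpretation-dependence claim above can be made concrete: the very same physical transition table counts as computing one Boolean function under one labeling of its states, and a different function under another. A small illustrative sketch (the two-state “device” is hypothetical, invented for this example):

```python
# A hypothetical physical device with two states, "hi" and "lo",
# and a fixed rule mapping each pair of input states to an output state.
transitions = {
    ("hi", "hi"): "hi",
    ("hi", "lo"): "lo",
    ("lo", "hi"): "lo",
    ("lo", "lo"): "hi",
}

def interpret(labeling):
    """Read the device's behavior as a Boolean function under a state labeling."""
    return {(labeling[a], labeling[b]): labeling[out]
            for (a, b), out in transitions.items()}

# Under the labeling hi=1, lo=0 the device computes XNOR;
# under the opposite labeling hi=0, lo=1 it computes XOR.
as_xnor = interpret({"hi": 1, "lo": 0})
as_xor  = interpret({"hi": 0, "lo": 1})
assert as_xnor == {(x, y): int(x == y) for x in (0, 1) for y in (0, 1)}
assert as_xor  == {(x, y): x ^ y       for x in (0, 1) for y in (0, 1)}
```

Nothing about the device itself changes between the two readings; which function it “computes” is fixed only by the interpreter’s choice of labeling, which is the point of the argument.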
And there’s no need for souls, or anything like that; anything non-material or non-physical. Computation is really concerned with structural properties: we can simulate something because we can instantiate the right sort of structural relationships within a computer. But relations imply something to bear them, something that actually stands in these relations, and that doesn’t carry over to the simulation. After all, that’s what makes simulations so useful: if they replicated every property of the thing simulated, they’d just be copies. A simulated tree and a tree aren’t the same thing, and neither are a simulated mind and a mind.
It’s also a piece of foundational technology in the Takeshi Kovacs novels by Richard K. Morgan (“Altered Carbon” is the most famous one). Basically everyone’s got a ‘stack’ that records their conscious mind/memories in real-time. So if someone’s killed, dies, etc… that data can be downloaded into another body (a ‘sleeve’ in book parlance). From their perspective, there’s a certain level of discontinuity, in that they go from being killed to becoming aware in a different body, not even necessarily the same gender as they started with. They can also be backed up, much like computers today, in case their stack itself is destroyed somehow. In that case, their mind info can be downloaded into a new stack in a different body and they’re back, minus the time between the last backup and whenever they died.
What makes simulations so useful is that they can be created for free and it doesn’t matter how many times they crash/fail/explode as a result. Also being digital means you don’t have a giant pile of crashed/failed/exploded things left lying around that you have to dispose of.
Inaccuracy of behavior or functionality, on the other hand, is not a valued aspect of a simulation, and it’s bizarre to hear somebody say otherwise.
Huh? No, of course not. Causality is a perfectly distinct notion from computation, and what’s not computable isn’t therefore unreasonable. That’d be like saying that only what’s written down has a logical structure, but not that which the written text describes. Indeed, computation merely replicates the logical structure of whatever it implements, so this is just kinda backwards.
Anyway, what I want or don’t want really has no bearing on the issue of what I have grounds to believe is true.
Ok, so I guess you actually agree with my point, and are demonstrating the interpretation-dependent nature of symbolic reference by example. In that case, thanks!