Can I use cryonics to turn myself into a digital being?

Well of course not yet…

Let’s say for the sake of discussion that I am expecting amazing breakthroughs in computer processing power and storage. Not just something like Moore’s law, but quantum computing, etc.

So, I want to freeze my brain like cryonics patients do, but instead of being revived as an organism, I want to come back as the Lawnmower Man. (For those who haven’t seen the book/movie - I want to come back as a digital representation of my former self).

Maybe I’d be in a future version of the internet, or maybe I could be built into a robotic humanoid. How about Bender or the “I, Robot” type.

On to the actual OP questions here - Do any cryonics labs offer this? Can I donate myself to science and do this for free? Has this been proposed or discussed anywhere? I’m really not finding any mention of it. Lastly, does it have a name? If not, I’ll propose:

  • Cryonobotology: Study of, of course…
  • Cryonobotics: Name for the field
  • Cryonobot: What you’d call the resulting robot/person

I can’t even count on getting my computer to print something when I ask it to. We are a long way from digital versions of anybody.

What would be the format of the digital version of you? What exact bits would you propose to represent?

While such a thing is likely physically possible, we can’t do it yet.

We don’t have scanning technology good enough to get that kind of detailed information out of a frozen brain, and even if we did we don’t have the computer technology to run a brute force every-last-atom simulation of a brain; we’d have to run a stripped-down model using only the actual brain structures that the mind is based on - and we don’t know what they are yet.

I think that’s what is key here: At best, you’d end up with a copy of you, not actually YOU.

The ultimate quest I think will be finding a way to preserve self when it comes to this sort of thing.

A perfect copy of you, IS you. At least when the copy is made; if two of you exist they will soon diverge. The fact of the matter is that the physical substance of our brain is constantly being replaced, so we’re all “just copies” anyway. The only differences here are that it happens all at once, and that the copy is somewhere other than inside our head.

Identity is a human construct. It’s simpler to talk about the physical entities involved without trying to make them fit into artificial categories like “you.”

Right now, your body (and specifically your brain) is a lump of meat that can process information and that has a sense of consciousness and a sense of identity (you). Under your plan, that lump of meat will die and will no longer be capable of processing information. It will no longer have any sense of consciousness or identity (or anything else). At some point in the future, a lump of metal and plastic will be capable of processing information in a way similar to meat, and will be made to have a sense of consciousness similar to that of the meat that was frozen and to have a sense of identity the same as the meat, to possess the same record of information processed as the meat, and perhaps to believe itself to be the meat. The meat will not experience any of this, or anything else, after being frozen. It is meaningless to say that the lump of metal is or is not “you.” It will be similar to the meat in some ways, it will be different in others. It will not be the meat, and nothing physical will move from the meat to the metal.

The lump of meat now is similar to the lump of meat that existed five minutes ago, but in some ways it is different. If it were not different, it would not be capable of processing information or having a sense of consciousness or identity. Its sense of identity tells the meat that it is the same as the meat of five minutes ago, and in many respects this sense is accurate. In many respects it is not. It’s just a sense, anyway, not a detailed accounting. In general, the meat feels it’s the same as it was, and in general it is the same as it was. The future lump of metal and plastic, because it will be made to have a sense of identity like the sense the meat had, will also feel that it is the same as the meat was. In some respects, it will be right. The metal may be happy about this. The meat will not be happy about this. The meat will not be anything. The meat-of-five-minutes-ago isn’t happy about anything now either. It’s not now; it’s in the past. The meat now can be happy about the meat then, but it can only be now.

Some of this may matter to your meat now. Some of it may matter to your meat in the future. None of it will matter to the meat when it is frozen. Nothing will. Some of this may matter to the metal. Some of this may not matter to anything. When the electrons in the metal stop moving, nothing will matter to the metal either. Someone may make another lump of metal it matters to. Or they may be able to make more meat that it matters to. Or maybe other lumps of other things will process this information in other ways. It won’t matter to me because I’m just a lump of meat, and I’ll be dead.

Bravo!
Superb answer. :cool:

The last sentence belies the rest, though.

All true answers. As for the philosophical debate over what is “me”, I define myself as the information in my brain being processed, along with being self-aware. My cells reproduce themselves every x number of years. Does that mean I’m a copy of myself in x years? No, I’ve just taken a different form. If I get artificial limbs, lungs, heart, etc., I am still myself. My tissues are not ‘me’; my thoughts are.

And of course we cannot come close to doing this yet. But with quantum computing and similar advances in storage, let’s say for the sake of the OP that we can in 1000 years. So I just need my brain frozen until then. It will be much like time travel for me. I’ll wake up in 3078 as a robot. I won’t have to deal with pain or viruses (unless MS Windows is still around). I’ll be able to do many things my original body could not have done.

Philosophically, I do believe this entity would be “me”. And because my original brain would likely be destroyed during the copy, I’d be the only copy of myself.

Something that has been a subject of research for a long time, though probably less so now, is tissue vitrification. Freezing creates ice crystals, which shred frozen tissue from the inside out. This is the current problem with cryonics.

However, tissue vitrification would be a way to freeze tissue into a glass - IOW, cooling it such that ice crystals are unable to form. This was being pursued for the purpose of creating organ banks. Currently a heart or a kidney has to be harvested and transplanted within a very narrow window. Vitrification would allow you to set up organ banks, which would make organ waiting lists a thing of the past.

Unfortunately, AFAIK, this has gone absolutely nowhere. Beyond that, it now looks as if it will be possible to manufacture organs made to order from a patient’s own stem cells. This is far superior to transplanted organs, so I suspect that further vitrification research will probably fall by the wayside.

In short, the probability that you will be able to do this, at least with any reasonable chance of success, seems remote at the present time. Certainly you can be frozen, but the trade-off will be the virtual pulping of every cell in your body.

Will it be possible to one day reverse this damage during or prior to being thawed? IDK. It’s probably foolish to discount any possibility out of hand. Even so, this is one that is difficult to imagine.

I agree with the bulk of your post. But, you must go deeper to overcome the materialist/physicalist argument against a unique sense of self. If elementary particles are not unique and you replicate a brain in the exact same configuration as the original, such that, at least for a moment before divergence, the two brains are exactly alike, then the consciousnesses must be exactly the same, and with self identity piggybacking on consciousness, that too must be the same. The argument follows that the original brain’s sense of self has an equal chance to continue in either or both brains. Ask your average physicalist if he will use a Star Trek type transporter and he will say, “sure.” I would not stake my future in a new brain nor would I get in a transporter, and not necessarily because I’m not a physicalist (I probably am, for the most part).

Bolstering the physicalist position is the fact that the cells of the brain are continually regenerated, and therefore no elementary particles persist throughout a lifetime to which a unique consciousness could be bound and also persist. But is this necessary? I don’t think so.

Is there anything unique about the consciousness of the original brain versus the replicated brain? Yes: although the replicated consciousness is bound to identical particles, they are not the same particles. Even if we accept that all the brain’s particles regenerate over time, they do not all regenerate at the same time. Think of the consciousness as an entity that stays in continual contact with most of the necessary brain particles at all times. Your roof won’t collapse if you replace parts of your walls bit by bit. The original roof remains. Replace all four walls at the same time, however, and the roof will collapse. You can replace it with an identical roof, but it won’t be the same roof. Transferring your consciousness into any receptacle that is not the original will break the unique history of that consciousness’s self-identity and create an identical, but distinct, sense of self (IMHO).
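The walls-and-roof analogy has a neat parallel in how programming languages track object identity versus equality. This is only an illustration of the distinction being drawn, not a claim about brains: replacing a container’s contents piece by piece preserves its identity, while building an identical container from scratch produces a distinct object with the same contents.

```python
# Gradual replacement vs. wholesale copying, via Python object identity.

original = ["wall1", "wall2", "wall3", "wall4"]
same_object = original  # a second reference to the very same object

# Replace every part, one at a time, in place.
for i in range(len(original)):
    original[i] = "new_" + original[i]

print(original is same_object)   # True: every part replaced, same object

# Build an exact copy all at once.
replica = list(original)
print(replica == original)       # True: identical contents
print(replica is original)       # False: a distinct object
```

The `==` check corresponds to “identical, but distinct” in the post: the replica matches in every detail, yet `is` reports it was never the same thing.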

What people always seem to overlook is that everything exists in and interacts with the quantum vacuum and its exact nature is a complete mystery.

So even if you assume that consciousness doesn’t depend on any quantum mechanical effects like coherence, superposition, etc, the fact of the matter is that on the subatomic level, everything interacts with the quantum vacuum to a greater or lesser degree.

Aside from that, you can’t escape the quantum nature of the subatomic constituents of the matter that compose whatever platform will host your consciousness. While these can probably be ignored in the case of something as gross as a modern CPU where even with 22nm tracings the scale is too large to entertain any significant quantum effects, the fact of the matter is that any future device capable of replicating consciousness is likely NOT to be immune.

In fact it may turn out that quantum effects of one type or another are essential to the emergence of sentience.

Since quantum effects are by their nature stochastic or probabilistic, no two systems can ever be identical, even initially; and even were this not the case, divergence would be immediate and complete.

However, this may not necessarily matter. There are any number of mechanisms that might exist to preserve an effective state of consciousness even in the face of low-level changes to the structure. Just as a trivial example, a rod in your retina can theoretically fire upon receiving the energy of just one photon of light. However, you will never be able to detect a single photon. Your retina is built such that a signal is only sent to your brain if multiple rods sharing the same neural connection are caused to fire within the same narrow window.
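That coincidence-detection idea can be sketched in a few lines. The threshold and window values below are made-up illustration parameters, not physiological data; the point is just that single events are filtered out while clustered events get through.

```python
def retina_signal(photon_hits, threshold=3, window_ms=20.0):
    """Send a signal to the 'brain' only if at least `threshold` rods
    on one shared neural connection fire within `window_ms` of each other.

    photon_hits: arrival times (in ms) of photons at rods on that connection.
    """
    hits = sorted(photon_hits)
    # Slide over every run of `threshold` consecutive hits and check its span.
    for i in range(len(hits) - threshold + 1):
        if hits[i + threshold - 1] - hits[i] <= window_ms:
            return True
    return False

print(retina_signal([5.0]))                 # False: a lone photon is ignored
print(retina_signal([5.0, 7.5, 12.0]))      # True: three rods within 20 ms
print(retina_signal([5.0, 100.0, 250.0]))   # False: hits too spread out
```

This is exactly the kind of low-level noise suppression the post describes: the system’s effective behavior is stable even though its finest-grained events are unpredictable.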

Right, “you” are not just your brain. There’s a whole host of finely tuned input/output devices attached to that. Things like hormones can be problematic too. You can’t just hook up a camera and send its feed to the brain program; somebody has to re-encode each frame from that camera into a form that the brain can comprehend. That one sentence really doesn’t do it justice: you effectively have to find a way to encode a camera’s output into whatever slurry of nerve impulses and chemicals your eyes use. And then, on top of that, make sure the virtual nerves and chemicals interact with the correct areas of the digital brain. Same with sound and any other perceptual system. Each sense, individually, is a monumental task to emulate.

“But wait, I don’t care about YOU people. I just want to use my brain to post on message boards.”

Well, that’s great. Now you’ve doubled their work. Now they have to find a way to directly interpret HTML pages into something your brain can comprehend, and interpret your digibrain output into whatever you intend (following a link, a web address, text, whatever). The most straightforward way to do this is… to figure out how to wire up your digital sound/visual/etc. cortex and convert the data into that. So now not only do they have to wire up your senses, they have to figure out a way to convert bizarre data that a real human mind can’t interpret directly into perceptible data.

And further, as deltasigma mentioned, there’s stuff that is just going to be different. At the basic level you can’t get a 1:1 relationship with physical interactions (photons, quantum effects) on the brain and with silicon. Maybe it turns out it doesn’t matter that much, still, good luck simulating the endocrine system in a way that makes you still feel and act like you do now.

There is a potential work-around. If we make an AI that acts more or less like a human, there may be a way to convert data from a detailed body-scan into a format the AI program can operate from. You’ll still probably change a bit, but it might be “close enough”. It’s possible, but I sure as hell wouldn’t want to be one of the first few hundred thousand to try it before the kinks get worked ou ou ou ou ou ou SEGFAULT

Can we agree that only one self-aware version of you can exist at any one time? If you say no, that more than one “me” can exist, then you must be taking the “I die every instant and a new me is born to replace the old one” approach to consciousness. I just don’t buy that philosophy, and you shouldn’t either. It is counter-intuitive, inelegant, unnecessary and doesn’t make the Occam’s Razor cut, in my opinion. I believe it is a placeholder philosophy, like Many Worlds, that just needs deeper knowledge to be replaced. If you accept that approach, you really aren’t saying you can exist in more than one version; you’re saying you have no future anywhere, including your own brain in the next instant. You have as much stake in the you of T+1 second as you do in a random duck, let alone an exact replica of yourself. You can’t have your cake and eat it, too. If you do believe the real “you” can exist in multiple copies, you’ve created a paradox that can’t be resolved.

IDK. I was thinking about this in computer terms. Let’s say that I’m really a brain in a vat. Now let’s say that you split off all of my sensory inputs, wherever those come from (don’t tell me, I want to be surprised). Send those into a Y-switch so I still get all of the inputs I expect, but you also start sending them to an exact copy you’ve made of my brain.

Putting any quantum mechanical issues aside, it’s hard to imagine how the consciousness of the second brain-in-a-vat wouldn’t be identical. What would cause it to be different?
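Setting quantum effects aside as the post does, the Y-switch intuition can be made concrete with a toy model. `ToyBrain` below is an invented stand-in, not a model of real neural dynamics: its entire “state” is a hash chain over everything it has ever sensed, so it is perfectly deterministic. Feed two identical copies the same input stream and their states cannot differ; let the streams diverge and the states split immediately.

```python
import hashlib

class ToyBrain:
    """A trivially deterministic 'brain': its state is a hash chain
    over every percept it has received since the initial snapshot."""

    def __init__(self, initial_state: bytes):
        self.state = initial_state

    def sense(self, percept: bytes) -> None:
        # New state depends only on old state plus the incoming percept.
        self.state = hashlib.sha256(self.state + percept).digest()

# Copy the brain at one instant, then Y-split one input stream to both.
original = ToyBrain(b"snapshot-at-copy-time")
copy = ToyBrain(b"snapshot-at-copy-time")

for percept in [b"light", b"sound", b"touch"]:
    original.sense(percept)
    copy.sense(percept)

print(original.state == copy.state)   # True: same inputs, same state

# The moment the inputs differ, the two diverge for good.
original.sense(b"red")
copy.sense(b"blue")
print(original.state == copy.state)   # False
```

Under this (big) assumption of determinism, there is simply nothing available to make the second vat’s consciousness different - which is the question the post is asking.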

This is one of those things where we frequently get hung up on trying to fit things into binary choices when it probably works better to have multiple options. I’m not sure we need to be binary with “can there be only one me?”.

In this example, I would say, it’s physically two different brains that probably have the same state (generally).

Trying to use one simple label to either describe or not describe a more complex scenario is more of a language/label problem than an understanding of concepts problem.

So are you saying that the scenario does not present a case worth considering, one that can’t be considered, or one that presents merely a semantic conundrum? Or none of the above? I’m sorry, but I can’t tell.