I'd like to merge my brain into a robot body. How long do I need to live?

This is a very, very hard problem to solve.

Contrast the two approaches:

  1. Scale up existing preservation techniques for slices of brain tissue.
    Scale up existing microscopes for analysis of brain tissue.
    Scale up existing computer hardware for the massive scales needed to emulate a whole brain.
    Scale up existing brain models to include more parameters for an accurate simulation.
    Scale up existing robots to give the simulated people a body to control.

  2. Invent a technology that allows you to manufacture arbitrary machines atom by atom. Build entire factories able to self-replicate, with thousands of individual nanorobotic systems that replicate macroscale assembly lines.

Note, you must build this huge factory without the benefit of having said factory, so you have to shuffle the atoms around one at a time.

Once you finally solve all the problems needed to design and manufacture nanorobotics, now you need to armor these tiny robots to survive the chaotic and dirty environment inside a living organism. Oh, also, not only do your robots need to survive, but they need to not trigger the immune system into causing inflammation or scarring in the area.

Your nanoscale neuron replacements then need to be capable of replicating all the chemical signals emitted by the biological neurons. Inconveniently, these signals are themselves complex peptide molecules made in a completely different way than the components inside your nanorobot. (To summarize 3 books by Eric Drexler: proteins make a lousy building block for a nanorobotic system, so no, your nanorobot masquerading as a cell would not resemble a cell internally in any way.)
I do think #2 is technically possible, but I think it’s a much harder problem to solve - perhaps 100× harder, perhaps so hard a problem as to be impossible for human minds to solve within a single lifetime.
#1 has lots of prior examples, where humanity has scaled up an approach to solve a much bigger challenge. #1 would cost a large amount of money, but you could start on the project today.
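To put the compute-scaling step of #1 in rough numbers, here is a back-of-envelope sketch. Every figure in it is my own loose assumption, not a measured fact; published whole-brain-emulation estimates vary by orders of magnitude in both directions:

```python
# Back-of-envelope for the "scale up existing computer hardware" step of
# approach #1. All numbers below are loose assumptions for illustration;
# real estimates of brain compute requirements vary enormously.

neurons = 86e9             # commonly cited rough human neuron count
synapses_per_neuron = 1e4  # order-of-magnitude assumption
timestep_hz = 1e3          # assume the simulation updates at ~1 kHz
ops_per_update = 10        # assumed arithmetic cost per synaptic update

total_ops_per_sec = neurons * synapses_per_neuron * timestep_hz * ops_per_update
print(f"~{total_ops_per_sec:.1e} ops/s")  # ~8.6e18, i.e. several exaflops
```

Even under these generous simplifications the answer lands in exascale territory, which is why “scale up existing computer hardware” is a real line item in approach #1 and not a footnote.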

Which is why I peppered my posts with parentheticals and disclaimers as much as I did. #1 might get you an android clone faster, but it wouldn’t be you.

#2 would get you a cyborg, then android clone, but it would (arguably) remain you.

Which method would appeal more to the masses and have a vastly more profound effect on humanity?

#2 is just a more philosophically acceptable version of #1. An android clone of yourself is only “not you” to the “you” before being cloned. All your friends and relatives will not be able to tell the difference, nor will the post cloning “you” be able to tell either.

In principle, #1 should lead to a period of incredibly rapid change. There is no technical reason why these “android clones” could not run thousands, maybe millions of times faster than the original humans, or improve their internal thinking to have better than human intelligence and thinking efficiency. It would be an exponentially growing effect: the first bunch of clones could make incremental improvements to their own hardware, and that improved version would make further improvements, and so on.

The actual curve would be a decaying exponential, since ultimately the laws of physics limit how good the hardware can be, but in the first few years the process would be incredibly rapid. TLDR, this is one of the ways to start “the Singularity”. This would be a brief and exciting time where technology rapidly advances to the limits allowed by physics. I think it would happen over a few decades: at the beginning of the period, we have the triggering technology (there’s a few things that would lead to it), and at the end of the period, technology is very close to the absolute limits physics will allow.

We don’t know what those limits actually are, but we do know that most of the things that define our world are not among those limits.

Such as:

we are limited to a narrow slice of habitable terrain on a single planet. We cannot rearrange matter at will to make more habitable environments.
we are limited to very short useful lifespans due to flaws in our biological bodies. We cannot change the structure or apparent age of our bodies at will.
we cannot share information or techniques with each other directly, but must crudely explain things through speech and writing to others who must learn them slowly and painfully.
we require all manner of material goods to remain alive at all, and production of these goods requires a significant infrastructure owned by other people
we cannot rationally analyze a problem and develop a completely rational set of conclusions supported only by actual evidence (and easily prove others who do not follow such a process are wrong)

None of these things are absolute limits, but are just constraints imposed upon us by arbitrary past evolutionary events.

I see what you’re saying: the greater impact such technology will have on prosperity, and the arguably predicted technological singularity.

I was merely proposing an option to work around copying a mind and letting the original die (that original being, say, you or I), or letting the original be copied while also maintaining the original conscious entity (win/win). Yet we don’t even have a convincing theory of whether self-aware, intelligent consciousness is an illusory byproduct of the necessary cognizance our brains perform, or something very specific about the fine-grained, material “hard wiring” each neuron contributes to the cognitive process.

Either way, your point remains and is understood.

What is “you”? It’s like the question about teleportation – from what we know of quantum teleportation, I believe the source information is destroyed and recreated at the destination. If this could happen at a macro level for a human being, is it your contention that the product at the destination is just some amazing facsimile? If so, then I suggest that every human waking from sleep every morning is just an amazing facsimile of a previous self, retaining the same memories, and not the original person at all. How the hell do you know what happened while you were passed out? Maybe you got transmuted in some way. Would you know, or care?

This seems to attribute far too much to the physical body as identity, which is just a very powerful part of our ingrained nature. We would far sooner accept a loved one returned in the guise of an identical body but a totally different mentality in every way, than we would accept the original mentality in a different corporeal form. This is just how natural selection has evolved us, even to the point of attributing characteristics to others based on entirely superficial physical attributes. This is why just about all Congressmen, CEOs, and board chairmen have terrific haircuts and look exactly like your idealized father or grandfather, but I digress. :wink:

But the reality of what makes each of us what we are as an identity does not consist of a broken toenail or a bald spot, but the accumulated experience that makes up our consciousness. Sure, we value our physical bodies, and someone like a great athlete who is suddenly crippled would be thrown into a terrible realm of devastation; yet, ultimately, he would still be the same person – and many who have experienced such grief have been able to raise that personhood to triumph over the physical handicap. Our identity is shaped by our physical being just like it’s shaped by our physical experiences, but its manifestation resides entirely in the brain, and it exists entirely as information.

Unless you’ve already stopped breathing, of course. :wink:

As mentioned above, that depends on what you call “you”. There is a philosophical question here that I think is unanswerable, because it assumes the existence of something that I don’t believe actually exists. (However the existence of that something can’t be resolved either.)

It would no doubt be more palatable to many, simply by sidestepping the philosophical question.

It’s not shared by me, but I’m sure I’m in the minority.

So you say, but you’re making assumptions about the nature of “you”.

I have no doubt that the version that replaces you gradually would have vastly more appeal! About profundity, I’m not sure.

It’s exactly that question. Admittedly, gradual replacement sidesteps this issue.

My position is that continuity of identity is an illusion supported by memory. I admit I’m making the contrary assumption about identity that cmyk makes above. I can’t think of an experiment to discriminate between the two definitions, which makes me wonder if perhaps we’re arguing over something that isn’t significant.

Anyway, we did this to death on the various teleportation threads. The bottom line, as I see it, is that people who disagree can’t make any convincing arguments to each other, at a very basic philosophical level. The issue revolves around the meaning of “identity”, especially when applied to something that is a process rather than an object.

I side with woflpup. I don’t have any arguments that will convince those who believe that there is some special attribute about a process that gives it a unique identity that is preserved across a discontinuity (such as losing consciousness, regardless of whether brain processes continue, even those that impart some semblance of consciousness), other than the memories stored.

There are lots of interesting thought experiments. They’re all beside the point of the question here, which is how long until we might be able to do this, regardless of our philosophical beliefs about the consequences.

I think we’re quite a long way off. We might have the processing capability in 20 or 30 years (though we keep revising our estimates of the processing power of the brain). But it takes a LOT more than processing capability! We’re still pretty ignorant about how intelligence works. What we’ve gotten a lot better at is doing a lot of the simpler things that intelligence does, though we tend to do it in a very different way (e.g., speech recognition, visual object recognition, spatial problem solving.)

My guess is that we may blunder into awareness the same way nature did: by heaping our solutions for specific problems together until something unexpected happens. Once it does, it might even take us a while to recognize it.

In any case, it’s all about ego, and I mean that at least two ways. The desire to preserve one’s identity past one’s biological death implies that one thinks that oneself is worth preserving. And of course, the philosophical question of what is the ego (the self) anyway?

Which is why I’ve been careful to disclaim that my mentioning of anybody’s “youness” is begging the question to begin with.

Yet it remains an unshakable conviction for myself, philosophically. I’d take the gradual approach, if I had any say, but ultimately, we just need a better theory for human consciousness.

My opinion is that this never happens. Not in 40 years, not in 400 years.

Even if we could do it, it sounds like the sort of thing religious people would have banned at the federal level.

Looking back at human history, especially scientific history, “never” is a very heavy word.

As a materialist, I find this question puzzling. What makes “you” anything more than a particular configuration of atoms and electric charges? It’s like the question of teleportation and whether or not it kills you. Seems rather silly to me.

Is anyone on this forum not a huge Douglas Adams fan?

(I hope not! :smiley: )

That assumes the continued advancement of science. I expect a new dark age (it’s happened several times before) of islam vs. the world: religious wars following global financial collapse within the next decade or two. People will be lucky to have electricity, let alone a computer to store their brain. The primary use for electricity will be to blast islamic calls to prayer out of loudspeakers. I hope to be dead before this happens.

It may seem silly, but it’s a much-debated topic in philosophy, and it’s at the heart of the OP. There seem to be three camps on this thought:

Camp 1) Self-awareness, consciousness, and the notion of an “I” are unique to your brain/body/physiology. A perfect copy of yourself, down to every atom, might think and act like you, but it’s really a clone that thinks and feels like it’s you. There would then be two distinct individuals.

Camp 2) All the same self-consciousness stuff mentioned above is an emergent property of any system close enough in emulation (at the least), and that you and only you would be aware no matter which conscious system was in operation.

Camp 3) Your “youness” is an illusion; despite the apparent continuity of this feeling across breaks in consciousness, there really is no “you” there. There never was, for anyone.

I’d put myself in Camp 1. I could see the argument for Camp 3. Camp 2 seems absurd to me.

There’s multiple competing nations out there. The first group to crack this problem and get electronic copies of some of their brightest citizens will have an overwhelming advantage if they are the only ones with the tech.

Frankly, I think superintelligence would be more useful as a weapon than nukes.

Tech is too useful, and the information it is stored on is too durable. During the last dark age, a lot of the administrative techniques and civil engineering of the Roman Empire were lost. But the Romans didn’t have something insanely and immediately useful like computers or AK-47s. During the “dark” ages, the powers of the time had no trouble retaining quite a bit of technology, such as how to smelt iron, agriculture and animal husbandry skills, and so on.

I’m of the opinion that another dark age became impossible around the time the printing press was invented.

As for the Islamic world taking over: what the nuts in that world have is a lot of irrational people, and some money they got because they just happened to have been sitting on a bunch of oil. Those “assets” are enough to be dangerous, but not the kind of thing that would give them a long term advantage.

Thanks for the replies. @Habeed: The full brain roadmap PDF is helpful. It sounds like it will not happen in my lifetime, so rather than eat lima beans and broccoli, I think I’ll eat lotsa meat, drink beer, and live life to the fullest. Then I’ll get my head frozen at Alcor instead.

When I awake and am on the robot side of the inevitable human-robot war, all of you will be spared from working in the robot mines for your help in the matter.

I disagree. The average person doesn’t even understand how most tech works, and could never make it themselves. If you went back in time to the year, ohhh… 1500 or so, what modern inventions would you be able to make or instruct others to make?

Religious fundamentalism is spreading faster and faster. Not just islam, the number of Americans who don’t believe in evolution is at an all-time high.

But the main reason I think islam will take over is because most Westerners are crippled by political correctness. Trying to prevent muslims from destroying America is “racist” and most of the people I talk to would rather die tolerant in a nuclear holocaust than have someone insinuate that they’re a bigoted racist. Never mind that islam is not a race.

If the Soviet Union had called communism a race, and anti-communist sentiments “racist”, we’d all be speaking Russian right now, starving in bread lines.

Islam doesn’t need smart leaders, grand military might, and expensive assets to destroy the west. Our ignorant political correctness, coupled with a small amount of taqiyya, and the stereotype of muslims as “brown people” (thus making opposition to them “racist”) is all that’s necessary. That and cheap explosives, guns, rocks, sticks, and fire.

The last time they carried out a major attack, over 1 million people were killed in retaliation. Lancet surveys of Iraq War casualties - Wikipedia (admittedly, virtually all of them had nothing to do with it, but the point remains)

You’re mostly talking about France, and their political correctness wherein they allow huge numbers of Arab immigrants, some of whom seem to do nasty things.

I don’t disagree that political correctness is counterproductive. Not every nation does it (China, for one). I’m disputing your assertion that a worldwide dark age will come about. Maybe the politically correct nations will fall, but numerous advanced societies do not have such beliefs.

Who doesn’t want to be Bender the robot?

The hell does the OP have to do with Islam?

And I wanna be Bender but with more apathy.

Simple. For this kind of far-off technology to ever be developed, humans have to keep developing tech and not let themselves be overwhelmed by irrational religious idiots.

Religion is the reason we don’t have this technology, or at least why we aren’t pumping trillions of dollars into developing it.