But there are some idiots intent on trying it by the end of the year.
No soul required. Your reference point is gone. This is the transporter problem all over again.
To the rest of the world, if my brain could be copied into an android, I have gone on living. But it does nothing for me. I’m not having new experiences, my copy is.
Of course, for all we know, the same thing may happen every time we go to sleep. After all, our consciousness takes a few moments to “boot up” when we wake up–maybe it is being reconstituted anew each time, and every time you go to sleep, you “die” and are replaced by a new instance.
:eek:WooooOOOOOooooooOOOOOOooooooOOOOOOooooo!:eek:
This. I understand Stranger’s saying we don’t know anything about what might be involved in the science and technology underlying this and so we cannot make any meaningful statement on its possibility. But using that as a basis limits our abilities to say anything about anything. We don’t know whether teleportation is possible. We don’t know whether time travel is possible. We don’t know whether faster-than-light travel is possible. We don’t know whether levitation is possible. Heck, we don’t really know whether phrenology works or if LBJ’s clone ordered the Kennedy assassination or if unicorns can fly to the end of rainbows.
I’m reading all the robot and cyborg stories in the history of science fiction. (Yes, all of them, if I can get my hands on them, starting from the 19th century. At some point I hope to put out a comprehensive list. I have about 200 entries up through the mid-60s.) Virtually every one puts the problem in exactly the same conceptual terms as faster-than-light travel or time travel. It’s simply a plot device, a what if not much different than what if Lincoln hadn’t gotten shot. Some of the better authors try to say something about being human in the course of the story yet even those fall far short of what would normally be considered a good mainstream story about daily life. (The only exception: “No Woman Born,” by C. L. Moore, and that mostly succeeds by ducking the issue.)
The issue is not that we don’t know the technology. It’s not even that we don’t have an understanding of consciousness. We have yet to come up with a basic definition of being human, of being sentient, of being alive. We know nothing about existence. Certainly we’ve solved problems in the past without understanding the underlying science: we don’t know exactly how anesthetics work, but they knock me out just fine. But we’re not asking a technological question like whether men will ever build a heavier-than-air flying machine. If we might as well be debating unicorn behaviors, I’m tempted to equate “maybe someday if everything changes” with an operational definition of “never as far as the eye can see.”
“Continuity” can be achieved by interfacing with the original at the moment of death. You would still die, it just wouldn’t be quite as fatal.
I still think the odds are pretty slim, but balanced against the certainty of the alternative…
I don’t expect to make it to when it becomes a viable possibility for me.
But Og knows what form such a measure would take at first – what if it is to be some sort of Cloud existence? Would a mind that has always existed embedded in a physical body be able to take it?
Great post…first paragraph especially.
And for your science fiction list I suggest the early works of William Gibson, Neuromancer in particular. Sort of a melding of virtual reality, AI, and what consciousness is (at least that’s what I understood). Thinking about it, it may be that virtual reality holds some promise in figuring out what makes the brain function as it does. Pump in all those inputs (sight, sound, smell, taste, skin sensations, etc.) and see what circuits in a human brain light up.
I’m just an armchair thinker…aka modern serf…
ps to others
regarding the brain transplant? Uh yeah…probably hard to connect the output to input cables… maybe some kind of bio/nano/glue might work though?
And the transporter problem isn’t settled.
What is your “reference point”? What makes you the same person as yesterday’s Sterling Archer? What defines you?
If I’m not mistaken, there’s a smallest unit of time, the Planck time. So, you’re you. We go on to the next Planck time. There’s this person who is almost identical to you. Why is he the same person? If he is, why is an exactly identical person created at this point not you?
You perceive continuity from one moment to the next. Both the “next Planck time” person and the newly created person perceive this continuity. They also are indistinguishable from you by any measure (or at least nearly indistinguishable in the case of the one you think is still you). What makes you think that one is you and the other isn’t?
Is the goal to make an emulation that works exactly like you do? Or to copy the synaptic wiring and use a simplified model that is sentient?
Simplified models work. This is why neural networks can drive cars, recognize speech, and perform other tasks. *Probably* everything else we are capable of can be done with simplified models; the tiny details of how our synapses work are probably not actually needed for intelligence or sentience.
I personally think that there will be rapid and accelerating progress with simplified neural network models. If the brain is 1000 subsystems, these would be “AIs” that use maybe 10. They are also hybrid systems - for an autonomous car, you use neural networks to process sensor inputs (from cameras and lidar), but you use an internal state model that is programmed conventionally and tracks the predicted physics of the car and surroundings, per conventional math.
To illustrate what I mean, if you built an AI that played baseball, it might use a neural network to recognize the ball, but once it has identified the ball in each video frame, it would calculate the exact speed of the ball from the movement between frames, and then use a model that includes the equations for momentum and kinetic energy and the elasticity of the ball to predict the outcome when it is hit by the bat. The robot would then queue up a precise sequence of servo power curves to reach the desired bat speed at impact.
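To make the hybrid idea concrete, here’s a toy sketch in the same spirit (everything here – the frame rate, the 2D physics, the function names – is invented for illustration, not from any real system). A neural network would supply the ball’s position in each frame; everything below is plain Newtonian math.

```python
# Hybrid pipeline sketch: NN output in, conventional physics out.
# Assumed values: 60 fps camera, 2D ballistic motion, no drag or spin.

FRAME_DT = 1 / 60.0  # assumed interval between video frames, seconds
G = 9.81             # gravitational acceleration, m/s^2

def velocity_from_frames(p0, p1, dt=FRAME_DT):
    """Estimate (vx, vy) from the ball's position in two consecutive frames."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def predict_position(p, v, t):
    """Ballistic prediction: where the ball will be t seconds from now."""
    x = p[0] + v[0] * t
    y = p[1] + v[1] * t - 0.5 * G * t * t
    return (x, y)

# Two detections 1/60 s apart, then predict half a second ahead.
v = velocity_from_frames((0.0, 2.0), (0.5, 2.1))
print(predict_position((0.5, 2.1), v, 0.5))
```

The robot’s servo planning would then work backwards from that predicted position, but the division of labor is the point: perception is learned, prediction is just equations.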
Anyways, look, not dying biologically may not be possible. I don’t think we should quietly accept oblivion without even trying to avoid it. There are 2 key pieces of progress that may be possible within our lifetimes.
- We could prove with simplified AI models that true human emulation is possible. And not within some ridiculous number of centuries, but some reasonable number of decades.
- We could work out some way of preserving brains that allows those of us who die from failing biology to still exist, at least as a frozen or plastinated rock that could later be converted when the technology becomes possible. Maybe it wouldn’t be a great experience to recover from that - after all, it’s not you, it’s a copy - but it probably would be better than nothing.
The history of human experience contains countless efforts to preserve part of yourself into the future. From writing books to a carving on a tombstone to a pyramid, a little bit of yourself survives. An entire preserved brain of yourself is the ultimate form of that. Maybe you’ll be dead, but future people would still have access to basically all of you.
One interesting thing with the transporter debate is that *both* sides are convinced that the other side is positing the existence of a soul :smack:
- There are actually more than 2 resolutions of the transporter problem, but I’ll wait for the thread to be unequivocally hijacked before I go into it
Oh, just go start a fresh one and link to it from here. We haven’t had a 30-pager on a non-politics topic in just months and months. 
So I’ll ask: why is a “soul” relevant? Play this out a bit. So you’re transporting around the universe, being destroyed and recreated. Each time you materialize, if souls were real (and there is no evidence to support that), what do you think happens? Does the new copy get a new soul while the copy who existed for 10 minutes goes on to the afterlife? Does God recognize the resource transfer and move the soul over?
Either case is indistinguishable and not really a problem either way. No, bad science fiction would be that God hath decreed that only two people who fuck get to create a baby with a soul (which is why religious people are against abortion even at the very earliest stages), and thus this process would kill the original and create a soul-less copy who is, I guess, irredeemably evil.
As a side note, if this were true, this is great news. We’d actually have discovered something!
Anyways, there’s no evidence for souls. It’s just fantasy talk - just as there’s no evidence there is any way to reverse gravity or create the negative energy needed for wormholes.
What there is evidence for is that:
a. Synapses in the brain perform a computation, and the core functionality of that computation is cheap and easy to model.
b. Artificial synapse systems, called artificial neural networks, are very simplified models compared to what they are based on, yet it is indisputable that they work, and some experiments have produced brain-like functions and behavior.
c. There are trillions of these synapses forming functional subsystems in the brain. It is likely that if you built a computer with enough hardware and software to model all these critical parts, that computer would exhibit behaviors we would call “sentient”. It would probably be able to pass Turing tests and be capable of most tasks humans can do.
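To show what point (a) amounts to in practice: the core computation attributed to a synapse/neuron in these simplified models is nothing more than a weighted sum and a threshold. A minimal sketch (the weights and threshold are arbitrary illustration values):

```python
# The standard artificial-neuron abstraction: weighted sum plus threshold.
# This is the simplified model the post refers to, not a biophysical one.

def neuron_fires(inputs, weights, threshold):
    """Fire (1) if the weighted sum of incoming signals crosses threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three inputs with mixed excitatory (positive) and inhibitory (negative) weights.
print(neuron_fires([1, 1, 0], [0.6, 0.5, -0.4], 1.0))  # -> 1 (fires)
print(neuron_fires([0, 1, 1], [0.6, 0.5, -0.4], 1.0))  # -> 0 (stays quiet)
```

That’s the whole per-unit computation; everything interesting comes from wiring billions of these together and tuning the weights.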
The actual reason case c has yet to happen is hardware. Turing’s model of a computer requires adequate memory or the computer cannot model a system, and here you need an incredible amount of it. Let’s approximate: the brain has ~86 billion neurons, per recent research, with about 1000 synapses per neuron on average. Say you use 256 bits, or 32 bytes, per synapse to model every state variable (this includes learning states, the present state, and the average concentration of regulatory molecules nearby).
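A quick back-of-envelope check of those figures (all three numbers are the assumptions just stated, not measurements):

```python
# Memory needed to hold one state snapshot of every synapse in a human brain.
neurons = 86e9              # ~86 billion neurons (rough figure from above)
synapses_per_neuron = 1000  # assumed average
bytes_per_synapse = 32      # 256 bits of state per synapse, as assumed

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 1e15)   # in petabytes -> prints 2.752
```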
You need 2.75 petabytes. That’s RAM. And not just any RAM - think how a synapse works.
Every single update cycle, the synapse can either fire or not fire, depending on whether signals arrive. If the synapse has an electrical charge above resting state, each emulation cycle that charge decays a bit.
If you are a chip designer or have some idea of the engineering, you’ll realize that means that a computer that emulates a brain needs to access some memory for every synapse in the entire brain every update. The CPU architecture we use now assumes that most RAM will not be accessed most of the time. Only a tiny active region is being worked on at any instant in time, and that gets cached at various levels inside the processor itself.
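To see why, here’s the update loop in its simplest possible form (the decay constant, charges, and threshold are invented for illustration). The thing to notice is the memory access pattern: every synapse’s state is read and written on every cycle, so the working set is the entire multi-petabyte state array, which defeats the cache hierarchy completely.

```python
# Sketch of one emulation cycle over an array of synapse charge states.
DECAY = 0.9  # assumed fraction of charge retained each update cycle

def step(charges, arrivals, threshold=1.0):
    """One update cycle: decay each charge, add arriving signals, fire-and-reset."""
    fired = []
    for i in range(len(charges)):    # touches every synapse's memory, every cycle
        charges[i] = charges[i] * DECAY + arrivals[i]
        if charges[i] >= threshold:  # enough accumulated charge: it fires...
            fired.append(i)
            charges[i] = 0.0         # ...and resets to resting state
    return fired

charges = [0.5, 0.0, 0.95]
print(step(charges, [0.6, 0.1, 0.0]))  # -> [0]: only the first one fires
```

In a real brain there are ~86 trillion iterations of that loop body per update, which is exactly why conventional CPU caching buys you nothing here.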
What you need for a brain emulator chip is more like giving each synapse a tiny bit of memory and a dedicated section of circuitry that updates it every clock cycle. It would look radically different from any CPU or GPU commercially available - and would be many orders of magnitude faster at this specific task. That’s kinda sorta what IBM and Google are working on - they have custom chips meant for this task.
That’s what it will take. My own calculations say it can be done - that for a few percent of the world’s refined silicon, a brain emulator could be built today. If you could pattern silicon in 3D - and no, I don’t know how to do that; all the processes are meant for 2D - such an emulator theoretically wouldn’t need to be much bigger than the brain itself.

But what you would need for early brain emulators is a *lot* of fiber optic wiring. There is a way to build fiber optic transceivers directly into a chip. You’d use a chip library where different brain emulation chips are optimized for different regions of the brain (since the synaptic wiring pattern varies a lot between regions, you optimize a chip to model a particular region efficiently). Then you would need a very large number of fiber optic interconnects to send the inter-region messages around. Even on a napkin sketch, the fiber optic bundles and the dedicated control chips for network flow would probably dwarf the emulator chips themselves in size. Just like in the real brain - most of the bulk of your brain is wiring.
You must really hate going to sleep.
I agree with zero, and here’s why: if you can “upload” consciousness, then you should just as easily be able to “copy” it - but then you should be able to control all of the copies (otherwise you can’t transfer consciousness in the first place), so does that turn the copies into a hive mind?
I think the issue is conscious self-awareness, not sentience. After all, people want to “live on” in an upload, not just be a useful expert system.
And we don’t really have much of a clue yet how consciousness arises. It could be something that would arise in an artificial neural network. But then again, I think of Adrian Thompson and his evolved circuit experiment. After a series of mutations and tests something that did the assigned task well evolved–but part of the circuit was completely disconnected from all electrical input and yet was absolutely essential to the circuit’s operation. And setting up the same arrangement into supposedly identical programmable chips didn’t work–the circuit worked only in the exact chip it evolved on, and depended on incidental flaws in that individual chip that aren’t exactly matched in other chips of the same model. There could be something about consciousness that is analogous–that takes advantage of some feature of wet meat and won’t work in anything else.
This. I’ve been thinking about this, off and on, ever since becoming a parent. A six-month old baby has obvious desires, preferences, feelings, at some crude level volition.
What IS it? How the fuck did it GET there?! We don’t know the answer to that - we have no idea how to start figuring out the answer to that. So we have no idea how to make a computer give a damn whether it’s on or off, let alone care about whether you’re working on a spreadsheet or watching a Youtube video.
I don’t see our making much progress on this before I shuffle off this mortal coil, probably somewhere around mid-century. Maybe they’ll make a breakthrough in comprehension a century or three down the road, but I won’t be around to see whether that comes to pass.
One reason that I’d kinda like to be uploaded into an AI existence, were it possible, is something I’ve been saying repeatedly in threads about manned space flight: space is for robots.
The degree of difficulty of protecting and sustaining our fragile bags of flesh in outer space is insane, compared to what it costs to do the same thing for the probes that we send to distant corners of the solar system. So if we want to explore space, the best way to do that - hell, about the only way with much chance at all - is to embed our consciousness into robots.
So it is kind of a pity that it’s not on the horizon. But it’s not.
Thank you.
I bought *Neuromancer* when it first appeared as part of the Ace Science Fiction Specials line in 1984. And Count Zero and Mona Lisa Overdrive when they came out as paperbacks. That’s how old I am. All three are signed by Gibson, too.
That evolved circuit experiment was a setup where the circuitry lacked many of the protections and complexities that neurons have to prevent this sort of thing. Actual neurons are in bundles and it takes statistically relevant numbers of them firing in order for a signal to continue. This helps filter out low level noise, which is what caused these bad results in this experiment. In short, the experimenter created a situation via a set of rules where the optimal circuit was terrible. Slightly more complex and better rules do not have this flaw.
As for consciousness - again, if you don’t believe in magic, you must realize that this is simply an artifact of complexity. Once you have enough brain systems to process information to very abstract forms, you arrive at a process that apparently can be self aware. I don’t expect this is anything special or magical, the sole reason we haven’t duplicated it is we haven’t built a reliable framework and a complex enough artificial intelligence to exhibit information processing at this level.
So duplicating consciousness is simply a matter of discovering the rules for brain organization by region, discovering the rules for individual synapse firing and learning (both sets of rules are very simple and can be described in a few hundred characters of math), and then building a system that has all of these components connected and functioning.
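For the skeptical, the kind of rules being alluded to really are short when written down. As one much-simplified textbook example (a standard abstraction, not a claim about how the actual brain computes), a leaky integrate-and-fire neuron plus a Hebbian learning rule fit in two lines:

```latex
% Leaky integrate-and-fire dynamics (spike and reset V when V >= V_th):
\tau \frac{dV}{dt} = -\left(V - V_{\text{rest}}\right) + R\, I(t)

% Hebbian weight update (eta = learning rate, x_i / y_j = pre-/post-synaptic activity):
\Delta w_{ij} = \eta \, x_i \, y_j
```

Here V is the membrane potential, I(t) the input current, and the weight change strengthens a connection whenever activity on both sides coincides. Whether rules this simple actually suffice is exactly the open question in this thread.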
The fact that the brain is so robust implies strongly that you only need to get close. People who are drunk are still conscious. People who have a bullet pass through their brain sometimes survive and recover and are still conscious. Huge variances in brain genetics seem to still result in people who are still conscious and sentient.
Prove it.
Why are we talking about simulations? I want the real thing! Upload me into a robot body with mechanical legs, hands and arms, electronic eyes, ears, olfactory sensors, etc. Add some more features like flight or infrared vision, cool, but don’t put me in a glorified nursing home box with what amounts to very HD TV playing 24/7. The whole point of cheating death is lost if you can’t hang out with your friends and family down at the park.
So? Your mind today is just a copy (with modifications, of course) of yours from last week. Nothing material about us is permanent. Only the organization. And the information contained within that organization.
Continuity isn’t the key, either. Ever had general anesthesia? Or heard of people who died temporarily in the hospital? Minds stop and start fairly often nowadays, even if you don’t count sleep.
I don’t think we’re talking about divorcing the mind from substrate, simply replacing the meaty, carbon-based substrate we’re all currently using with more durable silicon and steel.