It seems to me that “modeling” a human brain via software that emulates the state of every neuron is hideously expensive and wrongheaded. It’s like modeling a cloud by keeping track of the state of every molecule of water and N2 and CO2 and O2 and dust particle, and all the interactions between them. And every molecule you add multiplies the number of interactions you have to track, so the cost grows combinatorially, not linearly.
If we really wanted an artificial brain that worked like a human brain, it’s probably better to use components that work more like human neurons, not transistors. Then you hook up 302 neuristors and wow, it emulates a nematode. Hook up 10,000 and you’ve got something as complex as the brain of a sea slug. It would take a million for a cockroach, 200 million for a rat, 5 billion for a monkey, 86 billion for a human, and 250 billion for an elephant. Oops. I guess number of neurons only correlates loosely with intelligence. Although those elephants are clearly holding out on us.
But whatever happens, we have got
The Maxim gun, and they have not.
Also, the task of inventing neuristors is left as an exercise for the student.
What makes you think silicon is more durable? Your brain can run for several decades before it begins to wear out. Think you can say the same for a CPU?
You can have backup CPUs. Or load yourself onto another one before the first wears out. If you think we can do that with brains let me know. That’d be cool too.
Sorry I was out a while, need to reply to some slightly older comments.
No, you’ve misparsed me.
The point is, to most people (including me), arguing from science and rationality, “soul” is an inconsistent and incoherent hypothesis, completely contrary to our understanding of the brain.
So saying “Your argument depends on souls” is saying “Your argument is unscientific nonsense”.
In the transporter argument, it basically goes like this:
“Transported” advocates say: If the person at the destination is not you, what does he lack? Is it a soul?
“Not transported” advocates say: When we normally have two identical entities we say we have two, separate entities. Why would we treat the brain as a special case in this regard? What association is there between the two brains…is it a soul?
There are actually compelling, non-soul based, arguments against both positions (More arguments against the “Transported” position I think, but it doesn’t matter; a single compelling counter-argument is enough).
There is a third option that right now is resilient to counter-argument, but it’s not a pleasant one: there’s no continuity of consciousness, ever. Consciousness is an instantaneous phenomenon, and the fact that it feels like you’ve lived for N years is just a symptom of having access to N years of memories.
If you can think of a counter-argument to this position, I’d be grateful.
This is a negative way to look at it, but it’s a practical view. This means that all that has to happen in order for you to live on is for those N years of memories to continue to exist. Having your brain preserved in a soup of chemicals and chilled or converted to plastic, and then later scanned is perfectly acceptable from this philosophical view. As messy as the process sounds, the end result is a being with N years of memories, just like right now.
But this is not what this hypothesis is saying. It’s not saying that in order to be you, you just need memories to be the same.
It’s saying there is no concept of “live on”, even now while you’re alive.
If having the same memories is enough to “live on” then that’s the “Transported” option, and we can discuss the various counter-arguments to that position.
No I love it. In fact, I bought one of those sleepy time CD sets, with the sleepy music on one disc, or else falling rain or ocean waves. It works!
It isn’t as if my brain ceases to function during sleep. Now, if you disintegrate my body and make an exact copy somewhere else, I’ve just died and a new, basically identical consciousness has been created.
Same if you copy the contents of my brain into some computer system. The actual consciousness is in the body. How can that be removed from the body? Throw away the body and the person is dead, no matter how good a copy of the contents you have in your machine.
It just strikes me as a ghost in the machine fantasy that the ghost itself can be separated from one body and put into another. I see that others seem to believe this and I can’t say I am 100% certain, but my view is that some of you have read too many stories.
Also, wrt Planck time: I don’t think there is a universal “Planck Clock” such that every particle in the universe is advancing in lockstep, one quantum to the next. I think each particle is in its own rhythm which overlaps with others, and so there really isn’t a “frame-by-frame” effect going on at the organism level.
General anesthesia doesn’t kill a person. I think a body temporarily “dying” and being revived hasn’t really died, and anyway even if you can be persuasive that it has, a body is built to contain its original consciousness. I think the transporter example is suggesting that we will have the technological ability to create a consciousness, maybe one that is an exact copy of you, but again, it won’t be you because you died during the disassembly process.
Does anybody think we’re going to have a technology like a transporter anyway? That, too, seems like a fantasy. 3D printing has a long way to go, but to spontaneously recreate a human body down to the quantum level seems beyond merely tricky.
One would expect that a prerequisite would be the enormous computing power that those who understand these things say quantum computers will provide. Supposedly, they could solve problems that would take a supercomputer as much time as has existed since the creation of the Universe. Don’t ask me. This is just what I have read.
Since being able to create a human brain atom by atom would not violate any laws of physics, it would seem possible in a century or two. Or three. Or a millennium. Maybe. Unfortunately, too late for present day Dopers.
There have been many threads on this on the dope, and they usually run to at least 10 pages to get absolutely nowhere.
Before linking those, I recommend at least reading the basics on personal identity. Because, frankly, it is one of those problems that seems most simple to people who haven’t actually given it much thought.
I’m pretty sure you and I have attempted to hash this out before, and failed to reach any understanding, but I’m still going to respond to this anyway, possibly because I am a masochist.
So let’s imagine there is an identical copy of your mind and consciousness placed in a new body, but the current you gets to go on and continue to exist. Do you see any difference between the experiences and memories your copy will accumulate in the future and those your original body/mind will? If you’ve always wanted to go to Paris, for example, and you don’t get to go, but your copied self does while you are at home working, does that fulfill you?
Not directly. A practical transporter would be a lot simpler. If there are 86 billion neurons, an average of 1,000 synapses each, and you need 256 bytes to represent each synapse’s state, that’s 22 petabytes of data. Or 22,000 1 TB SSDs if you want to think of it that way - a fairly big pile of equipment by today’s standards, but nothing unfeasible in the foreseeable future.
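The arithmetic behind that estimate is easy to check. All three inputs (synapses per neuron, bytes per synapse) are the assumptions stated above, not measured values:

```python
# Back-of-the-envelope check of the storage estimate above.
# All three inputs are assumptions from the post, not measured values.
NEURONS = 86_000_000_000        # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1_000     # assumed average
BYTES_PER_SYNAPSE = 256         # assumed state size per synapse

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
total_pb = total_bytes / 1e15   # petabytes
drives_1tb = total_bytes / 1e12 # 1 TB drives needed

print(f"{total_pb:.0f} PB, or about {drives_1tb:,.0f} 1 TB drives")
# prints "22 PB, or about 22,016 1 TB drives"
```

So the 22 PB / 22,000-drive figure holds up, give or take rounding.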
That’s the part you need to send. The exact details of your body are mostly irrelevant - a generic human one with a mapping to every receiver/transmitter in your sensory/motor homunculi would probably work fine. Beyond the synapse data, there are roughly 300 neurotransmitter-receptor pairs, and a number of glands in the brain that provide special functions. So you would probably also send some sort of compressed data describing a hardware circuit to implement each of the 300 receptor pairs, plus a set of manufacturing specs for your robot body - if the receiving device doesn’t already have that information stored.
The receiver uses an algorithm to generate procedurally a hardware design for the circuitry needed to run you. It then sends the hardware design to a software system that determines how to manufacture that design using available fabrication equipment. In essence it’s a hyper-advanced 3d printer - you can generate the codes for moving the print head from a 3d model - though many other manufacturing steps that are quite distinct from current 3d printing technology would probably be needed. It also builds an instance of your robot body the same way, or more realistically, allocates a virtual reality environment for you to inhabit, since that is more efficient.
Can you send this kind of data over interstellar distances with a laser? The fiber optic speed record is about 256 terabits/second, so if you could achieve that speed in open space with an amazingly tight focusing mirror or gravity lens, it would take just over ten minutes to send the key data over interstellar distances. You’d need to also include redundancy data - algorithms exist where you pad N data bytes with M redundancy bytes, and so long as any N of the total N+M bytes are received successfully at the other end, you can regenerate the correct message. So no need to wait years for a re-send so long as enough of the sent bits made it through.
So yes, a teleporter is entirely feasible and the technology to build one is plausible. There are some assumptions I have made.
I am assuming the human brain uses no magic or “souls” or any other bullshit.
I am assuming that it would be possible to build an equivalent piece of machinery, out of silicon or structured carbon, to a brain that would be about the size of a brain or a just a reasonable size factor larger.
I am assuming that eventually we’d be able to cram all of the components for a self replicating factory into a starship and actually get it to another star, where it could then use the seed factory to regenerate the industrial capabilities that created the starship in the first place. (from various rocks and such in orbit around the destination)
And of course, every time you “wake up” having been copied in a future society with this kind of technology, you have to deal with whatever negative feelings you have about having “died” and been recreated as a copy. If you think about it, such feelings are not useful and one expedient thing to do would be to remove your capacity to be upset by them. The most successful beings in such a universe would have no problem with copy-suicides whenever it is convenient.
And you can claim someone else doesn’t understand science over the internet all you like, without providing any evidence. I’m just going to come out and say that every assumption I have made fits current models of science as accepted by the overwhelming majority of credible scientists. I will support, with references, any specific claim you want to argue about.