Mind transfer question: preservation of personal identity vs dichotomy between versions

First of all, let me say that “philosopher” can mean many things, and that cognitive science is an interdisciplinary field where many of the key players hold that title jointly with other academic titles. There’s a difference between contrarian philosophers like Dreyfus (and, to some extent, Searle) and the rightly well-respected individuals you mention. The former seem to have major gaps in their understanding, whereas Putnam and Fodor and other key players have been closely associated with empirical research. Whether or not they’ve been “doing” science, they’ve definitely been synthesizing real science.

As for the idea that CTM is just airy-fairy philosophy not grounded in empirical science, I would suggest that, just for one example, the phenomenon of mental imagery has been among the most intensively investigated in experimental psychology. Page 3 of this paper on mental imagery, for example, highlights very significant differences between how we actually process mental images and the traditional pictorial paradigm favored by theorists like Kosslyn. The central question here is: do we process mental images in the same way as we process visual retinal images, or do we store and process them representationally, as computers do and as CTM would posit? The cited examples provide empirical support for computational-representational theories, which is about as close as we’re going to get to a resolution in the foreseeable future. In particular, the evidence contradicts the pictorial model, while support for CTM-like representationalism is strong, reinforcing the point Fodor made in my previous quote about the explanatory power of CTM.

This is true, but “there have always been metaphors” is an argumentative fallacy. It’s not a refutation of the CTM argument that “computation must not be viewed as just a convenient metaphor for mental activity, but as a literal empirical hypothesis”.

Aside from the rather significant point that I never made that claim, my only point there was against the claim that uploading the mind to a computer was impossible, or whatever the exact words were in the claim. If one can truly declare something to be impossible, rather than just difficult or not presently technologically achievable, then one must be able to state the theoretical grounds for the claim. The present situation is that we just don’t know, but from an information theory perspective it’s at least plausible. Which is a very far cry from “impossible”.

What exactly does the paper you have cited prove? Explain it to me like I’m a fairly intelligent child. All it seems to me to say is that the way we process visual information is complicated, something I didn’t doubt.

I never intended to refute CTM. I’m asking you to prove it. Again: is it a falsifiable hypothesis, and what would falsify it?

I would love it if my mind could be uploaded to a computer. Who wouldn’t want to live until the Sun swallowed the Earth? Or possibly until all matter in the universe decayed? But please show me how this is possible.

Well, we have an established model that the brain is the product of thousands of physical computational circuits. We know for a fact that consciousness is far too complex a phenomenon to be hosted on a single neuron. The proof of that is the information carriage capacity of a single neuron’s individual pulses, separated in time with a Gaussian distribution of noise. (I can go into far more detail if you want a rigorous proof, but you probably realize this as well.)
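A back-of-the-envelope version of that proof, in code. The inter-spike window and timing jitter below are illustrative assumptions, not measured values:

```python
import numpy as np

# Rough upper bound on the information a single neuron can carry per spike,
# if information is coded in spike timing with Gaussian timing jitter.
# Both numbers below are assumptions for illustration only.
T = 0.100       # assumed usable inter-spike window: 100 ms
sigma = 0.001   # assumed Gaussian timing jitter (std dev): 1 ms

# Capacity-style bound: log2 of the number of distinguishable spike timings.
bits_per_spike = np.log2(T / sigma)
print(f"~{bits_per_spike:.1f} bits per spike")  # ~6.6 bits

# Even at 100 spikes/s that is only a few hundred bits per second,
# nowhere near enough bandwidth for one neuron to host consciousness.
```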

We do know that consciousness almost certainly requires the collective activity of millions of neurons. Each is obeying its own individual laws of physics and programming, and each is in itself unaware of its contribution.

So, ahem, I gave a specific example of a consciousness-preserving model above that is consistent with this theory. Aka, a ship of Theseus incremental replacement.

No credible scientific theory says this *won’t* work. We don’t need to know the exact firing pattern of information flow that creates what we experience as consciousness to conclude it comes from individual components that obey simple rules, since we have definitive experimental proof of the latter.

Yes, “it’s complicated” is a reasonable conclusion, but that wasn’t the question. ISTM that as your issues get addressed, you keep moving the goalposts and asking different questions. Remember that going way back, the original claim was along the lines that uploading the mind to a computer was impossible. My challenge was that if someone makes a claim that something is truly impossible, they must be able to provide a theoretical basis for that claim, rather than just saying “I don’t believe it” or “I don’t see how it’s possible”. I can say that traveling faster than light is impossible, and cite the properties of spacetime in special relativity as the basis. Claims of impossibility require the same kind of theoretical basis, or else they risk landing in the same dustbin as other claims of impossibility about things that have since been achieved.

My previous response was to your allegation, as I read it, that there is no empirical evidence supporting CTM and these guys are all just a bunch of philosophers. I showed you that evidence exists. So the goalposts get moved again, and now you want “proof”. No, there is no “proof”, else the hypothesis wouldn’t be controversial. But yes, it’s falsifiable, and it falls into the same category as many other scientific theories and hypotheses that remain unverified. Why? Because, as you say, “it’s complicated”. That doesn’t mean it’s not a solid hypothesis with explanatory power. As Fodor implies in the quoted piece, it’s hard to imagine modern cognitive science without CTM, yet it’s probably far from a complete explanation and we may find, for instance, that some cognitive processes are indisputably computational while others are not.

Here’s another example of mental imagery supportive of CTM. It hinges on the question of whether image processing is influenced by our knowledge and beliefs. We find that when we look at an optical illusion like the Ponzo and Müller-Lyer illusions illustrated, the illusion persists even when we know for a fact that the lines are exactly the same length. Yet the experimental evidence shows that mental image processing is very much subject to such beliefs, showing that the “early” visual processing of perceived images is quite different from the higher-level cognitive processes that operate on mental images, which CTM advocates hold to be computational operations on symbolic representations.

Yes we have an established model of the brain as being a neural network. As someone with a postgrad neuroscience degree, I am aware of this.

But we are talking about something more specific: personal identity.

And “established model” is not the same thing as a promising model, or one for which we can’t say “won’t work”: an established model is one that we can use to make accurate predictions that we would not otherwise be able to make.

What kinds of questions might we ask about personal identity?

Well, the example I sometimes give is what if our brain-to-computer process includes some errors? If we had a theory on personal identity, we would be able to apply it to answer the question of whether personal identity was preserved in this situation.
But we don’t. The normal handwave on this is just to say that we know the human brain changes moment to moment, so we’re supposed to assume that as long as the errors are no greater than such changes, personal identity is preserved. But such an “it stands to reason” answer is scientifically inadequate.

Why, exactly? It happens to all of us, daily: brain cells die, and in significant numbers. And yet the vast majority of us continue as “ourselves.” This would seem to be remarkably convincing empirical evidence that some level of error in duplication does not interrupt the continuity of personal identity.

It is worse than that, I think. It is natural for our minds to contain some errors, because of the curious nature of the brain and body. Over time those errors get more profound, but we don’t notice this because our biological systems compensate for them. Sometimes we notice that we’ve forgotten or misremembered a series of events, or lost a skill that we previously possessed, but we generally accept this and carry on regardless. We adjust.

A mind which has been transferred into a different substrate, such as a computer, need not suffer these changes. If necessary, the uploaded mind could go back and interrogate the data that it possessed in the past, and revert to an earlier state. None of these are functions that a biological human mind is capable of.

If uploading is ever possible, the end result will cease to be human very rapidly, and bear little resemblance to the original after a short period of rapid adjustment. So even if the process could be done, the end result would not closely resemble immortality for a human mind.

Ok. In this wall of text I don’t see any arguments besides your appeal to authority that it’s “scientifically inadequate”. It would be real nice if we could emulate a whole human brain and ask it whether it feels the same as before. But short of that, we have to arrive at our conclusions based on smaller-scale experiments.

Otherwise you could conclude that a manned mission to Mars being possible is not really supported by science, since we haven’t done it. Never mind that we can make spacecraft that carry people, and complex machines that run for years, and we’ve sent spacecraft-shaped objects to Mars before. Any reasonable person I’d consider credible would conclude that you can in fact do it.

Similarly, since we can emulate small pieces of the brain, and we’ve actually measured the information carriage capacity of individual axons (not much), and we know what diffuse glandular signals involve (not much information is carried), we are as certain that the brain is a system made of simpler piecewise components as we are that electrons exist. Thus, if you can emulate the components and build a full-scale machine to do it, you will get the same (or indistinguishably equivalent) answers as before.

Not really sure where your uncertainty is coming from. Are you certain we can build a bridge across the Bering Strait? Are you certain we can build a Moon base?

All the efforts would use pieces of things we’ve demonstrated; they are just too large-scale and too expensive to have been done yet.

Maybe you’re trying to say that because we don’t know how concrete will do in the Arctic conditions where the Bering Strait bridge would have to be, we can’t be confident the first bridge will work? Is that what you mean by an “unknown tolerance for errors”?

That’s nice, but it doesn’t say the bridge will fall down; it just means we don’t know the exact tolerances, or what materials we’d use to make it, at this juncture.

And maybe the first few thousand preserved brains that people try to scan and emulate will all fail. Oh well.

Firstly, as to the “why”: it’s because that’s how science works. We have a model, we use it to make falsifiable predictions, and then, once it’s been repeatedly experimentally verified, that’s when we say it’s an established model or theory.

There is no burden of proof on me to say why it is not correct.

But secondly, your argument is based on our intuition, which is notoriously unreliable. Yes, I “feel like” I am one and the same person who was writing the start of this sentence, and one and the same as the Mijin playing with toys in kindergarten some 35 years ago. But that’s not a model, that’s a feeling, and we can’t conclude anything from that other than the tautology that that’s what the feeling is.

Firstly, I don’t really need an argument, per se, since my point was simply that this is an unsolved philosophical problem at this time. There is no established model that tells us one way or the other, fact.

But secondly, I think you’ve misunderstood the problem if you think asking an emulation of a brain whether it feels the same would solve the problem. It would not.
The premise of the problem, as usually stated, is that the new brain is exactly the same as the old and invariably believes that it is the continuation of the same person.
So asking it, in itself, doesn’t tell us anything.

This is a complete non sequitur. Science is not about trying to proclaim whether given engineering feats are possible or not, and it’s also nothing to do with what I was saying.

I was saying that there is no established model at this time to answer questions regarding personal identity. Again, this should be an indisputable fact: no one in this field would claim the issue is solved.

Again it seems you have parsed what I said as saying it would be impossible to engineer. That’s not my point at all.

That said, if you’re going to bring up the issue of engineering, yes, it’s absolutely a very difficult proposition, one that we may be centuries away from even if computing power eclipses brains within the next 50 years. How do you take an accurate volumetric snapshot of a living brain? Things like fMRI are incredibly crude compared to what would be needed.

But the discussion here is not an engineering one. Most of the discussion begins with the assumption that we make a perfect facsimile of a brain.

I think some people might reject the computational model of mind because they think it means that the brain is a digital computer. This doesn’t seem to be the case; plenty of processes in the brain and body are analog, rather than digital, so those old-time philosophers mentioned by Robert Epstein who compared the brain to hydraulic systems and mechanical systems were not completely wrong. If Penrose and Hameroff are even slightly right then there are quantum phenomena involved as well. But the fact remains that the mind is absolutely, certainly and definitely a phenomenon that involves information processing, so it is a computer: information about the world goes in, behaviour comes out.

The question is whether the mind is only a computer, or whether something else is going on.

But rather more happens than just that. While I’m comfortable calling the brain a machine, technically I don’t think you can refer to the whole shebang as a computer.
And of course you just said that the mind is a computer; that’s really not been established at all.

Explanation needed. Other than “I’m just not certain and we haven’t done enough experiments to eliminate every outlandish possibility…”, what theory suggests a digital emulation isn’t more than good enough to model it?

All the way back to the dawn of digital computers, this problem was worked out. Back in the 1950s, analog computers had existed for decades. Each machine would take in an analog signal, perform some math operations on it, and ultimately output a signal that was used somewhere, whether to keep an airplane flying level or to move a pen on a plotter.

A naive person might think that an analog computer means infinite precision, and that going digital means loss of precision (since the quantized values come in discrete binary steps). As it turns out, this is incorrect. In an actual analog computer, at each and every step of the process, signal out = processed result + noise. Since further stages in the system each successively operate on the previous step, those noise terms cascade and your actual answers go to crap.

Similarly, your actual input signals are always real signal + noise. You need only digitize to better than the SNR, and you will get precisely the same information.
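A quick simulation makes the point; the per-stage noise level, pipeline depth, and quantization step here are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 1000))  # clean test signal

noise_std = 0.01   # assumed noise added by each analog stage
stages = 20        # assumed pipeline depth

# Analog pipeline: every stage adds its own noise, so errors accumulate.
analog = signal.copy()
for _ in range(stages):
    analog = analog + rng.normal(0, noise_std, analog.shape)

# Digital pipeline: quantize once, finer than the noise floor; all later
# processing is then exact, so the one-time quantization error never grows.
step = noise_std / 4
digital = np.round(signal / step) * step

print("analog RMS error: ", np.sqrt(np.mean((analog - signal) ** 2)))   # grows like sqrt(stages)
print("digital RMS error:", np.sqrt(np.mean((digital - signal) ** 2)))  # stays around the step size
```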

The TLDR is that while the human brain may be tied to a body, and the body is a complex analog system, in reality you need not simulate all of it, or even most of it; you merely need to duplicate the results of diffuse chemical signals that enter the brain from the body, and of nerve impulses that come in through the spine. Oh, but you don’t even need to do all of that, since paralyzed people live for years.

And those diffuse chemical signals contain virtually no information; they are just concentrations of various signaling molecules. So emulating them is not difficult, and your emulation need not be any more accurate than the sensor resolution of the actual brain, which is low. You do not need a particle-by-particle simulation of the body.

You could simulate the body by a very simple machine learning algorithm, and an implanted sensor in a volunteer that can measure the signaling molecules. So you measure the signaling molecules from the brain, and the response signaling molecules from the body, and your machine learning algorithm, which is just an array of numbers that starts out random, gets updated based on the error between the predicted response from the body and the actual response from the body. Run that algorithm a year* or so to get convergence, do it on a few different volunteers, and done. I suspect you’ll find that different bodies are arbitrarily inconsistent and it doesn’t need to be very accurate.

*the body is slow to respond, taking hours to update, so the algorithm needs a year in order to get enough training episodes.
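To make that concrete, here is a minimal sketch of the learning loop I’m describing. The dimensions, learning rule, and the stand-in “body” are all assumptions for illustration; the real data would come from the implanted sensors:

```python
import numpy as np

rng = np.random.default_rng(1)
n_brain, n_body = 8, 4   # assumed number of tracked signaling molecules

# The "array of numbers that starts out random":
W = rng.normal(0, 0.01, (n_body, n_brain))
lr = 0.01

# Stand-in for the volunteer's actual body (unknown to the algorithm).
true_W = rng.normal(0, 1, (n_body, n_brain))

for episode in range(10_000):              # each episode = one slow body response
    brain_out = rng.normal(0, 1, n_brain)  # measured brain-side concentrations
    body_resp = true_W @ brain_out         # measured body-side response
    pred = W @ brain_out                   # model's predicted response
    err = pred - body_resp
    W -= lr * np.outer(err, brain_out)     # update on the prediction error

print("worst remaining coefficient error:", np.abs(W - true_W).max())
```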

Firstly, it’s something of a leading question as I never said a computer wouldn’t be able to simulate a mind; we were talking about whether the mind is a computer. This is not the same thing.
I may be able to simulate planetary orbits perfectly on my computer but that doesn’t mean planets are computers.

But secondly the questions you’re asking are about strong AI, which is not the same thing as the personal identity problem. One could believe that strong AI is possible, and also that moving consciousness from one substrate to another is impossible. So as to why it is not trivially true, I can only direct you to the standard arguments against it and leave it at that.

Try billions of years. The inputs to the brain are complex, the outputs are complex (not just “signalling molecules”) and in the middle there’s a neural network of 90 billion neurons (plus all the neurons elsewhere in the body, plus glial and other cells now known to have some involvement in cognition).

Even if this was a feasible way of making a working brain, it would be somewhat tangential to this thread, as we could not guarantee it would be the same as any particular human, let alone the transfer of their mind.

Like always, you seem to have a lot of theoretical knowledge but little idea of how to even try to go about solving a problem in a practical sense.

The procedure is:

  1. Flash-freeze the entire brain volumetrically. Yes, this is tough, but it’s not hopeless; there are a few avenues that might work.

  2. Slice the entire brain into 50 nm sections with an ultramicrotome (ULMT). Scan them all with arrays of electron beams.

  3. In a parallel effort, determine the numerical properties of each synapse for a person with the genetic code of the brain you are copying.

  4. Using straightforward computer vision, convert the scanned slices to a synaptome of the entire brain you are copying.

  5. With a synaptic map and the coefficients for each connection, it would be feasible to emulate it digitally. You use a building full of custom ASIC chips connected by fiber optics to do it, similar in design to chips for this purpose developed by Google, IBM, and Intel in the last few years. As densities improve, it is obvious that your ‘emulator’, which is more efficient than software emulators as it is done directly in hardware, can eventually be as small as the original brain.

Now, yes, there are chemical inputs to the brain from the body, needed to even start it running and to signal successes and failures. That’s where your tiny machine learning model is needed; these inputs are not very sophisticated. In a larger effort, you also need to give the user a virtual body that has touch, spinal reflexes, limbs, vision, and hearing.
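As a toy illustration of step 5, this is what “emulate it digitally” means at small scale. The neuron count, connectivity, and the simple leaky integrate-and-fire rule are stand-in assumptions, not the actual ASIC design:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000   # toy neuron count (a real brain is ~9e10)
p = 0.01   # toy connection probability

# The "synaptome": a sparse matrix of per-synapse coefficients.
weights = rng.normal(0, 0.5, (n, n)) * (rng.random((n, n)) < p)

v = np.zeros(n)                 # membrane potentials
threshold, leak = 1.0, 0.9
spikes = rng.random(n) < 0.05   # seed some initial activity

for step in range(100):
    v = leak * v + weights @ spikes  # leak, then integrate synaptic input
    spikes = v > threshold           # neurons above threshold fire
    v[spikes] = 0.0                  # reset the neurons that fired
    if step % 10 == 0:
        print(step, int(spikes.sum()))
# Whether activity dies out or persists depends entirely on the measured
# coefficients, which is why step 3 matters.
```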

I agree with you that the functionalist view of the mind as being multiply realizable – that is, implementable on any arbitrary suitable substrate like a digital computer – is not trivially true. And certainly there are arguments against it, like the ones you’ve cited. I just want to point out that there are also good and valid arguments for it, and it may very well be true. Furthermore the supporting arguments seem to me to be more logically consistent and tend to come from theorists with cross-disciplinary research experience in pertinent fields, while the dissenting arguments often seem rather naive, like Searle’s Chinese Room which is one of the arguments in your cite.

The Chinese Room argument tries to show that a sufficiently capable mechanistic process can seem to possess understanding, but that it really doesn’t and is just symbol processing. A functionalist would immediately dismiss the whole argument by saying that if a system appears to possess understanding according to any test we can apply to it, then it does have understanding in every meaningful sense of the word.

Searle tries to prejudice us against acknowledging machine intelligence by showing us how it works – that it’s just symbol processing. But CTM theorists tell us that this is how many cognitive functions work, too. He argues that the room’s individual components clearly don’t understand Chinese, and notably the person locked inside and tasked with processing the symbols clearly doesn’t. But equally clearly, the system as a whole does, as manifested by its behavior. As the AI pioneer Marvin Minsky said, when you explain AI, you “explain away”, and it loses its mystique. But the ultimate determinant of understanding and intelligence is an entity’s behavior, not the mysteriousness of its inner workings.

Like always, you present trivialized sketchy outlines of alleged solutions to immensely complex problems, solutions that invariably depend on fantastical science fiction concepts in order to work. And then you declare the problem solved! Many of the things you’ve proposed here, as you’ve done elsewhere, are things that either don’t exist, may never exist, or are in primitive experimental stages. It’s like describing a V-2 rocket in 1944 and saying that this thing can propel itself in outer space, so obviously it can take us to the moon. Well, no, it’s a little more complicated than that. Just a little. Just enough to have made it perhaps the largest engineering undertaking in human history. And that one was fairly straightforward engineering compared to some of the things we’ve been talking about here.

Furthermore, the whole approach is probably wrong. We didn’t develop modern airplanes by inventing more and more intricate ways of taking birds apart, we did it by inventing completely new technologies that did similar things far more effectively. Yes, I believe we will eventually have strong AI that exceeds human intelligence across a broad spectrum of general capability, that it will exhibit consciousness, and that such a machine will eventually be able to contain the entirety of a human mind and its personal identity should we wish to pursue that goal. But this is my speculative belief, for which the evidence is inconclusive, and in any case we’re not going to get there by dissecting brains and trying to replicate their physical biological workings, but along entirely different technological paths.

The network will only be truly accurate if every combination of input state + internal state + output state is a match between original and new.
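Stated formally (my notation): writing the network as a function $f$ that maps an input $x$ and internal state $s$ to a next state $s'$ and an output $y$, the requirement is

\[
f_{\text{new}}(x, s) \;=\; f_{\text{orig}}(x, s) \quad \text{for all } x, s, \qquad \text{where } f(x, s) = (s',\, y).
\]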

I don’t think a learning algorithm currently exists that guarantees we could reach that state even with an exhaustive set of states available to train and test against.

If we did achieve a perfect match, we still don’t have any data available that indicates whether the dup is identical to orig with respect to consciousness and/or internal sense of self, etc.

Umm. Ok. In 1944 the V-2 reached the edge of space. It was abundantly obvious a bigger one could reach the Moon. They had high-altitude planes with pressurized cockpits.

You’d have to be an utter moron to declare in 1944 that lunar travel wasn’t feasible.

Sure, it cost an immense amount. In my outline above, maybe I should have given numbers, but it was implied. You need a building full of custom computer chips to do it. You need thousands of slicing machines working in parallel for years to slice one complete brain sample. You similarly need thousands of million-dollar microscopes running fully autonomously in parallel.

And yeah, there are many snags that would be encountered, requiring teams of engineers to fix them. They call the prospect of emulating just one human being’s brain a “moonshot”-level effort.

Where did I declare the problem solved? I simply pointed out that the evidence is overwhelming that a solution is *possible* within the near to medium future, were tens to hundreds of billions invested and a team of the world’s best assembled to solve it. The evidence is also unambiguous that a solution is possible, eventually.

In the same way that we eventually did invent robots that can fly like birds, though it did take about a century after the first aircraft.

The reason we might resort to doing this instead of fully synthetic sentient AIs is simply that:

a. It’s the only feasible way known to cheat mortality and death. If the world’s hyper-rich were smart, they’d already be funding a mega-project like this…
b. The mess of neural connections in the brain is staggering in complexity, but the rules obeyed by individual neurons are simple. Developing a sentient AI that duplicates this mess of connections procedurally might be beyond the capabilities of teams of humans to develop; it may simply be too complicated.

We can go a hell of a long way farther than “I feel like” the same person. I can pass tests; I know things that only I know. E.g., you can tell me a secret, and then test later to see if I still know it.

Human identity passes lots of perfectly scientific tests of that nature.