Downloading Your Consciousness Just Before Death.

It’s because they don’t have simulated animal products in the simulated vicinity of the simulated star Vega, whence they hail. It’s a simulated unfamiliarity thing.

ETA: It’s perhaps relevant to note that the unfamiliarity is only simulated in the sense that it’s real unfamiliarity felt by simulated beings. Simulated things can have real properties - a picture of a red barn is really red; it’s not some kind of inferior simulated red.

I’m sorry, but that’s just not correct. Yes, Putnam did a complete about-face on the question of functionalism as the basis of CTM, but that was a long time ago, and since then many cognitive science researchers (the late Jerry Fodor among the more prominent) have made great strides in establishing CTM as a foundational basis for understanding cognition. To be clear, Fodor never thought it was a complete explanation for all cognitive phenomena – and conflicting evidence persists about their computational basis – but he correctly thought it would become an important one.

ETA: And Putnam wasn’t really a founder of CTM, he was just a very influential early proponent. Many others carried it forward, then and now.

Yeah, Putnam has been refuted by Chalmers, and Chalmers has been refuted by Dennett, and so on; these concepts are still in their infancy, so don’t declare the computational theory dead yet.

Well, yes, of course. But it would take a more powerful mind than the one simulated to contain that imagination. Just like my phone, a relatively powerful computer, can “imagine into existence” an HP48 calculator, a much simpler computer, by running an emulator for it.

And modern PCs can run CCS64, which emulates an old Commodore 64 computer to extremely high precision - a precision which is achieved in part because it includes emulation of some of the inner workings of the physical chips. The money quote is “99.9% VIC 6566/6567/6569. All imaginable graphics modes and effect should work. The emulation of VIC is pixel exact and considers all strange effects, both known and unknown, as it emulates the inner workings of the VIC chip.” Bolding mine - if you emulate a physical system closely enough you get all side effects of it for free, even if you don’t know what they are or how they work.
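To make that idea concrete with a deliberately tiny sketch (my own toy example in Python, nothing to do with CCS64’s actual code): if you emulate a chip at the level of its internal registers rather than as a black box, its quirks come along automatically.

```python
# Toy example: a hypothetical 4-bit counter chip, emulated at the
# register level rather than as a black box.

class ToyCounterChip:
    def __init__(self):
        self.register = 0  # the chip's internal 4-bit register

    def tick(self):
        # Emulate the internal adder: four bits simply overflow.
        self.register = (self.register + 1) & 0x0F

    def read(self):
        return self.register


chip = ToyCounterChip()
for _ in range(17):
    chip.tick()

# The wrap-around from 15 back to 0 falls out of the register-level
# emulation for free; nobody had to code it as a special case.
print(chip.read())  # -> 1
```

The closer the emulation tracks the physical mechanism, the less you have to anticipate in advance which quirks will matter.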

If the human brain is a physical system, then it can in theory be emulated well enough to produce all its behaviors and effects, including consciousness and the mind. Which quite obviously means that the human mind is simulatable, under the physicalist model.

I believe we have some kind of life force (call it a soul if you want.)

Whether that force is mortal or immortal, we can’t download it. Maybe some day, in some future, we’ll figure out how to do it, but I’m not optimistic about that.

One thought is that the same mental state/conscious state maps to many sets of input (sensory+previous state).

First, let’s make sure I understood your post:
It’s possible to imagine a brain in the exact same physical state but due to an entirely different set of external conditions. There could be an alien on planet X where everything is purple and the wind is always blowing, but the internal brain state that maps to his current sensory inputs (and previous mental state) just so happens to be the exact same state as my brain as I type this message.

You conclude that conscious states can’t be due to the computation (the state) because my typing this message must feel different from what the alien on purple planet X, where the wind is blowing, is feeling. But are we sure they must feel different?
If a creature from just one of those environments compared how it felt to be in the two different environments, it would detect differences; but if we compared the internal state of each creature relative to its respective environment, we could end up with the same absolute state but a different relative state.

If you read the CTM page you linked, you will see that the writer uses the term computation much more broadly than you do (e.g. “and neural network computation”). I don’t think it helps the conversation to insist on a narrow definition tied to syntactic operations on symbolic representations. And as that page points out, the term symbol isn’t even well defined.

I’ve thought about this particular sci-fi notion quite a bit, and I consider it utter bunkum. There are several interrelated reasons for this, but they all come out to one result. You may be able to create an AI, quite possibly even one that is programmed to believe it is/was a human. But that basically has a null value; it doesn’t mean anything, and what you have won’t behave or react as that person would have.

A human being’s consciousness is embodied in the vibrant, if often frail, flesh. The human is all of that flesh, including the brain but not limited to it. Its nerves, muscles, stomach and so forth are all an integral part of the greater whole being. Sometimes, humans being humans, we sacrifice one part for the rest, but we are diminished thereby. But, ignoring that, I am deeply skeptical that the human brain can be simply replicated in a binary format. Hypothetically, an extremely powerful computational device could store all the data describing a human at a given point in time, though I find even that questionable.

However, even giving all of that, you would not, in fact, have the person there. The machine, however good or accurate, is not the human being. Its existence would be completely separable from the actual human life. Whether it is a “good” or “bad” thing wouldn’t be precisely relevant here; it just wouldn’t be the same thing as the human being. It would be as if I had a real gold bar placed on a desk, and a perfect digital image of that gold bar in a computer running in Second Life or whatever. The image might be good, or it might be bad, or it might be indifferent. It is not, however, an actual gold bar. It isn’t a gold bar even if someone in the game values it exactly as much as a real gold bar. The two things are qualitatively different.

Or, to put it another way, I see no moral or philosophical difference between that and, say, cloning. You could clone yourself, creating a genetically identical being. Then you could, say, employ a team of psychologists, acting coaches, and educators to try and give it mental characteristics identical to your own. However, the clone isn’t you; his or her life is qualitatively different. The clone isn’t necessarily good or bad per se, but you’re going to a lot of trouble to try and arbitrarily force it to be the same as you. But the real living creature is naturally something quite different, even though it might share the same genetic code.

Two things here.

Firstly, there is a distinction between a machine capable of information-processing, and a computer. All computers are machines but not all machines are computers (or not only computers).
You can be a 100% Physicalist yet believe that the mind cannot be duplicated in software and/or that such a mind would not be conscious.

But secondly, and more importantly, we just don’t have a good model of what consciousness is yet. That’s the real answer to the OP.
Knowing that the mind is a property of the brain, and mental states correspond to physical states is great and all, but still leaves us a long way short of the kind of model that could answer questions like the OP’s directly.

Personally, my WAG is that subjective experience will become a huge area of science someday, and that an expert in “Subjective Mechanics” will laugh at how crude our understanding was, and at the fact that we could only see two possibilities: consciousness is either copied or moved.
But that’s just my personal feeling. Regardless, in the meantime the answer is that we don’t know.

Neither Fodor’s semantic account nor Chalmers’ counterfactuals really succeed in dispelling the issue raised by Putnam, though. Fodor, at least on my reading, was always somewhat cagey regarding precisely how the symbolic vehicles manipulated in computation acquire their semantic content, but even if there is such an account, I don’t see how it could result in one computation being the ‘correct’ one to associate with a physical system, given that it’s perfectly possible to use that same system for different computations.

So the conclusion as originally posed by Putnam was too strong—not every physical system implements every finite state automaton, but if you can use a physical system to implement one computation, you can use it on the same basis to implement another. That’s fatal to a computational theory of mind; if what computation a system implements is not an objective fact about that system, then what mind a brain implements is not an objective fact about that brain. But then, who or what ‘uses’ my brain to compute my mind (and only my mind)?

Computation is nothing but using a physical system to implement a computable (partial recursive) function. That is, I have an input x, and want to know the value of some f(x) for a computable f, and use manipulations on a physical system (entering x, pushing ‘start’, say) to obtain knowledge about f(x).

This is equivalent (assuming a weak form of the Church-Turing thesis) to a definition in terms of Turing machines, or the lambda calculus, or algorithms. What’s more, we can limit ourselves to computation over finite binary strings, since that’s all a modern computer does. In this case, it’s straightforward to show that the same physical system can be used to implement different computations (see below).

No, the argument is that a given physical system S can’t be said to exclusively implement some computation C, because while an agent A could use S to compute C, an agent B could use S to implement a different C’. Hence, my example: A uses S to implement binary addition, while another agent may use it to implement the function you get when you flip all the bit values, and yet another may interpret the value of the input and/or output bits differently, and so on.
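To make that concrete, here’s a minimal sketch in Python (my own toy illustration with made-up lamp states, not anything from Putnam or Chalmers): the device’s physical behaviour is a fixed table, and only the labelling of lamp states changes, yet under one reading it’s a half adder and under the other it computes (XNOR, OR).

```python
# Toy illustration: one fixed physical device, two equally good readings.
# The device's behaviour is just a table from input lamp states to
# output lamp states; the lamp states themselves are made up here.
DEVICE = {
    ('off', 'off'): ('off', 'off'),
    ('off', 'on'):  ('on',  'off'),
    ('on',  'off'): ('on',  'off'),
    ('on',  'on'):  ('off', 'on'),
}

def run(device, bits_in, interpretation):
    """Use the device under a chosen labelling of lamp states as bits."""
    encode = {bit: lamp for lamp, bit in interpretation.items()}
    physical_out = device[tuple(encode[b] for b in bits_in)]
    return tuple(interpretation[lamp] for lamp in physical_out)

on_means_one = {'on': 1, 'off': 0}   # reading A: the device is a half adder
on_means_zero = {'on': 0, 'off': 1}  # reading B: the same device computes (XNOR, OR)

for a in (0, 1):
    for b in (0, 1):
        print((a, b),
              "A:", run(DEVICE, (a, b), on_means_one),    # (sum, carry) of a + b
              "B:", run(DEVICE, (a, b), on_means_zero))   # (a XNOR b, a OR b)
```

Nothing in the physical table singles out one of the two readings as the computation the device ‘really’ performs; that choice is made by whoever uses it.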

This is a completely general conclusion. ‘Binary addition’ may be taken as a stand-in for the computation that generates a mind; thus, while one might hold that a certain device implements a mind, this is, in fact, dependent on how the system is interpreted. But if whether a device implements a mind is interpretation-dependent, then the CTM doesn’t work: either the process of interpretation is itself computational, in which case it needs to be interpreted in turn, leading to an infinite regress; or the process is not computational, in which case the CTM obviously doesn’t capture everything about the mind, since the mind is capable of interpreting physical systems as computing, as executing operations on symbolic representations, and hence possesses a non-computational capacity.

The CTM is very intuitively seductive: it seems to be capable of building a bridge between the physical (the computer) and the abstract (the computation), and it’s a fair bet that some such bridge is needed to explain the mind. The mistake, however, is to assume that the way this bridge is built is in any way easier to explain than how it’s built for the mind; indeed, the only way that holds up to simple examples of associating distinct computations with one and the same physical system (even concurrently) is to involve mind, and more accurately, interpretation. Thus, things turn out the other way around: mind is needed to explain how physical systems connect to the abstract computations; but then, computation can’t be what underlies mind.

Hmm. Maybe I’m not picturing it correctly, but that sounds the same.

In your example it’s an electronic circuit whose states simultaneously support multiple different function results.

In my example it’s two brains whose states simultaneously support multiple (apparently) different conscious states.

What am I misunderstanding?

I agree with this entirely. ‘Subjective Mechanics’ or ‘Sentience Wrangling’ is likely to become a major field of study in the centuries to come, and will produce results we can barely imagine. There will be ‘wibble’ in our cars, airplanes and spacecraft, not to mention our smartphones or two-way wrist radios, or whatever. And ‘wibble’ will come in a myriad of types and flavours.

But even if this all comes to pass, I still wouldn’t guarantee that the uploading of consciousness will ever be viable or desirable.

I agree with this.
No reason to assume at this point that minds can be transferred to another substrate, or to say what exactly that might entail.

I can’t tell whether this part is sarcasm. But I have not used ‘wibble’ nor suggested that consciousness is some kind of app.

I come from a neuroscience background, and all I am saying is that I think that at some point we will have a descriptive model of consciousness sufficient to unambiguously answer questions about what subjective experience is and how it arises.
And my gut feeling is that this model will require some kind of conceptual jump; that the problem will only become tractable when we frame it in a new way. You can absolutely disagree with this feeling; it’s not based on anything other than the observation that questions on consciousness seem like very different questions to the kind that science has so far managed to tackle well.
But no, I’m not positing a soul, or magic.

Two points.

  1. Fodor was hardly being “cagey”. That symbolic operands possess semantic qualities is the very essence of what computation is. The semantic attributes of symbolic representations are endowed by the very processes that manipulate them. For example, a computer doing image processing endows visual semantics to generic symbols that are otherwise just meaningless ordinary bits and bytes (a toy sketch of this follows after point 2). This is neither mysterious nor magical.

  2. The fact that “if you can use a physical system to implement one computation, you can use it on the same basis to implement another” is in no way “fatal” to the computational theory of mind. In fact, it’s intrinsic to it. It’s closely related to the central CTM principle of multiple realizability; in the same way that a computational system can implement multiple kinds of computation, the computations in one such physical system can be identically realized in another.
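As a toy illustration of point 1 (my own sketch; the byte values are arbitrary): the same bytes acquire ‘textual’ or ‘visual’ semantics purely from the routine that processes them.

```python
# Toy sketch: identical bytes, two different "semantics", depending on
# which process manipulates them. The byte values are arbitrary.
raw = bytes([72, 105, 33, 200, 255, 128, 10, 3])

def as_text(data):
    # A text routine reads the bytes as printable characters.
    return ''.join(chr(b) for b in data if 32 <= b < 127)

def as_image_row(data, threshold=100):
    # An image routine reads the very same bytes as grayscale pixels.
    return ''.join('#' if b > threshold else '.' for b in data)

print(as_text(raw))       # -> Hi!
print(as_image_row(raw))  # -> .#.###..
```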

Somewhat related to #2 is the silicon chip replacement thought experiment, sort of a cognitive-science version of the Ship of Theseus identity problem. We should in theory be able to replace an individual neuron in a human brain with a silicon microchip that replicates all its functions. If the prosthetic works as intended, the individual would experience no change in perception or consciousness. Now continue the process until more and more neurons are replaced with microchips, until the entire brain is composed solely of silicon microchips. At what point, if any, does it stop being an actual brain? At what point would the individual perceive any difference?
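A very rough sketch of the replacement scenario (my own toy code; the ‘neurons’ here are just threshold units, and nothing in it settles what, if anything, the subject would notice):

```python
# Toy sketch of gradual replacement: swap each unit for a functional
# duplicate and check that overall behaviour never changes.
import random

def biological_neuron(inputs):
    # Original unit: fires if the summed input reaches a threshold.
    return 1 if sum(inputs) >= 2 else 0

def silicon_neuron(inputs):
    # Prosthetic unit: different "substrate", identical input-output behaviour.
    return int(sum(inputs) >= 2)

def brain_output(units, stimulus):
    # A deliberately crude "brain": every unit sees the same stimulus.
    return [u(stimulus) for u in units]

brain = [biological_neuron] * 10
stimulus = [random.randint(0, 1) for _ in range(5)]
baseline = brain_output(brain, stimulus)

for i in range(len(brain)):
    brain[i] = silicon_neuron
    # Behaviour is preserved at every step of the replacement, by construction.
    assert brain_output(brain, stimulus) == baseline
```

By construction the swap is behaviour-preserving at every step; whether anything about experience changes along the way is exactly what the thought experiment is asking.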

It may be that I’m misunderstanding you, but the situation you described was one in which the same physical state (of, presumably, identical brains, otherwise one could hardly speak of the same physical state) is produced by different causal factors, presumably leading to the same mental states. That doesn’t seem plausible to me: if the alien’s brain is the same kind as ours, it should also react to the same kinds of causal influences in the same way, and thus, in an environment in which it is subject to stimuli that would put our brain into a state of perceiving purple things and howling winds, should likewise be in a state of perceiving purple things and howling winds.

Or do you mean that the alien’s sensory apparatus is such that the signals it sends are transduced so as to be equivalent, in the case of being subject to purple-stuff-and-howling-wind stimuli, to the signals our senses send in the case of being subject to composing-posts-on-message-boards stimuli?

That, I’d say, is a different question, roughly analogous to the case of a brain in a vat.

What I mean is, rather, a system that’s in the same physical state while supporting different semantic interpretations. That is, a light that’s on can be interpreted as signaling either ‘1’ or ‘0’, and thus different computations are implemented depending on how the system is, in fact, interpreted.

The interpretation being a key component here: nobody interprets brains as implementing minds (the notion would lead to circularity).

It’s at least mysterious in so far as nobody knows how physical systems can come to represent even bits and bytes, much less visual semantics. This is something usually glossed over by proponents of symbolic approaches to computation, but it’s in fact the key question.

That a computer does image processing is not a fact about the computer, i.e. the physical system, but rather about how its symbolic vehicles are implemented. That’s shown by the fact that you can interpret them differently—if you were to claim, for instance, that the system I’ve proposed ‘endows arithmetic semantics to generic symbols’ by implementing binary addition, I can point you to an interpretation that’s as justified as yours, and yet doesn’t have anything to do with addition.

It’s the opposite of multiple realizability, in fact (related to Newman’s objection to Russell’s causal theory of perception). The problem isn’t that there’s no unique computation that can be associated to a system, but rather, that associating any computation whatever to a physical system requires an act of interpretation and thus, the exercise of a mental capacity. Thus, the attempt to explain mind in terms of computation simply collapses in on itself, as one has to appeal to mind to explain computation, first.

I have no objections to a silicon brain being conscious. This isn’t in tension with the fact that consciousness isn’t computational.

At least two people have posted in this thread (posts 46 and 69) arguing that the conscious mind is not the whole of the self, on a purely physical basis with no need for any reference to a “magical soul”.

That should’ve been ‘interpreted’.

Yes, you’ve understood what I was thinking. It seems like the only difference between your circuit example and my brain example is one of scale.

The detection of light signals may be different at step 1, but the signal forwarded from that step loses any connection to the color (as you note in your point, the interpretation is relative). Thus, it could be that many different environments result in the same internal set of signals (beyond the initial detection).

I’m not interested in all the other interpretations, thank you very much; just the one that makes the pixels light up on my screen. Or are you suggesting that my laptop is conscious? It certainly doesn’t rely on my consciousness to ‘interpret’ which computation to choose - the design does that.