Downloading Your Consciousness Just Before Death.

The computation lies in how the pixels on your screen are interpreted. Saying that an interpretation makes them light up is nonsensical. The pixels play the role of the lights on the device I proposed, which, if they’re interpreted differently, lead to different computations being performed.

I don’t think I get it, I’m afraid. In my example, different interpreters will be in different brain states, as they will have different beliefs—say, one believing that ‘light on means 1’, and another, ‘light on means 0’. So I don’t see the connection.

But the same lights still light up, so the other interpretations are irrelevant.
Putnam said ‘every ordinary open system realizes every abstract finite automaton’. I pointed out that this is irrelevant in a laptop computer, because only one set of symbols lights up. By extension, it is irrelevant inside your head too.

It’s not a matter of which lights light up, but of what those lights are interpreted as. If you interpret a light being lit as meaning 1 vs. meaning 0, what’s being computed will differ.

This is true - the human brain/mind interface is remarkably unreliable - that’s why people see so many UFOs.
Dennett’s interpretation seems to describe this best: a leaky sieve that somehow produces works of genius.

No, proponents of CTM aren’t “glossing over” anything. The computational proposition is solely about whether or not cognitive processes are essentially symbolic and hence subject to multiple realizability, say, on digital computers. For example, the essence of the debate about how the mind processes mental images is whether it’s symbolic-representational in just this way, or whether it’s something that must involve the visual cortex – say, producing some kind of analog signal that is then reprocessed through the visual cortex. There is lots of experimental evidence for the symbolic-representational interpretation which supports the computational model, primarily based on very significant empirically observed fundamental differences between mental images and visual ones.

FTR, there are also papers reporting conflicting results that have led to a continuing debate. Thus there are researchers who argue against the symbolic-representational model of mental image processing, Stephen Kosslyn being among the more prominent. My fair and balanced position on this matter is that these individuals are morons. :smiley:

This seems to me rather incoherent, but perhaps I’m not understanding it. It sounds a lot like the homunculus fallacy.

No, the whole intent of the silicon chip replacement thought experiment is that the brain ultimately becomes composed of nothing but computational components. If your argument is that something more profound has happened, well, I would agree that something very profound has happened that is not present in the individual computational modules, but that “something” is called “emergent properties of complexity”.

Which cognitive processes?

What is a symbol?

(note: your linked page reminds us that the terms computation and symbol are not well defined).

A “symbol” is a token – an abstract unit of information – that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things – the semantics – is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.
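As a rough illustration (my own, purely to make the point vivid): one and the same bit pattern ‘means’ entirely different things depending only on which operations are run over it.

import struct

# The same four bytes - in themselves just a token, with no intrinsic meaning.
raw = b'\x42\x28\x00\x00'

# Handed to integer operations (read as a big-endian unsigned int):
as_int = struct.unpack('>I', raw)[0]     # 1109917696

# Handed to floating-point operations (read as a big-endian IEEE 754 float):
as_float = struct.unpack('>f', raw)[0]   # 42.0

print(as_int, as_float)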

“Which cognitive processes?” Probably many, but perhaps not all. Mental image processing is a frequently cited basis of discussion. Here the distinction is whether we remember images in the visual manner of a Polaroid photograph, where they have to be processed through the visual cortex, or whether we render them symbolically, like a JPEG file, and subsequently process them via what Fodor has called “the language of thought”.
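A toy caricature of the two alternatives (illustrative only; nobody thinks the brain literally stores Python lists): a depictive, ‘Polaroid-like’ format versus a descriptive, sentence-like one that is processed by operating on its symbolic structure.

# Depictive ('Polaroid-like'): a picture-preserving grid of intensities.
bitmap = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]

# Descriptive ('language-of-thought-like'): structured symbolic propositions.
propositions = [
    ("shape",   "obj1", "cross"),
    ("left_of", "obj1", "obj2"),
]

# The two formats invite different operations: scanning cells vs. matching
# symbolic constituents.
def is_left_of(props, a, b):
    return ("left_of", a, b) in props

lit_cells = sum(cell for row in bitmap for cell in row)
print(lit_cells, is_left_of(propositions, "obj1", "obj2"))   # 5 True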

This excerpt from Fodor’s The Mind Doesn’t Work That Way might be of interest:
The cognitive science that started fifty years or so ago more or less explicitly had as its defining project to examine the theory—largely owing to Turing—that cognitive mental processes are operations defined on syntactically structured mental representations that are much like sentences. The proposal was to use the hypothesis that mental representations are language-like to explain certain pervasive and characteristic properties of cognitive states and processes; for example, that the former are productive and systematic, and that the latter are, by and large, truth preserving. Roughly, the systematicity and productivity of thought were supposed to trace back to the compositionality of mental representations, which in turn depends on the constituent structure of their syntax. The tendency of mental processes to preserve truth was to be explained by the hypothesis that they are computations, where, by stipulation a computation is a causal process that is syntactically driven.

I think that the attempt to explain the productivity and systematicity of mental states by appealing to the compositionality of mental representations has been something like an unmitigated success …

So what? For a given brain with a given physical state, the way that the lights are interpreted is fixed - it’s determined based on the cognitive and physical state of the brain and how all the dominoes in there are hitting each other. The fact that a different brain or the same brain in a different cognitive state might interpret things differently doesn’t in the slightest imply that computation can’t take place - and it doesn’t imply that the computation/cognition/whateveryoucallit can’t be copied or duplicated.

Seriously, I’m a computer programmer, and calculations are context-sensitive all the time. When you click your mouse button, that’s the same action, the same event, but the way the computer reacts to that event varies wildly depending on the computer’s state - where it thinks the mouse is, what programs it’s running, how its internal state maps those programs’ ‘windows’ to the clickable area, how the programs choose to react to mouse clicks. It’s all wildly variable and all entirely programmable.
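Here’s a stripped-down sketch of what I mean, in Python (hypothetical window layout and handler names, obviously not a real GUI toolkit):

# The same click event (same coordinates, same physical action) gets handled
# differently depending purely on the program's internal state.

class Desktop:
    def __init__(self):
        # Which screen region belongs to which handler is just state.
        self.windows = [
            {"name": "editor",  "rect": (0, 0, 800, 600),    "on_click": lambda: "move cursor"},
            {"name": "browser", "rect": (800, 0, 1600, 600), "on_click": lambda: "follow link"},
        ]

    def click(self, x, y):
        for w in self.windows:                  # topmost window first
            x0, y0, x1, y1 = w["rect"]
            if x0 <= x < x1 and y0 <= y < y1:
                return w["on_click"]()
        return "ignored"

desk = Desktop()
print(desk.click(100, 100))    # 'move cursor': the editor owns that region

# Change nothing but internal state: maximise the browser and raise it.
desk.windows[1]["rect"] = (0, 0, 1600, 600)
desk.windows.reverse()
print(desk.click(100, 100))    # 'follow link': same event, different reaction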

Because it’s a theory of mind (which is based on the brain), let’s try to make it more concrete: are these symbols like a bit in a computer?

A neuron firing once?
The rate a neuron fires over some time period?
The modification of synaptic activity by glial cells?

If so, is every component of the brain a symbol?

I think you’re missing the point here. Cognitive science strives to provide a functional – or, in computer science terms, an “architectural” – rather than a neurophysiological account of cognition. The underlying biological minutiae are obviously important in many respects, but completely irrelevant at this level.

How would you know if the theory is correct or has any value if you don’t connect it to reality?

How can you confirm whether the brain uses symbolic processing for a specific function if you can’t map the elements of the theory to the brain?

Multiple realizability has nothing to do with symbols, but with functional properties (or states, or events). The functional property of ‘being a pump’ is multiply realizable—say, in a mechanical device, or a heart. The important notion is that the behavior, the ways the states of a system connect, must be invariant between different realizations.
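In software terms, if that helps (just an analogy of my own): the role is fixed by the behaviour demanded of whatever fills it, and any realizer with that behaviour will do.

from typing import Protocol

class Pump(Protocol):
    def pump(self, volume_ml: float) -> float:
        """Move a volume of fluid and report how much was moved."""
        ...

class MechanicalPump:
    def pump(self, volume_ml: float) -> float:
        return volume_ml          # piston and valves

class Heart:
    def pump(self, volume_ml: float) -> float:
        return volume_ml          # muscle contraction

def circulate(p: Pump) -> float:
    # Only the behaviour matters here, not what realizes it.
    return p.pump(70.0)

assert circulate(MechanicalPump()) == circulate(Heart()) == 70.0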

That you can use computation to model aspects of the brain’s behavior doesn’t entail that what the brain does is computation any more than that you can tell a story about how a brain does what it does entails that the brain’s function is story-telling. It’s a confusion of the map for the territory, like saying that because we can draw maps of some terrain, the terrain itself must just be some huge, extremely detailed map. But it’s not: we can merely use one as a model of the other.

That’s not a bad intuition. It exposes a similar problem for the computationalist view as the homunculus exposes for naive representationalist theories of vision—namely, a vicious regress, where you’d have to complete an infinite tower of interpretational agencies in order to fix what the system at the ‘base level’ computes.

I agree that that’s the point the thought experiment seeks to build intuition for; it’s just that it fails: you don’t replace the neurons with computations, you replace them with machines. Again, think about the map/territory analogy: say you have two territories described by the same map; then you can replace bits of one with bits of the other and still have an isomorphic map describing the resulting mashup. But that doesn’t tell you that, therefore, maps tell you all there is to know about the territory.

Ah yes, here comes the usual gambit: we can’t actually tell what happens, but we’re pretty sure that if you smoosh just enough of it together, consciousness will just spark up somehow.

Wouldn’t it be nice if that were actually possible! But of course, syntax necessarily underdetermines semantics (and radically so). All that syntax gives us is a set of relations between symbols—rules of replacing them, and so on. But (as pointed out by Newman) a set of relations can’t fix anything about the things standing in those relations other than how many of them there (minimally) need to be.

However, you still haven’t really engaged with the argument I made. I’ll give a more fully worked out version below, and I’d appreciate it if you could tell me what you consider to be wrong with it. It’s somewhat disconcerting to have proposed an argument for a position, and then, for nearly sixty posts, get told how wrong you are without anybody even bothering to consider the argument.

Well, that’s not quite what I claimed (but I try to state the argument more clearly below). However, you’re already conceding the most important element of my stance—that you need an external agency to fix what a system computes. Let’s investigate where that leads.

Either the way the external agency fixes the computation is itself computational (say, taking as input a state of a physical system, and producing as output some abstract object corresponding to a computational state), or it’s not. In the latter case, computationalism is patently false, so we can ignore that possibility.

So suppose that a computation is performed in order to decide what the original system computes. Call that computation M. But then, as we had surmised, computations rely on some further agency fixing them to be definite. So, in order to ensure that (say) a brain computes M, which ensures that the original object computes whatever the owner of the brain considers it to compute, there must be some agency itself fixing that the brain computes M. Again, it can do so computationally, or not. Again, only the first case is of interest.

So suppose the further agency performs some computation M’ in order to fix the brain’s computing of M. But then, we need some further agency to fix that it does, in fact, compute M’. And, I hope, you now see the issue: if a computation depends on external facts to be fixed, those facts either have to be non-computational themselves, or we are led to an infinite regress. In either case, computationalism is false.
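Schematically, in toy Python (the helper interpreter_of is a made-up stand-in for ‘whatever agency does the fixing’, nothing more):

def interpreter_of(system):
    # Stand-in for 'whatever further agency fixes what `system` computes'.
    return ("interpreter of", system)

def what_is_computed_by(system):
    # If the fixing is itself a computation, it is carried out by some further
    # system, whose own computation then needs fixing in turn, and so on.
    return what_is_computed_by(interpreter_of(system))

# what_is_computed_by("device D")   # never bottoms out (hits the recursion limit)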

But I think there’s still some confusion about the original argument I made (if there weren’t, you’d think at least one of those convinced it’s false would have pointed out its flaws in the sixty posts since).

So suppose you have a device, D, consisting of a box that has, on its front, four switches in a square array, and three lights. Picture it like this:



 -----------------------------
|                             |
|  (S11)(S12)                 |
|                (L1)(L2)(L3) |
|  (S21)(S22)                 |
|                             |
 -----------------------------


Here, S[sub]11[/sub] - S[sub]22[/sub] are the four switches, and L[sub]1[/sub]-L[sub]3[/sub] are the three lights.

The switches can either be in the state ‘up’ or ‘down’, and the lights either be ‘on’ or ‘off’. If you flip the switches, the lights change.

How do you figure out what the system computes? Well, you’d have to make a guess: say, you guess that ‘up’ means ‘1’, ‘down’ means ‘0’, ‘on’ means ‘1’, and ‘off’ means ‘0’. Furthermore, you suppose that each of the rows of switches, as well as the row of lights, represents a binary number (S[sub]11[/sub] being the 2[sup]1[/sup], and S[sub]12[/sub] the 2[sup]0[/sup]-valued bit, and analogous for the others). Call the number represented by (S[sub]11[/sub], S[sub]12[/sub]) x[sub]1[/sub], and the number represented by (S[sub]21[/sub], S[sub]22[/sub]) x[sub]2[/sub]. You then set out to discover what function f(x[sub]1[/sub], x[sub]2[/sub]) is implemented by your device. So, you note down the behavior:



x1   x2   |   f(x1, x2)
-----------------------
0    0    |       0
0    1    |       1
0    2    |       2
0    3    |       3
1    0    |       1
1    1    |       2
1    2    |       3
1    3    |       4
2    0    |       2
2    1    |       3
2    2    |       4
2    3    |       5
3    0    |       3
3    1    |       4
3    2    |       5
3    3    |       6


Thus, you conclude that the system performs binary addition. You’re justified in that, of course: if you didn’t know what, say, the sum of 2 and 3 is, you could use the device to find out. This is exactly how we use computers to compute anything.

But of course, your interpretation is quite arbitrary. So now I tell you, no, you got it wrong: what it actually computes is the following:



x1   x2   |  f'(x1, x2)
-----------------------
0    0    |       0
0    2    |       4
0    1    |       2
0    3    |       6
2    0    |       4
2    2    |       2
2    1    |       6
2    3    |       1
1    0    |       2
1    2    |       6
1    1    |       1
1    3    |       5
3    0    |       6
3    2    |       1
3    1    |       5
3    3    |       3


Now, how on Earth do I reach that conclusion? Well, simple: I kept up the identification of ‘up’ and ‘on’ to mean ‘1’ (and so on), but simply took the rightmost bit to represent the highest value (i.e. (L3) now represents 2[sup]2[/sup], and likewise for the others). So, for instance, the switch state
(S[sub]11[/sub] = ‘up’, S[sub]12[/sub] = ‘down’) is interpreted as (1, 0), which however represents 1×2[sup]0[/sup] + 0×2[sup]1[/sup] = 1, instead of 1×2[sup]1[/sup] + 0×2[sup]0[/sup] = 2.

I haven’t changed anything about the device; merely how it’s interpreted. That’s sufficient: I can use the system to compute f’(x[sub]1[/sub], x[sub]2[/sub]) just as well as you can use it to compute f(x[sub]1[/sub], x[sub]2[/sub]).

This is a completely general conclusion: I can introduce changes of interpretation for any computational system you claim computes some function f to use it to compute a different f’ in just the same manner.
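If it helps to see this mechanically, here’s a toy Python rendering of the example (my own sketch; the device function below just hard-codes the switch-to-light behaviour we both observe):

from itertools import product

def device(s11, s12, s21, s22):
    # The physical device: four switch positions in (1 = 'up'), three lights
    # out (1 = 'on'). This behaviour is fixed once and for all.
    total = (2 * s11 + s12) + (2 * s21 + s22)
    return [(total >> 2) & 1, (total >> 1) & 1, total & 1]   # [L1, L2, L3]

def read_left_bit_highest(bits):    # your interpretation (first table)
    return sum(b << i for i, b in enumerate(reversed(bits)))

def read_right_bit_highest(bits):   # my interpretation (second table)
    return sum(b << i for i, b in enumerate(bits))

f_prime_table = {}
for s11, s12, s21, s22 in product((0, 1), repeat=4):
    lights = device(s11, s12, s21, s22)
    # Your reading of the physical run yields binary addition:
    x1, x2 = read_left_bit_highest([s11, s12]), read_left_bit_highest([s21, s22])
    assert read_left_bit_highest(lights) == x1 + x2
    # My reading of the very same run yields f', which is not addition:
    y1, y2 = read_right_bit_highest([s11, s12]), read_right_bit_highest([s21, s22])
    f_prime_table[(y1, y2)] = read_right_bit_highest(lights)

print(f_prime_table)   # reproduces the second table, e.g. f_prime_table[(2, 3)] == 1

Nothing about device changes between the two readings; the difference lives entirely in the two read_* conventions, i.e. in the interpreter.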

Consequently, what a system computes isn’t inherent to the system, but is only fixed upon interpreting it—taking certain of its physical states to have symbolic value, and fixing what the symbols mean.

If, thus, mind is due to computation, brains would have to be interpreted in the right way to produce minds. What mind a brain implements, and whether it implements a mind at all, is then not decided by the properties of the brain alone; it would have to be a relational property, not an intrinsic one.

That’s a bullet I can see somebody bite, but it gets worse from there: for if how we fix the computation of the device D is itself computational, say, realized by some computation M, then our brains would have to be interpreted in the right way to compute M. But then, we are already knee deep in a vicious regress that never bottoms out.

Consequently, the only coherent way to fix computations is via non-computational means. (I should point out that I don’t mean hypercomputation or the like here: the same argument applies in such a case.) But then, the computational theory is right out of the window.

Here’s a page from the Stanford Encyclopedia about this problem.
https://plato.stanford.edu/entries/computation-physicalsystems/
Fascinating stuff, but as you can see, the debate has moved on a long way from Putnam’s ideas. Certain computations are ontologically privileged, and they are the only ones we should be interested in.

And there is more.

If I have two laptops, and they both show the same symbols on the screen as a result of pressing the same keys on the keyboard, I don’t care what route the computation has taken, so long as the answers are consistent. Perhaps some of the more complex routes to the end result cause more waste heat to be emitted by the processor. But if the process can be made more efficient, that is a benefit.
Similarly, an analog of a human mind on a computer might use a simpler method of computation than the biological substrate, but if the end result (the symbols on the laptop screen) is the same, then the mind can be said to be successfully modelled. The important factor is the end result of the computation, not the computation itself. If the simulation behaves in the same way as the original, then the simulation is successful.
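To put it in programming terms (a toy example of my own, not a claim about how minds are actually implemented): two programs that take entirely different internal routes count as equivalent for this purpose so long as their outputs agree, and checking that agreement is itself a mechanical matter.

def fib_recursive(n):
    # Wasteful route: exponential time, plenty of 'waste heat'.
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Efficient route: linear time.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A purely mechanical check of behavioural equivalence over a test range;
# no human observer is needed for this part.
assert all(fib_recursive(n) == fib_iterative(n) for n in range(20))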

Ah, you might say: only another human could determine whether the simulation was accurate or realistic. I don’t think that is the case. A sufficiently well-programmed computer could observe the behaviours (the outputs) of the simulated mind, and compare those behaviours with the behaviours of the original. If the computerised monitoring system were sufficiently well programmed, it could detect differences in the simulation’s behaviour much better than a human could. Researchers are already developing programs that do this sort of thing, to detect criminal and terrorist behaviour, for instance.

In short: you don’t need a human consciousness observing the results of a computation to discriminate between a good simulation and a bad one, so I do not see the necessity for a non-computational element at any stage in the process. Given a few thousand years of technological development, we could all exist as programs monitoring each other’s behaviour for signs of inconsistency. Another reason not to opt for downloading/uploading.

The SEP is a good first resource to get an overview about an unfamiliar topic. If you want to dive a little deeper into the matter, I’d suggest reading the review article by Godfrey-Smith about ‘Triviality arguments against functionalism’, which considers Putnam’s early attack and more modern developments.

The entire section 3 of the SEP-article, by the way, is occupied with worries such as the one I’m presenting, so it’s very much a current issue.

OK, so what makes a computation ontologically privileged? And which of the two I presented above is the right one? Or is it any of the many others that can be obtained in a similar manner?

That’s not the issue at all, though. The two laptops will show exactly the same symbols; the question is how these symbols are interpreted. Only there does what has been computed get fixed.

Really, you should try to work through the example I gave above, that will make it more clear.

Although I believe I understand your example, that the same system can be said to compute multiple functions, I’m not certain about the conclusion.

Thoughts:
Your box example (and my brain example) are, at their core, just input-to-output mappings. We want to attach names to the set of mappings (e.g. binary addition), which introduces the issue of an external agent being required to choose the specific name for what is being computed.

This is where I’m not sure about your conclusion. I believe your conclusion is that consciousness can’t be said to be created by a specific computation because that very computation is also the computation used for function XYZ, just like your box computes multiple functions simultaneously because they share a mapping of input to output (if you map your problem’s input to the input and output to the output correctly).

I believe this is the same argument you mentioned one other time that if we map inputs and outputs properly then a rock can perform any computation. But in reality the computation just got pushed to the input and output mapping.
So, in summary:
1 - The mapping of function input to the machinery’s input, and of the machinery’s output to function output, has computation embedded in it that is external to your machine (see the sketch after this list). You would need to consider the entire system. Or, if no mappings were required, then we just happen to have given multiple names to the same function.

2 - You state that the interpretation requires an external agent, but that is only to provide the additional computations embedded in the mappings into and out of your machine system. If we consider the entire system/function, is there still a need for an external agent? Isn’t the external agent just giving a name to the function or validating that it’s working?

3 - Even if we consider the entire system, there are still common computations that can serve many different purposes. In a beetle there could be a function X that takes 8 inputs and spits out 3 outputs and serves some larger process, and in a fish that exact same mapping could be applied in a different area of the brain, serving a different larger purpose. Is it really a problem if the same conscious state can arise in many different environments (this is my alien purple world example)? The beetle and the fish share some mappings; why is consciousness so special that the mappings can’t be shared in different environments?
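To make point 1 concrete, here is the ‘rock’ move in toy Python (my own sketch): allow arbitrarily clever input/output mappings and even a trivial ‘device’ gets credited with addition, but the work has plainly moved into the mappings.

# A deliberately trivial 'device': it just sits in whatever state it's put into.
def rock(state):
    return state

# The real work is smuggled into the encoding of inputs and outputs:
def encode_inputs(x1, x2):
    return x1 + x2            # the addition actually happens here

def decode_output(state):
    return state

def rock_adds(x1, x2):
    return decode_output(rock(encode_inputs(x1, x2)))

assert rock_adds(2, 3) == 5   # the 'computation' lives in the mappings, not the rock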

Point #1:
There has been a lot of research on this topic in the last 20 years. If you are truly interested in understanding how the brain handles mental imagery, you should read this research.

The evidence is pretty clear that the visual processing areas are activated for mental imagery in the same way that sensory images activate those areas. The stronger the mental image (as measured by tasks), the stronger the activation. In addition, tasks related to processing mental imagery have shown that the content of novel constructed images is used for higher-level processing.

If you understand what the neuroscientists are finding out about the brain in general, it does make sense: the brain seems to process sensory input forward while also sending signals about expected future state backwards. It seems plausible (not proven, but logical) that the mental imagery function would piggyback on that same mechanism, thus efficiently making use of machinery that is already in place.

This is not to say that other forms of processing aren’t also in use during mental imagery tasks, but your statement (and previous statements on this topic) don’t reflect current research from many different researchers.

Point #2:
If you can’t state how a symbol is represented in the brain and how it maps to neural processing, how can you state with 100% certainty (e.g. calling people who disagree with your position morons) that imagery is being processed symbolically?

Well, no, that conclusion is true only if you assume the need for the aforementioned agency, or interpreter, as a prerequisite for computation, a notion that I rejected from the beginning – a notion that, if true, would undermine pretty much the whole of CTM and most of modern cognitive science along with it.

I read your example but I don’t see it as supporting that notion in any way, let alone being a “completely general” conclusion. The problem with your example is that it’s a kind of sleight-of-hand where you sneakily change the implied definition of the “computation” that the box is supposed to be performing. The box has only switches and lights. It knows nothing about numbers. So the “computation” it’s doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent. The fact remains that the fundamental thing that the box is doing doesn’t require an observer to interpret, and neither does any computational system. The difference with real computational systems, including the brain, is that there is a very rich set of semantics associated with their inputs and outputs which makes it essentially impossible to play the little game of switcheroo that you were engaging in.

FTR, I don’t claim to have solved the problem of consciousness. However, as you well know, emergent properties are a real thing, and if one is hesitant to say “that’s why we have consciousness”, we can at least say that emergent properties are a very good candidate explanation for attributes like that, which appear to exist on a continuum in different intelligent species to an extent that empirically appears related to the level of intelligence. They are a particularly good candidate in view of the fact that there is not even remotely any other plausible explanation, other than “mystical soul” or “magic”.

I have, and that’s why I hold the position I do. But thanks for the suggestion.

Because that level of biological minutiae is irrelevant to a functional description of cognition, as I already said. I can be 100% certain of what my computer will do when I type a particular command, without having any understanding of how its logic gates are implemented.

BTW, the “moron” comment was intended to be tongue-in-cheek. I thought it was obvious.

Understood, but it still indicates the level of certainty you have in your position; that was my point, not concern about the word itself.

Then how did you previously link your functional description of cognition to what is NOT going on inside the biology (no visual-area involvement, no analog signals)? You seem to want to make statements sometimes about what is going on in the biology, and at other times to state that it’s irrelevant.

Help me understand your position, when is the biology relevant for understanding human cognition and when is it not relevant?

Isn’t it putting the cart before the horse to decide in advance that the machinery is doing X without understanding how the machine works?

How can CTM ever make a prediction about the system if you can’t ground it in reality?

What predictions does CTM make that can be used to determine if mental imagery is symbolic or not?

Again, how can you have a theory that doesn’t even have a concrete understanding of what is a symbol and what isn’t a symbol?