Quote:
Originally Posted by wolfpup
No, proponents of CTM aren't "glossing over" anything. The computational proposition is solely about whether or not cognitive processes are essentially symbolic and hence subject to multiple realizability, say, on digital computers.

Multiple realizability has nothing to do with symbols, but with functional properties (or states, or events). The functional property of 'being a pump' is multiply realizable: say, in a mechanical device, or a heart. The important notion is that the behavior, the way the states of a system connect, must be invariant between different realizations.
Quote:
There is lots of experimental evidence for the symbolic-representational interpretation which supports the computational model, primarily based on very significant empirically observed fundamental differences between mental images and visual ones.

That you can use computation to model aspects of the brain's behavior doesn't entail that what the brain does is computation any more than that you can tell a story about how a brain does what it does entails that the brain's function is storytelling. It's a confusion of the map for the territory, like saying that because we can draw maps of some terrain, the terrain itself must just be some huge, extremely detailed map. But it's not: we can merely use one as a model of the other.
Quote:
This seems to me rather incoherent, but perhaps I'm not understanding it. It sounds a lot like the homunculus fallacy.

That's not a bad intuition. It exposes a similar problem for the computationalist view as the homunculus exposes for naive representationalist theories of vision: namely, a vicious regress, where you'd have to complete an infinite tower of interpretational agencies in order to fix what the system at the 'base level' computes.
Quote:
No, the whole intent of the silicon chip replacement thought experiment is that the brain ultimately becomes comprised of nothing but computational components.

I agree that that's the point the thought experiment seeks to build intuition for; it's just that it fails: you don't replace the neurons with computations, you replace them with machines. Again, think about the map/territory analogy: say you have two territories described by the same map. Then you can replace bits of one by bits of the other, and still have an isomorphic map describing the resulting mashup; but that doesn't tell you that, therefore, maps tell you all there is to know about the territory.
Quote:
If your argument is that something more profound has happened, well, I would agree that something very profound has happened that is not present in the individual computational modules, but that "something" is called "emergent properties of complexity".

Ah yes, here comes the usual gambit: we can't actually tell what happens, but we're pretty sure that if you smoosh just enough of it together, consciousness will just spark up somehow.
Quote:
Originally Posted by wolfpup
A "symbol" is a token (an abstract unit of information) that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things (the semantics) is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.

Wouldn't it be nice if that were actually possible! But of course, syntax necessarily underdetermines semantics (and radically so). All that syntax gives us is a set of relations between symbols: rules for replacing them, and so on. But (as pointed out by Newman) a set of relations can't fix anything about the things standing in those relations other than how many of them there (minimally) need to be.
However, you still haven't really engaged with the argument I made. I'll give a more fully worked out version below, and I'd appreciate it if you could tell me what you consider to be wrong with it. It's somewhat disconcerting to have proposed an argument for a position, and then, for nearly sixty posts, get told how wrong you are without anybody even bothering to consider the argument.
Quote:
Originally Posted by begbert2
So what? For a given brain with a given physical state, the way that the lights are interpreted is fixed  it's determined based on the cognitive and physical state of the brain and how all the dominoes in there are hitting each other. The fact that a different brain or the same brain in a different cognitive state might interpret things differently doesn't in the slightest imply that computation can't take place  and it doesn't imply that the computation/cognition/whateveryoucallit can't be copied or duplicated.

Well, that's not quite what I claimed (but I try to make the argument more clearly below). However, you're already conceding the most important element of my stance: that you need an external agency to fix what a system computes. Let's investigate where that leads.
Either, the way the external agency fixes the computation is itself computational (say, taking as input a state of a physical system, and producing as output some abstract object corresponding to a computational state), or it's not. In the latter case, computationalism is patently false, so we can ignore that.
So suppose that a computation is performed in order to decide what the original system computes. Call that computation M. But then, as we had surmised, computations rely on some further agency fixing them to be definite. So, in order to ensure that (say) a brain computes M, which ensures that the original object computes whatever the owner of the brain considers it to compute, there must be some agency itself fixing that the brain computes M. Again, it can do so computationally, or not. Again, only the first case is of interest.
So suppose the further agency performs some computation M' in order to fix the brain's computing of M. But then, we need some further agency to fix that it does, in fact, compute M'. And, I hope, you now see the issue: if a computation depends on external facts to be fixed, these facts either have to be noncomputational themselves, or we are led into an infinite regress. In either case, computationalism is false.
But I think there's still some confusion about the original argument I made (if there weren't, you'd think someone among those convinced it's false would have pointed out its flaws in the sixty posts since).
So suppose you have a device, D, consisting of a box that has, on its front, four switches in a square array, and three lights. Picture it like this:
Code:

 
+--------------------------+
|    (S11)       (S12)     |
|  (L1)    (L2)    (L3)    |
|    (S21)       (S22)     |
+--------------------------+
 

Here, S_{11} through S_{22} are the four switches, and L_{1} through L_{3} are the three lights.
The switches can either be in the state 'up' or 'down', and the lights either be 'on' or 'off'. If you flip the switches, the lights change.
How do you figure out what the system computes? Well, you'd have to make a guess: say, you guess that 'up' means '1', 'down' means '0', 'on' means '1', and 'off' means '0'. Furthermore, you suppose that each of the rows of switches, as well as the row of lights, represents a binary number (S_{11} being the 2^{1}-valued and S_{12} the 2^{0}-valued bit, and analogously for the others). Call the number represented by (S_{11}, S_{12}) x_{1}, and the number represented by (S_{21}, S_{22}) x_{2}. You then set out to discover which function f(x_{1}, x_{2}) is implemented by your device. So, you note down the behavior:
Code:
x1 x2  f(x1, x2)

0 0  0
0 1  1
0 2  2
0 3  3
1 0  1
1 1  2
1 2  3
1 3  4
2 0  2
2 1  3
2 2  4
2 3  5
3 0  3
3 1  4
3 2  5
3 3  6
Thus, you conclude that the system performs binary addition. You're justified in that, of course: if you didn't know what, say, the sum of 2 and 3 is, you could use the device to find out. This is exactly how we use computers to compute anything.
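To make the bookkeeping concrete, here's a small sketch in Python (a hypothetical model of my device, not anything official): the physics is nothing but a fixed mapping from switch states to light states, and 'computing addition' only appears once the standard interpretation is layered on top of it.

```python
from itertools import product

# Hypothetical model of the device D: 'up'/'on' are 1, 'down'/'off' are 0.
# The physics is just a fixed mapping from switch states to light states,
# wired so the standard left-to-right binary reading makes it an adder.
def device(s11, s12, s21, s22):
    total = (s11 * 2 + s12) + (s21 * 2 + s22)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)  # (L1, L2, L3)

# The standard interpretation: leftmost bit is most significant.
def interpret(bits):
    v = 0
    for b in bits:
        v = v * 2 + b
    return v

# Tabulating the behavior reproduces the table above: f(x1, x2) = x1 + x2.
for s in product([0, 1], repeat=4):
    x1, x2 = interpret(s[:2]), interpret(s[2:])
    assert interpret(device(*s)) == x1 + x2
```

For instance, setting the switches to (up, down, up, up), i.e. 2 and 3, lights up (on, off, on), which the standard reading decodes as 5.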
But of course, your interpretation is quite arbitrary. So now I tell you, no, you got it wrong: what it actually computes is the following:
Code:
x1 x2  f'(x1, x2)

0 0  0
0 2  4
0 1  2
0 3  6
2 0  4
2 2  2
2 1  6
2 3  1
1 0  2
1 2  6
1 1  1
1 3  5
3 0  6
3 2  1
3 1  5
3 3  3
Now, how on Earth do I reach that conclusion? Well, simple: I kept up the identification of 'up' and 'on' with '1' (and so on), but simply took the rightmost bit to represent the highest value (i.e., L_{3} now represents 2^{2}, and likewise for the others). So, for instance, the switch state
(S_{11} = 'up', S_{12} = 'down') is interpreted as (1, 0), which however represents 1*2^{0} + 0*2^{1} = 1, instead of 1*2^{1} + 0*2^{0} = 2.
I haven't changed anything about the device; merely how it's interpreted. That's sufficient: I can use the system to compute f'(x_{1}, x_{2}) just as well as you can use it to compute f(x_{1}, x_{2}).
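The reinterpretation can be checked mechanically. The following Python sketch (again a hypothetical model of the device: the same fixed switch-to-light mapping as in the addition example, with only the bit order of the reading reversed) recovers the second table without touching the device itself:

```python
from itertools import product

# The same hypothetical device: its physics never changes.
def device(s11, s12, s21, s22):
    total = (s11 * 2 + s12) + (s21 * 2 + s22)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)  # (L1, L2, L3)

# An interpretation maps physical states to numbers; the only choice here
# is which end of a row carries the most significant bit.
def interpret(bits, msb_first=True):
    if not msb_first:
        bits = tuple(reversed(tuple(bits)))
    v = 0
    for b in bits:
        v = v * 2 + b
    return v

# Reading rightmost-bit-as-highest turns the very same device into f':
def f_prime(x1, x2):
    for s in product([0, 1], repeat=4):
        if (interpret(s[:2], msb_first=False) == x1
                and interpret(s[2:], msb_first=False) == x2):
            return interpret(device(*s), msb_first=False)

print(f_prime(2, 3))  # 1, matching the second table -- not 2 + 3 = 5
```

Note that nothing about `device` changed between the two sketches; only the `interpret` step did, which is exactly the point.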
This is a completely general conclusion: I can introduce changes of interpretation for any computational system you claim computes some function f to use it to compute a different f' in just the same manner.
Consequently, what a system computes isn't inherent to the system, but is only fixed upon interpreting ittaking certain of its physical states to have symbolic value, and fixing what the symbols mean.
Thus, if mind is due to computation, brains would have to be interpreted in the right way to produce minds. Which mind a brain implements, and whether it implements a mind at all, is then not decided by the properties of the brain alone; it would be a relational property, not an intrinsic one.
That's a bullet I can see somebody bite, but it gets worse from there: for if how we fix the computation of the device D is itself computational, say, realized by some computation M, then our brains would have to be interpreted in the right way to compute M. But then, we are already knee deep in a vicious regress that never bottoms out.
Consequently, the only coherent way to fix computations is via non-computational means. (I should point out that I don't mean hypercomputation or the like here: the same argument applies in that case, too.) But then, the computational theory is right out of the window.
