Old 05-23-2019, 02:01 PM
wolfpup
Originally Posted by Half Man Half Wit View Post
The computational theory of mind is the statement that computation is all the brain does, and, in particular, that consciousness is computational. This, I indeed have shown to be in error.
This is flat-out wrong. Fodor held that CTM is an indispensable theory explaining many aspects of cognition, but he never imagined that anyone would take it to be a complete description of everything the mind does. Your characterization of the computational theory of mind fundamentally misrepresents how CTM has been defined and applied in cognitive science. Moreover, CTM doesn't even attempt to address consciousness, regarding it as an ill-defined problem. I've offered my own speculations about consciousness, and you're free to disagree with those, but when you make arguments that mischaracterize what CTM means in cognitive science, you can expect to be corrected.

Originally Posted by Half Man Half Wit View Post
That does in no way imply that no process that goes on in the brain is computational. I've been careful (to no avail, it seems) to point out that my argument threatens solely the interpretational abilities of minds: they can't be computational. Using these interpretational powers, it becomes possible to assign definite computations to systems---after all, I use the device from my example to compute, say, sums.

Furthermore, even systems that aren't computational themselves may be amenable to computational modeling---just as well as systems that aren't made of springs and gears may be modeled by systems that are, like an orrery, but I suspect where these words are, you just see a blank space.

I hold consistently to the same position I did from the beginning: computational modeling of the brain is useful and tells us much about it, but the mind is not itself computational. I have been very clear about this. Take my very first post in this thread:

There, I clearly state that whatever realizes the mind's interpretational capacity can't be computational, and thus, minds can't be computational on the whole. That doesn't entail that nothing about minds can be computational. That would be silly: I have just now computed that 1 + 4 equals 5, for instance.

Also, I have been clear that my arguments don't invalidate the utility of computational modeling:
(emphasis mine)
I see a "blank space" where you provide your ruminations about CTM being somehow related to "computational modeling", because they're so egregiously wrong. Please note the following commentary from the Stanford Encyclopedia of Philosophy. It refers to the family of views I'm talking about here as the classical CTM, or CCTM, to distinguish it from things like connectionist accounts. CCTM is precisely what Putnam initially proposed and what Fodor then developed into a mainstream theory at the forefront of cognitive science (bolding mine):
According to CCTM, the mind is a computational system similar in important respects to a Turing machine ... CCTM is not intended metaphorically. CCTM does not simply hold that the mind is like a computing system. CCTM holds that the mind literally is a computing system.
It then goes on to describe Fodor's particular variant of CCTM:
Fodor (1975, 1981, 1987, 1990, 1994, 2008) advocates a version of CCTM that accommodates systematicity and productivity much more satisfactorily [than Putnam's original formulation]. He shifts attention to the symbols manipulated during Turing-style computation.
This is of course exactly correct. The prevalent view of CTM, first advanced by Fodor and then adopted into the mainstream, is that many cognitive processes consist of syntactic operations on symbols in just the manner of a Turing machine or a digital computer; Fodor further proposed that these symbolic operations take place in a "language of thought", sometimes called "mentalese". The proposition is that there is a literal correspondence with the operation of a computer program, and it bears no relationship to your suggestions about "modeling" or doing arithmetic in your head.

Originally Posted by Half Man Half Wit View Post
Then why take issue with my claim of demonstrating a non-computational ability of the mind?
Because it's wrong, for the reason cited above.

Originally Posted by Half Man Half Wit View Post
As such, the question is underdetermined: there are infinitely many computations that take 0110011 to 0100010. This isn't a computation, it's rather an execution trace.

But of course, I know what you mean to argue. So let's specify a computation in full: say, the Turing machine has an input set consisting of all seven-bit strings, and, to provide an output, traverses them right to left, replacing each block '11' it encounters with '10'. Thus, it produces '0100010' from '0110011', or '1000000' from '1111111', or '0001000' from '0001100'.

This is indeed a fully formally specified, completely determinate computation. You'll note it's of exactly the same kind of thing as my functions f and f'. So why does a Turing machine execute a definite computation?

Simple: a Turing machine is a formally specified, abstract object; its vehicles are themselves abstract objects, like '1' and '0' (the binary digits themselves, rather than the numerals).

But that's no longer true for a physical system. A physical system doesn't manipulate '1' and '0', it manipulates physical properties (say, voltage levels) that we take to stand for or represent '1' or '0'. It's here that the ambiguity comes in.
I appreciate the effort you made to once again detail your argument, but the view that there is some fundamental difference between an abstract Turing machine and a physical one, on the grounds that the former manipulates abstract symbols while the latter manipulates physical representations, is incoherent. They are exactly the same. The Turing machine defines precisely what computation is, independently of what the symbols might actually mean, provided only that there is some consistent interpretation (any consistent interpretation!) of the semantics.
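And notice that your rule is perfectly definite once stated. Here is a minimal Python sketch (the function name rewrite is my own) of one reading of your right-to-left '11' → '10' rule that reproduces all three of your quoted input/output pairs; on this reading, a bit zeroed by one replacement participates in the next pair check, which is what gets '1000000' out of '1111111'.

```python
def rewrite(bits: str) -> str:
    """Apply the right-to-left '11' -> '10' rewrite to a bit string."""
    b = list(bits)
    # Walk from the rightmost bit toward the left; whenever a bit and its
    # left neighbor are both '1', the pair '11' becomes '10' (the right
    # bit of the pair is zeroed), and the result feeds later pair checks.
    for i in range(len(b) - 1, 0, -1):
        if b[i - 1] == '1' and b[i] == '1':
            b[i] = '0'
    return ''.join(b)

print(rewrite('0110011'))  # 0100010
print(rewrite('1111111'))  # 1000000
print(rewrite('0001100'))  # 0001000
```

The point, of course, is that any physical device whose voltage levels track these bits under a consistent interpretation is carrying out this very computation.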

Let me reiterate one of my previous comments. Our disagreement seems to arise from your conflation of "computation" with "algorithm". What a "computation" is, in the most fundamental sense, is quite a different question from what problem the computation solves. Your obsession with the difference between your f and f' functions is, at its core, not a computational issue but a class-of-problem issue.