Downloading Your Consciousness Just Before Death.

Ok, but the symbol-manipulating Turing machine that implements an ANN (that does something) doesn’t seem any different from the Turing machine that implements a car simulation using non-ANN techniques. In both cases the same symbols are being used (e.g. 1s and 0s), and they have no intrinsic meaning (as this thread has shown).

It seems like the phrase “symbolic computation” implies a higher level of abstraction, one in which the symbols are more directly mapped to real-world entities?

That’s an outrageous attempt to create hugely inaccurate juxtapositions of the things I’ve been saying, using intentionally sloppy language to try to create the impression of inconsistency (what exactly does “deal in semantics” mean?). It’s astonishing that you put that much effort into this pointless exercise.

The issues are complex and plain language is sometimes subject to ambiguities, especially when writing quickly, but it’s hard to believe that there could be genuine misinterpretation to quite this extent. OTOH, you show a good deal of inconsistency and backtracking yourself, going fairly rapidly from characterizing CTM as “wrong” to “useful”, while incorrectly characterizing it as merely a useful model, which is exactly NOT how it’s generally regarded.

But on your specific points:

[1] is a straightforward statement of what Turing-equivalent computation is.

[2] is a statement about linguistic semantics (I notice that the distinction you made earlier, about the word “semantics” having different formal meanings in computer science than in general speech, has been conveniently forgotten). It is the kind of observation frequently and correctly made about AI: notwithstanding the fact that it works with apparently meaningless symbols, it nonetheless sometimes appears to express meaningful understanding of its problem domain. How this happens, or whether it truly happens at all, is an ongoing philosophical debate, precisely the kind that Searle’s Chinese Room argument was supposed to answer in the negative (but fails to do). It’s certainly not a matter that can be dismissed out of hand, or philosophers like Searle and Dreyfus would not have been going on about it for most of their careers. Dismissal out of hand appears to be your game, not mine.

[3] is just a restatement of [2].

[4] states that CTM is not just a useful model or a metaphor as you wrongly implied, but intended to be a literal description of cognition as a computational paradigm, or as Fodor put it, syntactic operations on mental representations. But as Fodor repeatedly said, and as I’ve said throughout, this doesn’t mean that everything about the mind is necessarily computational, or at least that everything about the mind can be described by CTM, but only that many important cognitive processes can be so described. And even there, Fodor doesn’t believe that his version of CTM as presently formulated is anywhere near a complete description.

[5] is perfectly consistent with what I just said in [4].

[6] and [7] are your feigning “confusion” over a matter I just finished explaining: that Chalmers’ statement of computational sufficiency (a general principle) is not the same as CTM (a family of specific theories, acknowledged to be far from a complete description of the mind).

As for “the topic of the thread”, much of the discussion had segued into an argument about the nature of cognition, as I already pointed out but you chose to ignore, and that’s a more specific argument and at least one that can be had based on tangible research. Whereas no one knows anything about consciousness. So most of my arguments, like yours, have been about the subjects of computation and cognition, not consciousness as such.

The one thing I will acknowledge here, in all frankness, is that I’m a bit more doubtful in retrospect that there would be quite as much universal agreement with Chalmers’ “computational sufficiency” principle as I had implied. I do believe he’s overstating the case by including “mind” in that statement, and I think a stronger argument (and one that would be much more widely supported) would be one where he had merely said, as I noted before, that “the right kind of computational structure suffices for the possession of a wide variety of [cognitive] mental properties”.

I put in the same effort independently but he posted first.

If two different people are independently seeing that much contradiction, is it possible that you may have a portion of responsibility in the miscommunication?

And you keep calling things “pointless” and “not relevant” when people try to clarify your position. Isn’t asking a person to clarify their position the preferred approach, versus continuing to post under incorrect assumptions?

wolfpup, here’s another area where it’s not clear what your position is due to posts that seem to imply different positions about “computation”:

From a previous thread where you challenged me on my usage of the word “computation”:

From this thread:

and

Bolding and size in above quotes added by me.

In the previous thread, a calculator does not “compute” but in this thread, HMHW’s box does compute.
If we can just get clarification about what is a computation and what isn’t from your perspective, then we can proceed based on that clarification.

I would think it would be obvious that the notion of “computation” has both formal and informal definitions, and any alleged contradiction just arises from this terminological ambiguity. A typical calculator doesn’t compute in the formal Turing-equivalent sense (and neither does HMHW’s box) because the formal sense of computation involves the concept of a stored program executing a series of stepwise syntactical operations on symbols, which Turing formalized as symbols on a tape.

Perhaps you can ask HMHW why he alleges that his box “computes” both the functions f and f′ (and many others, according to the whim of the observer), since this is his example, not mine. My explanation is simply that it’s a looser use of the term so that he can illuminate his argument with a simple instance of the alleged observer-dependency of symbolic representations, and I’m fine with calling it a “computation” for that purpose. But the box is clearly neither Turing equivalent nor in any sense a stored-program computer.
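To make the observer-dependency point concrete, here’s a minimal Python sketch (my own toy stand-in for the box, not HMHW’s exact construction; the encodings are invented for illustration): one fixed piece of physics, two symbol mappings, two different functions.

```python
# Toy version of the box: fixed physical behaviour, observer-supplied meaning.

def box(a, b):
    """Fixed physics: two switch states (0/1) in, two lamp states out."""
    return (a & b, a ^ b)

# Reading 1: lamp lit = 1, first lamp is the twos place.
# Under this mapping the box computes f(a, b) = a + b.
def as_f(a, b):
    hi, lo = box(a, b)
    return 2 * hi + lo

# Reading 2: lamp lit = 0. The very same physics now computes
# a different function, f'(a, b) = 3 - (a + b).
def as_f_prime(a, b):
    hi, lo = box(a, b)
    return 2 * (1 - hi) + (1 - lo)

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "f:", as_f(a, b), "f':", as_f_prime(a, b))
```

Nothing in the box itself selects between the two readings; that choice lives entirely in the observer’s mapping of lamp states to numbers.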

But a “computation” does not need to be performed by a Turing-equivalent system; it’s just that a Turing-equivalent system is capable of computing anything that is computable.

A purpose-specific system that is not Turing equivalent can still perform the computations it was designed for.

You seem to be using a different definition than the one I see used by the academics I read. You seem to be saying that only Turing-equivalent systems perform computations. Is this correct?

I don’t know what useful objective you’re pursuing with this line of interrogation, which started off with some strange accusation that I contradicted myself about what computation means. The appropriate definition at least partly depends on the context of the conversation. When discussing the Computational Theory of Mind, the prominent theorists that I know specifically rely on Turing’s definition via his eponymous machine to define precisely what they mean, thus avoiding philosophical detours like whether a rock performs computations. If the academics that you read define computation some other way, you should try reading the ones who are concerned with CTM. Thus:
At its core, though, RTM is an attempt to combine Alan Turing’s work on computation with intentional realism (as outlined above). Broadly speaking, RTM claims that mental processes are computational processes, and that intentional states are relations to mental representations that serve as the domain of such processes. On Fodor’s version of RTM, these mental representations have both syntactic structure and a compositional semantics. Thinking thus takes place in an internal language of thought.

Turing demonstrated how to construct a purely mechanical device that could transform syntactically-individuated symbols in a way that respects the semantic relations that exist between the meanings, or contents, of the symbols. Formally valid inferences are the paradigm. For instance, modus ponens can be realized on a machine that’s sensitive only to syntactic properties of symbols. The device thus doesn’t have “access” to the symbols’ semantic properties, but can nevertheless transform the symbols in a truth-preserving way. What’s interesting about this, from Fodor’s perspective, is that mental processes also involve chains of thoughts that are truth-preserving.
https://www.iep.utm.edu/fodor/
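To make the quoted point concrete, here’s a minimal sketch of my own (toy strings, illustrative function name): a procedure sensitive only to the syntactic shape of symbol strings can nonetheless transform them in a truth-preserving way, with modus ponens as the paradigm.

```python
# Syntactic modus ponens: derive 'q' from 'p' and 'if p then q' by pure
# pattern-matching. The procedure has no access to what the strings mean.

def modus_ponens(premises):
    derived = set()
    for s in premises:
        if s.startswith("if ") and " then " in s:
            antecedent, consequent = s[3:].split(" then ", 1)
            if antecedent in premises:   # match by shape, not by meaning
                derived.add(consequent)
    return derived

print(modus_ponens({"it rains", "if it rains then the street is wet"}))
# -> {'the street is wet'}
```

The device never “knows” anything about rain or streets, yet if the premises are true, whatever it derives is true.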

Having a common understanding of terms and people’s positions seems like a pretty useful objective to support a good conversation.

But Turing never stated that only a Turing-equivalent machine performs computations. He did establish that a Turing-complete machine could compute any computable function, but that says nothing about whether lesser machines perform computations or not.

So where are you seeing anyone claim that more limited machines like calculators can’t be said to compute the functions they were designed to compute?

You seem to be claiming:
1 - The function of addition performed on a calculator is not considered a computation
2 - The function of addition performed on one of today’s personal computers is considered a computation

No. Both can be considered “computations” in the trivial sense in which “computation” is just synonymous with “calculation”. They can also be regarded as computations in the equally trivial sense that both can be interpreted as operations on symbols. Turing’s insights defined a much more formal notion of computation in terms of a Logical Computing Machine (LCM – which became known as the Turing Machine) and its practical incarnation, the PCM, aka the Automatic Digital Computing Machine. The difference from a calculator is not in any one particular calculation, but in the fact that Turing’s model describes a stored-program digital computer which executes a series of stored instructions and undergoes state transitions in response to the syntax of stored symbols. It became the iconic definition of what a stored-program digital computer is as opposed to a calculator.
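As a rough toy sketch of that distinction (my own illustration, not Turing’s formal construction): a fixed-function device computes one wired-in operation, while a stored-program machine reads its operations as data and undergoes state transitions accordingly.

```python
# Fixed-function: the operation is wired in; only the operands vary.
def calculator_add(a, b):
    return a + b

# Stored-program: the operations themselves are stored symbols, executed
# stepwise with state transitions (a program counter and memory).
def stored_program_machine(program, memory):
    pc = 0  # program counter: the machine's control state
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":                      # memory[dst] += memory[src]
            dst, src = args
            memory[dst] += memory[src]
        elif op == "JNZ":                    # jump if register is non-zero
            reg, target = args
            if memory[reg] != 0:
                pc = target
                continue
        pc += 1
    return memory

# Multiply 3 * 4 by repeated addition: the behaviour comes from the stored
# program, not from any wired-in multiply circuit.
mem = {"acc": 0, "x": 3, "n": 4, "neg1": -1}
prog = [("ADD", "acc", "x"), ("ADD", "n", "neg1"), ("JNZ", "n", 0)]
print(stored_program_machine(prog, mem)["acc"])  # -> 12
```

Change the program and the same machine does something entirely different; change nothing about `calculator_add` and it only ever adds.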

While the machine was originally proposed to advance his theory of computable numbers, Turing later concluded that it could make (non-numerical) logical inferences and ultimately exhibit intelligent behavior far beyond merely doing calculations. The explicit reference to the Turing machine in descriptions of CTM is to make clear that this is what is meant by the “computational” part of Computational Theory of Mind. The description I quoted above in #547 giving the basic outline of Fodor’s Representational Theory would not be possible without the explicit understanding that, in this context, this is what we mean by “computational”. I don’t know how I can possibly be any more clear than that.

But if instantiating the relevant mental properties on a Turing machine always required the simulation of an ANN, then one might rightly hold that these properties are instantiated by virtue of the ANN’s structure, rather than the TM’s symbol-manipulation, it seems to me.

I made that effort to impress upon you that, while I have no doubt it seemed to you like you were proposing a consistent story, your actual posts have made it hard to discern what that story is, and thus, answer appropriately. For instance, there may be a story where it’s reasonable to both claim that ‘the brain literally is a computer’ and that ‘the brain isn’t wholly computational’, but on the face of it, these are contradictory statements. Hence, my hope to get you to actually provide the story by highlighting what seemed contradictory to me.

Which seems OK for you, but you immediately balk at my usage of plain language (‘deal in semantics’).

My argument, from the start, has only been that there are some aspects of the brain—notably, its interpretational capacity—that can’t be computational. From my very first post in this thread (relevant parts highlighted):

This simply and rather explicitly argues that minds can’t be completely computational, because they possess a capacity that can’t be realized computationally. As soon as I noticed that you believed I was arguing for a rejection of computation-based cognitive science tout court, I tried to clarify—but to no avail, it seems.

You told me yourself it’s not relevant. Besides, earlier on, you agreed with me defining semantics in terms of the meanings of symbols:

But that’s the same sort of semantics my box needs to have in order to compute any distinct functions—symbols (lamp or switch-states) mapped to their meaning (numbers).

But then, what’s the relevance of appealing to Watson at all?

Right, you dismiss by calling people ‘nitwits’, instead.

So, explain it! How does ‘the brain is literally a computer’ not mean that everything about it is computational? Because if by ‘the brain is literally a computer’ you just mean ‘a part of the brain is literally a computer’, then that’s perfectly consistent with my argument, with the part that’s not a computer supplying the interpretation of symbols. Otherwise, what is it that makes it a computer if it’s not wholly computational?

And yet, you explicitly include Fodor in the list of people that agree with Chalmers regarding computational sufficiency:

And I hope you’ll at least admit, in light of this, that yes, some people have claimed that computation is sufficient for mind, and it’s that claim that my arguments are directed against.

My argument, from the beginning, has been that there’s at least one aspect of the mind that’s not realized by computation. Nothing else. If you actually agree with that, I wonder why you ever decided to challenge it, and continued to do so even after my repeated attempts to point out that no, this doesn’t overturn all of cognitive science.

A computer need not be a stored-program device to compute. Turing’s formulation picks out a range of functions that can be realized by mechanical computation; anything that implements any of these functions (which can be characterized without reference to Turing machines—for instance, via the Lambda calculus, or Gödel’s recursive functions, and so on) is a ‘computer’ properly so called. Thus, the calculator is just as much a computer as any Turing machine (just not a universal one).
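A minimal illustration (a toy example of my own): addition written in the style of Gödel’s recursive functions involves no tape, no stored program, and no universality anywhere, yet it picks out a computable function as well as any Turing machine does.

```python
# Addition via primitive recursion, from the equations
#   add(x, 0)   = x
#   add(x, y+1) = succ(add(x, y))

def succ(x):
    return x + 1

def add(x, y):
    return x if y == 0 else succ(add(x, y - 1))

print(add(3, 4))  # -> 7: a computable function, characterized without tapes
```

On this view, a device that implements `add` (a calculator, say) computes that function, full stop; it simply isn’t a universal machine.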

In fact, it’s usual to disavow the necessity of programmability in the CTM:

Ok, I think I’m understanding you.

You’re saying that any of the portions of the brain that have purpose-built circuits are not computational; only the portions of the brain that are general-purpose and running a stored program are computational.

And, because you agree with Fodor, the global-reasoning portion that seems to require the power of Turing-style computation is also not computational, because of Fodor’s objections (which is really unfortunate, because that is one area that really could benefit from computation vs a hard-wired circuit).

So, if Fodor’s modules have circuits built specifically for their required functions, then there is no computation happening in the brain.

Because you do believe some part of the brain is computational, you must think that at least some of Fodor’s modules are not circuits built specifically for their function but rather a general-purpose Turing-style computer executing a stored program to achieve that module’s functional purpose.

Which leads to some questions:
1 - Which modules do you think use Turing-style stored-program computation?
2 - Is there any empirical evidence that they are Turing-style stored-program computations vs circuits built specifically for that purpose?
3 - Why would nature create a general-purpose stored-program Turing-style computer and only use it for some modules? It takes more energy, so that’s not efficient. It’s more powerful than a purpose-built circuit, so it seems it could have been used in other places like the global reasoning center; it seems like a waste that it’s stuck out in modules #47 and #225 only.

I’m getting pretty tired (literally) of all the back and forth that has long since digressed from the original topic, but since you seem to feel that this one issue somehow encapsulates some important aspect of my alleged inconsistency, I’ll try to explain it. None of this is new or particularly revelatory; my purpose is to show that there’s no actual inconsistency in my claims.

The statement that “the brain is literally a computer” wasn’t actually mine, though it was indeed part of a quote I cited from the SEP. Those aren’t exactly the words I would have used myself, precisely because they invite the kind of misinterpretation you’re making. The brain is certainly nothing like a digital computer; its spike trains and neural firing thresholds are not only entirely different physical mechanisms from logic gates, but they combine elements of both analog and digital behaviors. What can be said, however, is that they are sufficient to implement, for certain mental processes, a paradigm of syntactical operations on mental representations that exactly parallels the syntactical operations on symbols performed by a digital computer. It can therefore be said – as I have repeatedly said – that CTM is meant to be literally a description of those mental processes, and that the rules, or algorithms, that direct those operations in a digital computer are paralleled in the mind by what Fodor has called “the language of thought”.

It should be clear from this that it leaves open many aspects of the mind that are not amenable to computational explanations, and certainly not explainable by any current formulations of CTM. We know that many aspects of behavior are innate (hence not the product of computational processes), that sensory inputs and emotional states create biochemically induced changes in brain function, and so on. None of these things have any parallel in general-purpose digital computers, but none of them preclude CTM from being a very important explanation of how the cognitive mind works.

Many believe that our ability to acquire language is innate. Does this mean that Fodor’s language of thought is executed by a non-computational process?
Let’s look at your innate = non-computational statement. It has been proven by researchers that a specific recurrent neural network of about 1,000 neurons is Turing complete.

Your claim leads us to the following:
1 - A Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to learning or some other dynamic process is a computer that performs computational processes.

2 - But a Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to DNA and initial brain development is not a computer that performs computational processes (see the sketch below).
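To put the point of the two examples in concrete form, here’s a toy Python sketch (a hypothetical two-unit network of my own, nothing like an actual Turing-complete RNN): what a network computes is determined entirely by its weights and update rule, not by the history that produced them.

```python
# Provenance is invisible to the computation: identical weights give
# identical behaviour whether they were "wired in" or "learned".

def step(state, inp, W):
    """One recurrent update: each next-state value is a weighted sum of
    the current state plus the input."""
    return [sum(w * v for w, v in zip(row, state + [inp])) for row in W]

W_innate  = [[0.5, -1.0, 1.0], [1.0, 0.25, 0.0]]  # fixed from the start ("DNA")
W_learned = [[0.5, -1.0, 1.0], [1.0, 0.25, 0.0]]  # imagine training arrived at the same values

state = [0.0, 0.0]
assert step(state, 1.0, W_innate) == step(state, 1.0, W_learned)
# Same weights -> same computation; innate vs learned is a fact about
# how the weights got there, not about what the system computes.
```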

For those reading this thread, these are my predictions about responses from wolfpup:
1 - No response
2 - A vague statement about how I clearly don’t understand CTM
3 - A statement about how he was talking about only those innate things that aren’t computational, not the ones that are computational
4 - A statement about how discussions of “innate” are a ridiculous tangent that have nothing to do with this thread
Whereas the only logical response is as follows:
“You’re right, whether a brain circuit is innate or learned does not tell us whether it’s Turing complete or even a computational system.”

This is exactly the sort of equivocation that makes conversation with you difficult and eventually impossible.

Our ability to acquire language – which incidentally applies specifically to spoken natural language and not written language – is indeed probably innate, but that has absolutely nothing to do with Fodor’s “language of thought” premise. Like, zero, nothing, not even close, nada. In exactly the same sense that you might try to claim that spoken natural language is exactly like a programming language. Really? We are all born with an innate understanding of FORTRAN? :rolleyes:

I feel like I’m wasting my time with this one last sentence, but in the feeble hope that there’s actually some sincere interest here to understand, the “language of thought” is analogous to the lexical and grammatical formalisms of a computer programming language that operates within the mind, which is a completely different universe from our evolved natural language skills. Really, the only thing in common is an obtuse attempt to link the two because they both have the word “language” in their descriptors.

Ok, let’s assume you are correct that our language mechanism is unrelated to the mechanisms used in LOT.

You still ignored the key point of my post and even cut it out of your response.
Let’s try again:

Any response to this counter to your point that innate=non-computational?

Do you truly believe that an innate Turing-complete system is non-computational because it’s innate?

Doesn’t that seem like it doesn’t make sense?

OK, let’s try again, by all means, since I have a bunch of free time right now. Do you really want to go with the idea that this was “the key point” of your post? Terrific! Let’s go with that.

If this is another of your attempted gotchas, your “key point” manages to be wrong on not just one, but on two different levels.

First, conclusion #2 does NOT follow from statement #1. Just because some neural nets can be Turing complete (ignoring the capacity limitations of any finite physical system) does NOT imply that any neural net will be Turing complete. That’s just a logical fallacy.

Second, my argument about CTM has nothing to do with whether innate processes are computational or not. The CTM argument is that the central processes underlying cognition are at their core computational. The process of thinking, for instance, is proposed to be the same kind of symbol manipulation that a digital computer does. That other processes in the brain may not be computational is an acknowledgement of the limitations of CTM, not a requirement for it to be true! So your “key point” really makes no sense.

My objection was merely to your point that an innate system is by definition non-computational.

If I feel you’ve stated something that isn’t correct or logical, then I’m going to push back on that. The only way to have clear communication is working through these issues and coming to a common understanding of where we have real disagreement and where it was just sloppy or misinterpreted communication.

For example, I made an assumption that connected language mechanisms with LOT mechanisms and you rightfully pushed back on that because you’re correct, they aren’t necessarily the same mechanism, they could be different.

I don’t consider that a “gotcha” - I consider that a valid part of the process for all parties to have clear thought and communication.

1 - This response doesn’t relate to my question.
2 - Example #2 is an example, not a conclusion.
3 - My example #1 and example #2 are both Turing-complete recurrent neural networks - there are no non-Turing-complete neural networks in my examples - so those are not relevant to my question to you about innateness.
4 - My two examples are just two specific examples of Turing-complete neural networks, one innate and one not innate - nowhere did I make any claim about all neural networks being Turing complete - I have no idea how you can read those two examples and come to that conclusion - it makes no sense - is it possible you read it quickly and misunderstood what I was asking?
Let’s try again:
You stated that systems in the brain that are innate are by definition not computational.
I provided two examples of Turing-complete recurrent neural networks:
Example #1 was an innate Turing-complete recurrent neural network.
Example #2 was a non-innate Turing-complete recurrent neural network.

Your claim is that:
Example #1 (Turing-complete RNN) does NOT perform computations.
Example #2 (Turing-complete RNN) DOES perform computations.

This makes no sense: both are Turing-complete RNNs, so how can one perform computations and the other not?

Two points:
1 - Even if it’s completely unrelated to CTM, we should be clear and accurate about our positions and the statement you made was incorrect and needed to be corrected.

2 - It does actually have an impact on CTM, but really more from a “gotcha” perspective. If we allow that incorrect assumption to stand, then anything you say about CTM can be discarded as non-computational by invoking “ok, but that CTM mechanism is innate so it’s not computational.” But that isn’t really a fair move, because it can be shown that innateness does not determine whether something is computational, and it doesn’t really get us anywhere with respect to the challenging and interesting problems related to this debate.

So it is important to clear things like that up. Getting clarity and agreement on the foundational elements allows us to get to the bigger items.

Except Fodor disagrees with this statement: he says the central processes underlying cognition are non-computational due to the global-reasoning issue.

He does think the various modules are computational.

Kind of.

This is where you seem to imply that Fodor was saying that only a Turing-complete system can perform these computations (e.g. a calculator doesn’t compute).

The reason I pushed back on that (and HMHW also did) is that a Turing-complete machine just sets the boundaries around the set of functions that can be computed by that type of machine. It doesn’t mean that lesser machines don’t perform computations also; it just means that they can’t compute the full set of computable functions.

I don’t want to put words in Fodor’s mouth, but I believe he would agree that if a module is not Turing complete but does perform the resolution of various computable functions using symbols and syntax, then he would consider that a computation.

wolfpup, any response to this?

It seems like the only logical response is “you’re right, innate does not imply non-computational, they are independent.”

This is the same thing I was asking you about previously that you never clarified.

Fodor’s views changed quite a bit; do you think his initial views are correct, or his later views (which include the above issue with global reasoning)?

wolfpup, not sure if you are coming back to this thread or not, but if you do, here’s the summary of open questions for you:

1 - Fodor did not think global reasoning was computational; you stated that you think the central processes for cognition are computational. Do you disagree with Fodor, or do you really mean the modules when you say “central”?

2 - You think that a Turing-complete RNN that is innate is non-computational because it’s innate - can you defend this position? It seems almost impossible to defend.

3 - You think that a calculator doesn’t compute, that only functions calculated on a Turing machine are considered computations - can you find anyone else in philosophy of mind or neuroscience who supports this position? I’ve searched but haven’t been able to find support for it.

I have no idea why you’re so obsessive about this or what point you’re trying to make, to the extent of repeatedly demanding answers to your challenges. And it seems that every time I make the effort to answer a question, it only raises two more, and more demands for more answers.

So I’ll address these but this is probably the last time. I suggest we all go our separate ways with our respective disagreements.

By “central” I meant “centrally important” as a theory, not in reference to any particular cognitive level. The implication that Fodor believed cognition isn’t computational is absurd, as the Representational Theory was one of the major accomplishments of his career: “Fodor developed two theories that have been particularly influential across disciplinary boundaries. He defended a “Representational Theory of Mind,” according to which thinking is a computational process defined over mental representations that are physically realized in the brain. On Fodor’s view, these mental representations are internally structured much like sentences in a natural language, in that they have both a syntax and a compositional semantics.”

I have never seen Fodor claim that “global reasoning” is non-computational, though he might have at some point – he was certainly open about the fact that much of the mind is non-computational or at least unexplained by current CTM theories, including much of cognitive psychology. I know the book you cite makes that claim, but it wouldn’t be the first time that lesser philosophers have misinterpreted Fodor, and the author is frank about the fact that Fodor would disagree with much of what she has to say. What Fodor specifically postulated was the modularity of mental processes, the modularity essentially characterized by information encapsulation, also referred to as cognitive impenetrability – the inability of one module’s information to penetrate into another. One of the oft-cited examples of this is the persistence of optical illusions in the visual processing module even when we know (in a different cognitive module) that they are illusions. What Fodor explicitly disclaimed was that higher-level cognition was also modular. Whether he ever also claimed that it wasn’t computational I have no idea.

I don’t have to defend it because I never said it, and it wouldn’t have mattered even if I did. What I said was that “we know that many aspects of behavior are innate (hence not the product of computational processes)”. That is, unlike learning, where according to CTM we acquire knowledge through putative computational processes, or thinking, where we reach conclusions through similar computational processes, innate behaviors aren’t produced by computational processes, but are hardwired. Whether they are themselves computational is irrelevant to my point – they may or may not be; my point was that CTM is incomplete and doesn’t explain everything about the mind.

I’ve already said (pretty clearly) that there are many different definitions of computation, and the appropriate one depends on the context of the discussion. The relevance of the Turing model here is that it’s specifically cited as the relevant meaning of “computation” in CTM, and if you haven’t seen that in the literature, look harder. Just look at section 4 of the above-cited article on Fodor, for example. Or the very first part of the article on CTM in the SEP.