#551 | 07-08-2019, 12:43 AM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
No. Both can be considered "computations" in the trivial sense in which "computation" is just synonymous with "calculation". They can also be regarded as computations in the equally trivial sense that both can be interpreted as operations on symbols. Turing's insights defined a much more formal notion of computation in terms of a Logical Computing Machine (LCM -- which became known as the Turing Machine) and its practical incarnation, the PCM, aka the Automatic Digital Computing Machine. The difference from a calculator is not in any one particular calculation, but in the fact that Turing's model describes a stored-program digital computer which executes a series of stored instructions and undergoes state transitions in response to the syntax of stored symbols. It became the iconic definition of what a stored-program digital computer is as opposed to a calculator.

While the machine was originally proposed to advance his theory of computable numbers, Turing later concluded that it could make (non-numerical) logical inferences and ultimately exhibit intelligent behavior far beyond merely doing calculations. The explicit reference to the Turing machine in descriptions of CTM is to make clear that this is what is meant by the "computational" part of the Computational Theory of Mind. The description I quoted above in #547 giving the basic outline of Fodor's Representational Theory would not be possible without the explicit understanding that in this context this is what we mean by "computational". I don't know how I can possibly be any more clear than that.
Ok, I think I'm understanding you.

You're saying that the portions of the brain that have purpose-built circuits are not computational; only the portions of the brain that are general purpose and running a stored program are computational.

And, because you agree with Fodor, the global-reasoning portion that seems to require the power of Turing-style computation is also not computational, because of Fodor's objections (which is really unfortunate, because that is one area that could really benefit from computation rather than a hard-wired circuit).

So, if all of Fodor's modules have circuits built specifically for their required functions, then there is no computation happening in the brain.

Because you do believe some part of the brain is computational, you must think that at least some of Fodor's modules are not circuits built specifically for their function, but rather general-purpose Turing-style computers executing a stored program to achieve the module's functional purpose.

Which leads to some questions:
1 - Which modules do you think use Turing-style stored-program computation?
2 - Is there any empirical evidence that they are Turing-style stored-program computations rather than circuits built specifically for that purpose?
3 - Why would nature create a general-purpose stored-program Turing-style computer and only use it for some modules? It takes more energy, so that's not efficient. It's more powerful than a purpose-built circuit, so it seems it could have been used in other places like the global-reasoning center; it seems like a waste for it to be stuck out in modules #47 and #225 only.
#552 | 07-08-2019, 11:49 AM | wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
I made that effort to impress upon you that, while I have no doubt it seemed to you like you were proposing a consistent story, your actual posts have made it hard to discern what that story is, and thus, answer appropriately. For instance, there may be a story where it's reasonable to both claim that 'the brain literally is a computer' and that 'the brain isn't wholly computational', but on the face of it, these are contradictory statements. Hence, my hope to get you to actually provide the story by highlighting what seemed contradictory to me.
I'm getting pretty tired (literally) of all the back and forth that has long since digressed from the original topic, but since you seem to feel that this one issue somehow encapsulates some important aspect of my alleged inconsistency, I'll try to explain it. None of this is new or particularly revelatory; my purpose is to show that there's no actual inconsistency in my claims.

The statement that "the brain is literally a computer" wasn't actually mine, though it was indeed part of a quote I cited from the SEP. Those aren't exactly the words I would have used myself, precisely because they invite the kind of misinterpretation you're making. The brain is certainly nothing like a digital computer; its spike trains and neural firing thresholds are not only entirely different physical mechanisms from logic gates, but they combine elements of both analog and digital behaviors. What can be said, however, is that they are sufficient to implement, for certain mental processes, a paradigm of syntactical operations on mental representations that exactly parallels the syntactical operations on symbols performed by a digital computer. It can therefore be said -- as I have repeatedly said -- that CTM is meant to be literally a description of those mental processes, and that the rules, or algorithms, that direct those operations in a digital computer are paralleled in the mind by what Fodor has called "the language of thought".

It should be clear from this that CTM leaves open many aspects of the mind that are not amenable to computational explanations, and certainly not explainable by any current formulations of CTM. We know that many aspects of behavior are innate (hence not the product of computational processes), that sensory inputs and emotional states create biochemically induced changes in brain function, and so on. None of these things have any parallel in general-purpose digital computers, but none of them preclude CTM from being a very important explanation of how the cognitive mind works.
#553 | 07-08-2019, 02:49 PM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
We know that many aspects of behavior are innate (hence not the product of computational processes),...
Many believe that our ability to acquire language is innate. Does this mean that Fodor's language of thought is executed by a non-computational process?


Let's look at your innate = non-computational statement. Researchers have proven that a specific recurrent neural network of about 1,000 neurons is Turing complete.

Your claim leads us to the following:
1 - A Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to learning or some other dynamic process is a computer that performs computational processes.

2 - But a Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to DNA and initial brain development is not a computer that performs computational processes. (A toy sketch of this point follows below.)
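As a toy illustration of that point (nothing like the ~1,000-neuron construction from the literature; the tiny network and its weights below are made up purely for this example), two recurrent networks with identical weights compute exactly the same function whether we imagine the weights as having been learned or as wired in from birth:

Code:
import numpy as np

def step(W, state, x):
    # One recurrent update: the next state depends on the current state and
    # input through the weight matrix W, by the same rule regardless of how
    # W came to have its values.
    return np.tanh(W @ np.concatenate([state, x]))

def run(W, inputs, state_size=4):
    state = np.zeros(state_size)
    for x in inputs:
        state = step(W, state, x)
    return state

rng = np.random.default_rng(0)
W_learned = rng.normal(size=(4, 6))   # pretend these weights were learned
W_innate = W_learned.copy()           # identical weights "wired in" by development

inputs = [rng.normal(size=2) for _ in range(5)]
assert np.allclose(run(W_learned, inputs), run(W_innate, inputs))
# Same weights, same dynamics, same function: the history of how the weights
# arose plays no role in what the network computes.

The update rule never consults the history of the weights; only their current values matter.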



For those reading this thread, these are my predictions about responses from wolfpup:
1 - No response
2 - A vague statement about how I clearly don't understand CTM
3 - A statement about how he was talking about only those innate things that aren't computational, not the ones that are computational
4 - A statement about how discussions of "innate" are a ridiculous tangent that have nothing to do with this thread


Whereas the only logical response is as follows:
"You're right, whether a brain circuit is innate or learned does not tell us whether it's turing complete or even a computational system."
#554 | 07-08-2019, 03:32 PM | wolfpup
Quote:
Originally Posted by RaftPeople View Post
Many believe that our ability to acquire language is innate. Does this mean that Fodor's language of thought is executed by a non-computational process?

...

For those reading this thread, these are my predictions about responses from wolfpup:
1 - No response
2 - A vague statement about how I clearly don't understand CTM
3 - A statement about how he was talking about only those innate things that aren't computational, not the ones that are computational
4 - A statement about how discussions of "innate" are a ridiculous tangent that have nothing to do with this thread
This is exactly the sort of equivocation that makes conversation with you difficult and eventually impossible.

Our ability to acquire language -- which incidentally applies specifically to spoken natural language and not written language -- is indeed probably innate, but that has absolutely nothing to do with Fodor's "language of thought" premise. Like, zero, nothing, not even close, nada. It's exactly as if you claimed that spoken natural language is just like a programming language. Really? Are we all born with an innate understanding of FORTRAN?

I feel like I'm wasting my time with this one last sentence, but in the feeble hope that there's actually some sincere interest here to understand, the "language of thought" is analogous to the lexical and grammatical formalisms of a computer programming language that operates within the mind, which is a completely different universe from our evolved natural language skills. Really, the only thing in common is an obtuse attempt to link the two because they both have the word "language" in their descriptors.
#555 | 07-08-2019, 04:43 PM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
This is exactly the sort of equivocation that makes conversation with you difficult and eventually impossible.

Our ability to acquire language -- which incidentally applies specifically to spoken natural language and not written language -- is indeed probably innate, but that has absolutely nothing to do with Fodor's "language of thought" premise. Like, zero, nothing, not even close, nada. It's exactly as if you claimed that spoken natural language is just like a programming language. Really? Are we all born with an innate understanding of FORTRAN?

I feel like I'm wasting my time with this one last sentence, but in the feeble hope that there's actually some sincere interest here to understand, the "language of thought" is analogous to the lexical and grammatical formalisms of a computer programming language that operates within the mind, which is a completely different universe from our evolved natural language skills. Really, the only thing in common is an obtuse attempt to link the two because they both have the word "language" in their descriptors.
Ok, let's assume you are correct that our language mechanism is unrelated to the mechanisms used in LOT.

You still ignored the key point of my post and even cut it out of your response.


Let's try again:
Quote:
Originally Posted by raftpeople
Let's look at your innate = non-computational statement. Researchers have proven that a specific recurrent neural network of about 1,000 neurons is Turing complete.

Your claim leads us to the following:
1 - A Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to learning or some other dynamic process is a computer that performs computational processes.

2 - But a Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to DNA and initial brain development is not a computer that performs computational processes.

Any response to this counter to your point that innate = non-computational?

Do you truly believe that an innate Turing-complete system is non-computational because it's innate?

Doesn't that seem like it doesn't make sense?
#556 | 07-09-2019, 10:18 PM | wolfpup
Quote:
Originally Posted by RaftPeople View Post
Ok, let's assume you are correct that our language mechanism is unrelated to the mechanisms used in LOT.

You still ignored the key point of my post and even cut it out of your response.


Let's try again:



Any response to this counter to your point that innate = non-computational?

Do you truly believe that an innate Turing-complete system is non-computational because it's innate?

Doesn't that seem like it doesn't make sense?
OK, let's try again, by all means, since I have a bunch of free time right now. Do you really want to go with the idea that this was "the key point" of your post? Terrific! Let's go with that.

Quote:
Originally Posted by RaftPeople View Post
Let's look at your innate = non-computational statement. Researchers have proven that a specific recurrent neural network of about 1,000 neurons is Turing complete.

Your claim leads us to the following:
1 - A Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to learning or some other dynamic process is a computer that performs computational processes.

2 - But a Turing-complete recurrent neural network that arises (weight adjustments, connections, etc.) due to DNA and initial brain development is not a computer that performs computational processes.
If this is another of your attempted gotchas, your "key point" manages to be wrong on not just one, but on two different levels.

First, conclusion #2 does NOT follow from statement #1. Just because some neural nets can be Turing complete (ignoring the capacity limitations of any finite physical system) does NOT imply that any neural net will be Turing complete. That's just a logical fallacy.

Second, my argument about CTM has nothing to do with whether innate processes are computational or not. The CTM argument is that the central processes underlying cognition are at their core computational. The process of thinking, for instance, is proposed to be the same kind of symbol manipulation that a digital computer does. That other processes in the brain may not be computational is an acknowledgement of the limitations of CTM, not a requirement for it to be true! So your "key point" really makes no sense.
#557 | 07-10-2019, 12:10 PM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
OK, let's try again, by all means, since I have a bunch of free time right now. Do you really want to go with the idea that this was "the key point" of your post? Terrific! Let's go with that.
My objection was merely to your point that an innate system is by definition non-computational.



Quote:
If this is another of your attempted gotchas, your "key point" manages to be wrong on not just one, but on two different levels.
If I feel you've stated something that isn't correct or logical, then I'm going to push back on that. The only way to have clear communication is to work through these issues and come to a common understanding of where we have real disagreement and where it was just sloppy or misinterpreted communication.

For example, I made an assumption that connected language mechanisms with LOT mechanisms, and you rightfully pushed back on that. You're correct: they aren't necessarily the same mechanism; they could be different.

I don't consider that a "gotcha" - I consider that a valid part of the process for all parties to have clear thought and communication.


Quote:
First, conclusion #2 does NOT follow from statement #1. Just because some neural nets can be Turing complete (ignoring the capacity limitations of any finite physical system) does NOT imply that any neural net will be Turing complete. That's just a logical fallacy.
1 - This response doesn't relate to my question.
2 - Example #2 is an example, not a conclusion.
3 - My example #1 and example #2 are both Turing-complete recurrent neural networks; there are no non-Turing-complete neural networks in my examples, so those are not relevant to my question to you about innateness.
4 - My two examples are just two specific examples of Turing-complete neural networks, one innate and one not innate. Nowhere did I make any claim about all neural networks being Turing complete; I have no idea how you can read those two examples and come to that conclusion. It makes no sense. Is it possible you read it quickly and misunderstood what I was asking?


Let's try again:
You stated that systems in the brain that are innate are by definition not computational.
I provided two examples of Turing-complete recurrent neural networks:
Example #1 was an innate Turing-complete recurrent neural network
Example #2 was a non-innate Turing-complete recurrent neural network

Your claim is that:
Example #1 Turing-complete RNN does NOT perform computations
Example #2 Turing-complete RNN DOES perform computations

This makes no sense: if both are Turing-complete RNNs, how can one perform computations while the other doesn't?


Quote:
Second, my argument about CTM has nothing to do with whether innate processes are computational or not.
Two points:
1 - Even if it's completely unrelated to CTM, we should be clear and accurate about our positions, and the statement you made was incorrect and needed to be corrected.

2 - It does actually have an impact on CTM, but really more from a "gotcha" perspective. If we allow that incorrect assumption to stand, then anything you say about CTM can be discarded as non-computational by invoking "ok, but that CTM mechanism is innate, so it's not computational." But that isn't really a fair thing to say, because it can be shown that innateness does not determine whether something is computational, and it doesn't really get us anywhere with respect to the challenging and interesting problems related to this debate.

So it is important to clear things like that up. Getting clarity and agreement on the foundational elements allows us to get to the bigger items.


Quote:
The CTM argument is that the central processes underlying cognition are at their core computational.
Except Fodor disagrees with this statement: he says the central processes underlying cognition are non-computational due to the global reasoning issue.

He does think the various modules are computational.


Quote:
The process of thinking, for instance, is proposed to be the same kind of symbol manipulation that a digital computer does.
Kind of.

This is where you seem to imply that Fodor was saying that only a Turing-complete system can perform these computations (e.g. a calculator doesn't compute).

The reason I pushed back on that (and HMHW also did) is that a Turing-complete machine just sets the boundary around the set of functions that can be computed by that type of machine. It doesn't mean that lesser machines don't perform computations also; it just means that they can't compute the full set of computable functions.

I don't want to put words in Fodor's mouth, but I believe he would agree that if a module is not Turing complete but does perform the resolution of various computable functions using symbols and syntax, then he would consider that a computation.
#558 | 07-11-2019, 03:28 PM | RaftPeople
Quote:
Originally Posted by RaftPeople View Post
Let's try again:
You stated that systems in the brain that are innate are by definition not computational.
I provided two examples of Turing-complete recurrent neural networks:
Example #1 was an innate Turing-complete recurrent neural network
Example #2 was a non-innate Turing-complete recurrent neural network

Your claim is that:
Example #1 Turing-complete RNN does NOT perform computations
Example #2 Turing-complete RNN DOES perform computations

This makes no sense: if both are Turing-complete RNNs, how can one perform computations while the other doesn't?
wolfpup, any response to this?

It seems like the only logical response is "you're right, innate does not imply non-computational; they are independent."




Quote:
Except Fodor disagrees with this statement: he says the central processes underlying cognition are non-computational due to the global reasoning issue.

He does think the various modules are computational.
This is the same thing I was asking you about previously that you never clarified.

Fodor's views changed quite a bit. Do you think Fodor's initial views are correct, or his later views (which include the above issue with global reasoning)?
#559 | 07-12-2019, 01:44 PM | RaftPeople
wolfpup, not sure if you are coming back to this thread or not, but if you do, here's the summary of open questions for you:

1 - Fodor did not think global reasoning was computational; you stated that you think the central processes for cognition are computational. Do you disagree with Fodor, or do you really mean the modules when you say "central"?

2 - You think that a Turing-complete RNN that is innate is non-computational because it's innate. Can you defend this position? It seems almost impossible to defend.

3 - You think that a calculator doesn't compute, and that only functions calculated on a Turing machine are considered computations. Can you find anyone else in philosophy of mind or neuroscience who supports this position? I've searched but I haven't been able to find support for it.
#560 | 07-12-2019, 04:22 PM | wolfpup
Quote:
Originally Posted by RaftPeople View Post
wolfpup, not sure if you are coming back to this thread or not, but if you do, here's the summary of open questions for you:
I have no idea why you're so obsessive about this or what point you're trying to make, to the extent of repeatedly demanding answers to your challenges. And it seems that every time I make the effort to answer a question, it only raises two more, and more demands for more answers.

So I'll address these but this is probably the last time. I suggest we all go our separate ways with our respective disagreements.
Quote:
Originally Posted by RaftPeople View Post
1 - Fodor did not think global reasoning was computational; you stated that you think the central processes for cognition are computational. Do you disagree with Fodor, or do you really mean the modules when you say "central"?
By "central" I meant "centrally important" as a theory, not in reference to any particular cognitive level. The implication that Fodor believed cognition isn't computational is absurd, as the Representational Theory was one of the major accomplishments of his career: "Fodor developed two theories that have been particularly influential across disciplinary boundaries. He defended a "Representational Theory of Mind," according to which thinking is a computational process defined over mental representations that are physically realized in the brain. On Fodorís view, these mental representations are internally structured much like sentences in a natural language, in that they have both a syntax and a compositional semantics."

I have never seen Fodor claim that "global reasoning" is non-computational, though he might have at some point -- he was certainly open about the fact that much of the mind is non-computational or at least unexplained by current CTM theories, including much of cognitive psychology. I know the book you cite makes that claim, but it wouldn't be the first time that lesser philosophers have misinterpreted Fodor, and the author is frank about the fact that Fodor would disagree with much of what she has to say. What Fodor specifically postulated was the modularity of mental processes, the modularity essentially characterized by information encapsulation, also referred to as cognitive impenetrability -- the inability of one module's information to penetrate into another. One of the oft-cited examples of this is the persistence of optical illusions in the visual processing module even when we know (in a different cognitive module) that they are illusions. What Fodor explicitly disclaimed was that higher-level cognition was also modular. Whether he ever also claimed that it wasn't computational I have no idea.
Quote:
Originally Posted by RaftPeople View Post
2 - You think that a Turing-complete RNN that is innate is non-computational because it's innate. Can you defend this position? It seems almost impossible to defend.
I don't have to defend it because I never said it, and it wouldn't have mattered even if I did. What I said was that "we know that many aspects of behavior are innate (hence not the product of computational processes)". That is, unlike learning, where according to CTM we acquire knowledge through putative computational processes, or thinking, where we reach conclusions through similar computational processes, innate behaviors aren't produced by computational processes, but are hardwired. Whether they are themselves computational is irrelevant to my point -- they may or may not be; my point was that CTM is incomplete and doesn't explain everything about the mind.
Quote:
Originally Posted by RaftPeople View Post
3 - You think that a calculator doesn't compute, and that only functions calculated on a Turing machine are considered computations. Can you find anyone else in philosophy of mind or neuroscience who supports this position? I've searched but I haven't been able to find support for it.
I've already said (pretty clearly) that there are many different definitions of computation, and the appropriate one depends on the context of the discussion. The relevance of the Turing model here is that it's specifically cited as the relevant meaning of "computation" in CTM, and if you haven't seen that in the literature, look harder. Just look at section 4 of the above-cited article on Fodor, for example. Or the very first part of the article on CTM in the SEP.
#561 | 07-14-2019, 02:26 PM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
I have no idea why you're so obsessive about this or what point you're trying to make, to the extent of repeatedly demanding answers to your challenges. And it seems that every time I make the effort to answer a question, it only raises two more, and more demands for more answers.
I ask repeatedly because you frequently don't answer the questions that represent significant challenges to what you have posted.



Quote:
What Fodor explicitly disclaimed was that higher-level cognition was also modular. Whether he ever also claimed that it wasn't computational I have no idea.
Ok, great progress: you were not aware of Fodor's position on abductive reasoning.

You seem to like Fodor, so I would recommend reading his book "The Mind Doesn't Work That Way." He walks through this problem and others, arguing that Turing-style computing systems can't do it and that connectionism doesn't solve the problem either.

Basically, his opinion is that this global reasoning that humans do is unexplainable by mechanical systems and is as mysterious as consciousness.

From my perspective, the argument is interesting because this type of reasoning is exactly the type of reasoning that I think most of us are picturing when we think of making an AI system. So Fodor thinks the highest-value portion of cognition is non-computational.



Quote:
That is, unlike learning, where according to CTM we acquire knowledge through putative computational processes, or thinking, where we reach conclusions through similar computational processes, innate behaviors aren't produced by computational processes, but are hardwired. Whether they are themselves computational is irrelevant to my point -- they may or may not be; my point was that CTM is incomplete and doesn't explain everything about the mind.
Thanks, that is a helpful answer; it provides enough detail to understand what you were trying to say.

I think it introduces a very inconsistent view of computation to state that two identical computing systems are not identical due to the way the systems were instantiated.

Maybe there are no examples of this in our brain, but even if there aren't, it is mathematically possible, so it seems odd to hold that position.


Quote:
I've already said (pretty clearly) that there are many different definitions of computation, and the appropriate one depends on the context of the discussion. The relevance of the Turing model here is that it's specifically cited as the relevant meaning of "computation" in CTM,...
A calculator IS a Turing machine, you know that right? It's not a UNIVERSAL Turing machine, but it's a Turing machine that computes the functions it's designed to compute by performing syntactic operations on symbols.

Maybe you mean to say that only UTMs compute but that TMs do not compute?

I don't see Fodor or any other CTM supporter claiming that a TM does not compute but a UTM does compute.
#562 | 07-14-2019, 04:17 PM | wolfpup
Quote:
Originally Posted by RaftPeople View Post
A calculator IS a Turing machine, you know that right?
A rock is a Turing machine, too -- you know that, right?

This is nonsensical muddled thinking, like most of the rest of your reply, but it's the most flagrant and easiest to explain, so at this point, in the interest of no longer wasting my time, it's the only one I'm going to bother with. The difference between a Turing machine (TM) and a universal Turing machine (UTM) is that a TM is analogous to an arbitrary computer running a fixed program, while a UTM is analogous to a RAM-based computer that can run any such program. Obviously a TM can be built to simulate any trivial calculation, such as in a calculator, or none at all, such as a rock, which can be thought of in your terms as a TM with only one state that never changes.

Neither a calculator nor a rock fulfills the central idea of the TM, that of executing instructions that operate on symbols in a sequence of discrete steps that are characterized by state transitions. This is where the power of the UTM comes from, and by the same token, the power of the computational paradigm and the digital computer. The UTM is what defines the concept of Turing completeness. So of course in using the Turing machine paradigm in CTM we are alluding to this central idea, that mental processes execute arbitrary computational processes in the manner of a digital computer.
#563 | 07-15-2019, 02:03 AM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
A rock is a Turing machine, too -- you know that, right?

This is nonsensical muddled thinking, like most of the rest of your reply, but it's the most flagrant and easiest to explain, so at this point, in the interest of no longer wasting my time, it's the only one I'm going to bother with. The difference between a Turing machine (TM) and a universal Turing machine (UTM) is that a TM is analogous to an arbitrary computer running a fixed program, while a UTM is analogous to a RAM-based computer that can run any such program. Obviously a TM can be built to simulate any trivial calculation, such as in a calculator, or none at all, such as a rock, which can be thought of in your terms as a TM with only one state that never changes.

Neither a calculator nor a rock fulfills the central idea of the TM, that of executing instructions that operate on symbols in a sequence of discrete steps that are characterized by state transitions. This is where the power of the UTM comes from, and by the same token, the power of the computational paradigm and the digital computer. The UTM is what defines the concept of Turing completeness. So of course in using the Turing machine paradigm in CTM we are alluding to this central idea, that mental processes execute arbitrary computational processes in the manner of a digital computer.
It's difficult to have a debate about computationalism with a person who thinks Alan Turing was wrong about computers, computation and Turing Machines.

The question I've been wondering throughout this thread is whether you have self-awareness. Do you ever read up on topics like Turing machines and reflect back on your positions in these posts and identify the inconsistencies?

Do you ever think to yourself "hmmm, maybe I don't actually understand what I think I do"?
#564 | 07-15-2019, 02:25 AM | wolfpup
Quote:
Originally Posted by RaftPeople View Post
Do you ever think to yourself "hmmm, maybe I don't actually understand what I think I do"?
... says the guy who doesn't seem to understand the difference between a calculator and a stored-program computer. I'm done here.
#565 | 07-15-2019, 11:01 AM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
... says the guy who doesn't seem to understand the difference between a calculator and a stored-program computer. I'm done here.
Let's review the evidence.

My post:
Quote:
A calculator IS a Turing machine, you know that right? It's not a UNIVERSAL Turing machine, but it's a Turing machine that computes the functions it's designed to compute by performing syntactic operations on symbols.

And your conclusion from reading those two sentences is that I don't seem to understand the difference between a calculator and a UTM?



In summary:
RaftPeople says: a calculator is a TM, but it's not a UTM
wolfpup responds: you don't seem to understand that a calculator isn't a UTM
#566 | 07-22-2019, 04:41 PM | begbert2
I'm not sure how much value there is left to squeeze out of this thread, but I do feel it's worth noting that by my understanding of the definition of a Turing machine, a rock ain't one. Among other things, a rock lacks the infinite data tape that a proper Turing machine includes as part of its makeup.

There is a difference between "something that implements one or more functions" and "a Turing machine". The latter is a specific thing.

For the record a calculator is also not a Turing machine. It's actually impossible to make a real Turing machine in the physical world (infinite tapes being hard to come by), and even a finite approximation of a Turing machine requires it to be able to accept a program composed of user-defined commands as an input. (The initial state of its data tape being the other input.) So a standard ten-key calculator is not a Turing machine. A programmable calculator may include a Turing-equivalent processor, though.
#567 | 07-22-2019, 07:07 PM | RaftPeople
Quote:
Originally Posted by begbert2 View Post
I'm not sure how much value there is left to squeeze out of this thread, but I do feel it's worth noting that by my understanding of the definition of a Turing machine, a rock ain't one. Among other things, a rock lacks the infinite data tape that a proper Turing machine includes as part of its makeup.

There is a difference between "something that implements one or more functions" and "a Turing machine". The latter is a specific thing.

For the record a calculator is also not a Turing machine. It's actually impossible to make a real Turing machine in the physical world (infinite tapes being hard to come by), and even a finite approximation of a Turing machine requires it to be able to accept a program composed of user-defined commands as an input. (The initial state of its data tape being the other input.) So a standard ten-key calculator is not a Turing machine. A programmable calculator may include a Turing-equivalent processor, though.
A Turing machine is just a model to reason about computation. It's just a theoretical machine that performs syntactic operations on symbols.

A universal Turing machine has the capability to simulate all other Turing machines, which means it can compute every computable function.

A Turing machine that isn't a universal Turing machine can't compute all computable functions; it can only compute the functions it was designed to compute (like a calculator).



Summary:
There is a distinction between a Turing machine and a universal Turing machine.
#568 | 07-22-2019, 07:20 PM | RaftPeople
If you read this page you will see some very simple examples of Turing machines that Turing created: simple single-purpose machines, not universal machines.

https://en.wikipedia.org/wiki/Turing..._first_example
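For anyone who wants to see it concretely, here's a rough Python transcription of the first machine on that page as I read it (the four m-configurations b, c, e, f that print the alternating sequence 0 _ 1 _ 0 _ 1 ... on a blank tape). The little run_tm interpreter is my own scaffolding for the example, not anything from the page.

Code:
from collections import defaultdict

def run_tm(table, steps, start_state='b', blank=None):
    """Run a single-tape Turing machine.
    table maps (state, scanned symbol) -> (symbol to write, move, next state)."""
    tape = defaultdict(lambda: blank)
    head, state = 0, start_state
    for _ in range(steps):
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return [tape[i] for i in range(min(tape), max(tape) + 1)]

# Turing's first example (as I read the linked page): a fixed-purpose machine.
first_example = {
    ('b', None): ('0', 'R', 'c'),
    ('c', None): (None, 'R', 'e'),
    ('e', None): ('1', 'R', 'f'),
    ('f', None): (None, 'R', 'b'),
}

print(run_tm(first_example, steps=8))   # ['0', None, '1', None, '0', None, '1', None]

Note that first_example by itself is the fixed, single-purpose machine, while run_tm, because it takes any machine table as data, is playing the role of a (crude) universal machine. That's exactly the TM vs UTM distinction.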
#569 | 07-22-2019, 07:45 PM | wolfpup
Quote:
Originally Posted by begbert2 View Post
I'm not sure how much value there is left to squeeze out of this thread, but I do feel it's worth noting that by my understanding of the definition of a Turing machine, a rock ain't one. Among other things, a rock lacks the infinite data tape that a proper Turing machine includes as part of its makeup.

There is a difference between "something that implements one or more functions" and "a Turing machine". The latter is a specific thing.

For the record a calculator is also not a Turing machine. It's actually impossible to make a real Turing machine in the physical world (infinite tapes being hard to come by), and even a finite approximation of a Turing machine requires it to be able to accept a program composed of user-defined commands as an input. (The initial state of its data tape being the other input.) So a standard ten-key calculator is not a Turing machine. A programmable calculator may include a Turing-equivalent processor, though.
Since any computation performed by a calculator can also be performed by a Turing machine, one can infer that the calculator -- or some even more trivially simple calculation device -- is equivalent to that particular Turing machine. Hence my intentionally silly example that the same applies to a rock, which could be described as a Turing machine with only one state, the halt state (though a purist would argue that a Turing machine technically must have at least two states). Of course neither a rock nor a calculator was what Turing was defining with his general abstraction, the power of which is best illustrated by the universal Turing machine, which is nothing more than a Turing machine that interprets both the action table and the I/O tape of any other (fixed-function) Turing machine. Only the UTM is Turing-complete, and this is the model for the instruction sets and programming languages of digital computers, and the processes of cognition according to CTM; this is what we mean by "computation" in those contexts. That's why Raftpeople introducing a simple electronic calculator into this discussion is just stupid.

But I'm sure Raftpeople will be along in a moment to explain to you that you apparently believe that "Alan Turing was wrong about computers, computation and Turing Machines" and you should do more reading because you don't understand anything. Trust me, this argument is now a total waste of time.
#570 | 07-22-2019, 09:03 PM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
...nor a calculator was what Turing was defining with his general abstraction,...
This page includes Turing machines created by Turing.
https://en.wikipedia.org/wiki/Turing..._first_example

After reading that page, would you conclude that Turing thought that only UTMs are Turing machines?


Quote:
...and the processes of cognition according to CTM; this is what we mean by "computation" in those contexts.
So, you're saying that CTM only considers functions computed by a UTM to be a computation. If that same function is computed by a TM, then it's not considered a computation. Correct?

Because there is a distinction between that position and the position that the brain computes some functions with a UTM even if other functions are computed with TMs.
#571 | 07-23-2019, 12:15 AM | Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
Of course neither a rock nor a calculator was what Turing was defining with his general abstraction, the power of which is best illustrated by the universal Turing machine, which is nothing more than a Turing machine that interprets both the action table and the I/O tape of any other (fixed-function) Turing machine. Only the UTM is Turing-complete, and this is the model for the instruction sets and programming languages of digital computers, and the processes of cognition according to CTM; this is what we mean by "computation" in those contexts.
This is too restrictive, though. First of all, no real-world computer is a UTM; every computer that can actually physically be built is equivalent to a finite state machine, and has a solvable halting problem by simply iterating through all possible states. Anchoring computation to that notion just means nothing ever computes.
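To make that concrete, here is a small sketch (the toy machines are invented purely for illustration): for any deterministic system with finitely many states, halting is decidable just by recording the states visited, since a run that doesn't halt must eventually revisit some state.

Code:
def halts(step, initial_state):
    """Decide halting for a deterministic machine with finitely many states.
    step(state) returns the next state, or None once the machine halts."""
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False      # revisited a state, so the machine loops forever
        seen.add(state)
        state = step(state)
    return True               # reached the halting condition

# Two invented 3-bit counter machines over the states 0..7:
print(halts(lambda s: None if s == 7 else s + 1, 0))   # True: counts up, then halts
print(halts(lambda s: (s + 2) % 8, 1))                  # False: 1, 3, 5, 7, 1, ... forever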

But you might hold that it's enough to have an in-principle Turing complete instruction set---a device that could perform arbitrary computations, provided we keep augmenting it with all the storage it might need during its operation. Even here, though, I think that's too strict. For consider two systems, A and B, where A is some special-purpose computer implementing some function, and B is a general-purpose computer set to simulate A. A definition of computation as above would entail that the latter computes, while the former doesn't---even though they're performing a functionally identical task.

Suppose now you fit B with a device that blows it up once it strays from its task of simulating A. Now, you've effectively robbed B of its universal computing capacity---it can only compute the function A computes; if anything else is computed, it's blown up. Is the device now merely a special-purpose computer itself? That would mean that the addition of completely inert parts could fundamentally change the computational character of a system---since as long as B sticks to simulating A, nothing happens, but still, as it now can only execute one task, simulating A, it wouldn't count as a general-purpose computer anymore, and thus, if only such systems computed, would no longer compute.

But then, no characterization of a system would ever be enough to decide whether it actually computes, as there might always be contingencies that rob it of its computational capacities.
#572 | 07-23-2019, 01:25 AM | RaftPeople
An additional point about UTMs and the brain: we only need to find one function among the computable functions that the brain can't compute to show that the brain can't be a UTM. The simplest way to do this is to choose one of the functions that requires more working memory and state than a human brain can keep track of.

Once we show that the brain isn't a UTM, then based on wolfpup's position, no part of the brain is computational because it's all computed by a limited Turing machine.

The way out of this is to not try to claim that only UTMs compute and that CTM assumes a UTM.
#573 | 07-23-2019, 03:23 AM | wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
This is too restrictive, though. First of all, no real-world computer is a UTM; every computer that can actually physically be built is equivalent to a finite state machine, and has a solvable halting problem by simply iterating through all possible states. Anchoring computation to that notion just means nothing ever computes.
Well, duh! That is, of course, a given, and always assumed. There are very few infinities in the real world (with the exception of the car line made by Nissan, and even that is spelled differently!). Even my wondrous new laptop doesn't have infinite capacity!

Quote:
Originally Posted by Half Man Half Wit View Post
But you might hold that it's enough to have an in-principle Turing complete instruction set---a device that could perform arbitrary computations, provided we keep augmenting it with all the storage it might need during its operation.
Yes. And that's a non-trivial point.
Quote:
Originally Posted by Half Man Half Wit View Post
Even here, though, I think that's too strict. For consider two systems, A and B, where A is some special-purpose computer implementing some function, and B is a general-purpose computer set to simulate A. A definition of computation as above would entail that the latter computes, while the former doesn't---even though they're performing a functionally identical task.
You'll have to be much more specific about what you mean by "special-purpose". A graphics card is certainly special-purpose, yet graphics cards have been adapted to other uses to exploit their high-performance capabilities, most notoriously for Bitcoin mining. I would guess -- though I don't know for sure -- that I could adapt a typical car's PCM to play chess, if I added a bit of memory and some new code. Indeed, the idea of an Ethernet-like communications bus is now being pushed for new cars, and modern transport aircraft are already essentially networks of digital computers.

"Special-purpose" typically really means "instruction set optimized for a particular application", but the universality is still there. Generally such devices are computational in the important sense of being finite-state machines executing both arithmetical and non-arithmetical syntactical operations on stored symbols directed by a stored program. As a kid I used to write assembly-language programs for the venerable PDP-8, which had a three-bit opcode and hence nominally just 8 instructions, so I am more than intimately familiar with the concept of "there's no instruction for that" but nevertheless that the concept of Turing-complete meant that you could always create a subroutine to implement any imaginable operation. Always.
#574 | 07-23-2019, 05:06 AM | Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
"Special-purpose" typically really means "instruction set optimized for a particular application", but the universality is still there.
No. Special purpose generally means a computer that's designed to perform a specific function, i. e. explicitly not a general-purpose computer.
#575 | 07-23-2019, 07:08 AM | wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
No. Special purpose generally means a computer that's designed to perform a specific function, i. e. explicitly not a general-purpose computer.
That's a non-answer. What does that even mean? You speak of some arbitrary "special purpose computer" without any definition of what it is. Is its instruction set Turing complete or is it not?

And I have to re-emphasize my previous response to "But you might hold that it's enough to have an in-principle Turing complete instruction set". That is, indeed, as I said, a non-trivial and very significant point. The instruction set (ignoring addressing limitations) defines the machine architecture and is thus what characterizes it as Turing complete, whereas it's only the physical machine itself that has physical limitations.
#576 | 07-23-2019, 09:34 AM | Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
That's a non-answer. What does that even mean? You speak of some arbitrary "special purpose computer" without any definition of what it is. Is its instruction set Turing complete or is it not?
Special-purpose = doesn't compute everything = non-Turing complete
General-purpose = computes arbitrary functions = Turing-complete

I don't think I can make it any more explicit than that.

Quote:
And I have to re-emphasize my previous response to "But you might hold that it's enough to have an in-principle Turing complete instruction set". That is, indeed, as I said, a non-trivial and very significant point. The instruction set (ignoring addressing limitations) defines the machine architecture and is thus what characterizes it as Turing complete, whereas it's only the physical machine itself that has physical limitations.
The idea that the instruction set should be Turing complete for something to be called a 'computation' or a 'proper computer' is, at least, quite odd. SQL's instruction set isn't Turing complete (without recursive CTEs), but one would typically consider a computer performing SQL queries to compute. Same goes for HTML.

And of course the idea incurs the counterfactual explosion problem I've described above. Think about something like Babbage's Difference Engine. If it's the case that only universal devices compute, it doesn't compute, since it's not universal. But one might envision an extension of it, the Difference Engine++, such that it becomes able to perform universal computations. Such an extension might just come in the form of additional gears and wheels that are only used on computations that the bare Difference Engine could not perform.

Nevertheless, since the Difference Engine++ is now computationally universal, it's either the case that it now computes in the proper sense---even if it only performs the very same operations it did when it was merely the Difference Engine (that is, if the ++ part never is used). Or, it only computes when the additional machinery is used---but then, you have the curious issue that only some operations of a universal machine are computations, and in general, you won't be able to tell which ones.

And of course, it's just in conflict with how the term 'computation' is generally used. Finite state machines, for example, are generally considered a model of computation, but trivially can never be computationally universal.
#577 | 07-23-2019, 10:49 AM | wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
The idea that the instruction set should be Turing complete for something to be called a 'computation' or a 'proper computer' is, at least, quite odd. SQL's instruction set isn't Turing complete (without recursive CTEs), but one would typically consider a computer performing SQL queries to compute. Same goes for HTML.
Of course one would consider it computation, but your logic here is bewildering. SQL and HTML are layered on -- and rely on -- Turing-complete computational infrastructures without which they would be useless. There's no such thing as an "HTML machine", for example; there are web servers and browsers that are all written in Turing-complete programming languages that communicate via a standardized markup language. The execution of SQL statements involves no end of computational primitives like test-and-branch instructions.

I have yet to find an example of "computation" in the sense in which it's meant in CTM (as set out in the Stanford Encyclopedia of Philosophy, for example) that is different from what I just mentioned upthread: namely, that in this context it's defined "in the important sense of being finite-state machines executing both arithmetical and non-arithmetical syntactical operations on stored symbols directed by a stored program".
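As a purely illustrative sketch of that sense of "computation" (the instruction names and the toy program below are invented for the example, not any real instruction set): the program is itself just stored symbols, and execution is a fetch-decode-execute loop over arithmetical operations and a non-arithmetical test-and-branch.

Code:
def run(program, registers):
    """A toy stored-program machine: fetch, decode, execute."""
    pc = 0
    while True:
        op, *args = program[pc]
        pc += 1
        if op == 'add':                     # arithmetical operation on stored symbols
            registers[args[0]] += registers[args[1]]
        elif op == 'dec':
            registers[args[0]] -= 1
        elif op == 'jnz':                   # non-arithmetical test-and-branch
            if registers[args[0]] != 0:
                pc = args[1]
        elif op == 'halt':
            return registers

# The algorithm (multiply r0 by r1 into r2 by repeated addition) is itself data.
program = [
    ('add', 'r2', 'r0'),   # 0: r2 += r0
    ('dec', 'r1'),         # 1: r1 -= 1
    ('jnz', 'r1', 0),      # 2: loop back to 0 while r1 != 0
    ('halt',),             # 3
]
print(run(program, {'r0': 6, 'r1': 7, 'r2': 0}))   # r2 ends up as 42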
#578 | 07-23-2019, 11:29 AM | begbert2
Quote:
Originally Posted by RaftPeople View Post
A Turing machine is just a model to reason about computation. It's just a theoretical machine that performs syntactic operations on symbols.

A universal Turing machine has the capability to simulate all other Turing machines, which means it can compute every computable function.

A Turing machine that isn't a universal Turing machine can't compute all computable functions; it can only compute the functions it was designed to compute (like a calculator).



Summary:
There is a distinction between a Turing machine and a universal Turing machine.
Ah, yes, you're right. I had a gap in my memory. Universal Turing machines have two inputs (the program and the tape), and concrete Turing machines have one input (the tape). Thank you for the correction.

Of course this also means that a rock isn't a Turing machine - unless you stretch the concepts of "data tape" and "state machine" so far as to include all arrangements of matter and all physical interactions (respectively), which would probably be problematic for persons claiming that you can have physical objects that aren't doing computations.

I now return you to your regularly scheduled discussion.
#579 | 07-23-2019, 12:25 PM | RaftPeople
Quote:
Originally Posted by wolfpup View Post
"in the important sense of being finite-state machines executing both arithmetical and non-arithmetical syntactical operations on stored symbols directed by a stored program".
This is how a calculator operates, so why doesn't a calculator compute?

Is it because the program that is loaded into a calculator and executed by its processor limits the set of functions that can be computed?

If I begin executing a program on my PC that takes complete control of the PC so the OS is no longer part of the picture, and this program is just a calculator program, did I just turn my computer into a system that no longer performs computations?
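To pin down what I mean, here's a toy sketch of a fixed-function calculator (a made-up postfix evaluator, not any real device's firmware): it performs purely syntactic operations on digit and operator tokens, but nothing you feed it can give it a new algorithm to run, which is the sense in which it's a non-universal Turing machine rather than a UTM.

Code:
# A toy fixed-function "calculator" over postfix expressions.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def calculate(expression):
    stack = []
    for token in expression.split():
        if token in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[token](a, b))   # fixed rule wired into the machine
        else:
            stack.append(int(token))         # syntactic rule: digit string -> number
    return stack.pop()

print(calculate('3 4 + 5 *'))   # 35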
#580 | 07-23-2019, 12:31 PM | RaftPeople
Quote:
Originally Posted by begbert2 View Post
Ah, yes, you're right. I had a gap in my memory. Universal Turing machines have two inputs (the program and the tape), and concrete Turing machines have one input (the tape). Thank you for the correction.

Of course this also means that a rock isn't a Turing machine - unless you stretch the concepts of "data tape" and "state machine" so far as to include all arrangements of matter and all physical interactions (respectively), which would probably be problematic for persons claiming that you can have physical objects that aren't doing computations.

I now return you to your regularly scheduled discussion.
I agree that considering a rock as a computer seems to miss an important element, but like most of this stuff, drawing a clean boundary is tricky.

Either way, I think wolfpup's position is not a strong one because it eliminates all of the specially built circuits that perform computations. Let's pretend that the brain has circuits specifically built for performing numeric operations, so it can compute a large number of functions, but it's still limited in what it can compute. wolfpup discards these as non-computational, but there doesn't seem to be any value in doing that.
#581 | 07-23-2019, 01:20 PM | begbert2
Quote:
Originally Posted by RaftPeople View Post
I agree that considering a rock as a computer seems to miss an important element, but like most of this stuff, drawing a clean boundary is tricky.

Either way, I think wolfpup's position is not a strong one because it eliminates all of the specially built circuits that perform computations. Let's pretend that the brain has circuits specifically built for performing numeric operations, so it can compute a large number of functions, but it's still limited in what it can compute. wolfpup discards these as non-computational, but there doesn't seem to be any value in doing that.
It occurs to me to question whether it's useful at all to pursue an argument that asserts that purpose-built components (like math coprocessors) don't do "computation". Because the question isn't really whether cognition can be implemented by computation; it's whether cognition can be implemented by computers. And computers can absolutely include purpose-built components.

(I'm also highly dubious about claims that purpose-built components could exist that cannot be wholly emulated by a computer simulation, though I suppose lip service should be paid to devices that siphon randomness off of decaying atoms.)
#582 | 07-23-2019, 02:20 PM | RaftPeople
Quote:
Originally Posted by begbert2 View Post
It occurs to me to question whether it's useful at all to pursue an argument that asserts that purpose-built components (like math coprocessors) don't do "computation". Because the question isn't really whether cognition can be implemented by computation; it's whether cognition can be implemented by computers. And computers can absolutely include purpose-built components.
Exactly, there seems to be no value in that odd position, which is probably why I can't find any support for that kind of thinking. I've been googling, but every philosopher, neuroscientist and computer scientist that I find seems to not discard computations just because they were performed by a limited Turing machine.

There is certainly value in being clear that CTM also includes the type of general purpose computation in which a learned algorithm can be applied to arbitrary sets of symbolic input, but that position doesn't require discarding the computations performed by more limited machinery.
  #583  
Old 07-24-2019, 12:04 AM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by begbert2 View Post
It occurs to me to question whether it's useful at all to pursue an argument that asserts that purpose-built components (like math coprocessors) don't do "computation". Because the question isn't really whether cognition can be implemented by computation; it's whether cognition can be implemented by computers. And computers can absolutely include purpose-built components.
First of all, whose question are we talking about here? The question that started this ridiculous digression was what was meant by the word "computational" in "computational theory of mind", and my point was that what is meant is computing in the sense of a stored-program digital computer, and hence the references to the Turing model when these theories are described. To the extent that CTM theories apply -- and no one claims that they apply to everything about the mind -- the principle of multiple instantiation tells us that such cognitive processes are indeed reproducible on digital computers.

I have not claimed that coprocessors don't do "computation" in certain meaningful senses of the word, but one would correctly conclude from my comments that most such coprocessors are not Turing complete. I mentioned earlier that the humble PDP-8 with its eight discrete instructions was, like any general-purpose computer, Turing complete (in the restricted sense of having finite memory, so more precisely a linear bounded automaton). But a coprocessor for it like the Extended Arithmetic Element (EAE) which added multiply and divide instructions was not, though it certainly performed calculations.

But if you had the budget to get a floating point processor (FPP) for your PDP-8, you were dealing with something quite different. The FPP not only added floating point instructions, it also sought to overcome the limitations of the basic PDP-8 instruction set by adding a whole host of new general-purpose instructions in a new double-word format, with greatly expanded directly addressable address space, index registers, and a variety of double-word test and branch instructions. It was sufficiently complete that one could write any arbitrary program in the FPP instruction set alone. Indeed, this was the reason that when a full-fledged implementation of FORTRAN IV became available for the PDP-8 (a language that itself is of course Turing complete), the FPP was a prerequisite: the entire output of the compiler was in the FPP instruction set. (That prerequisite was later dropped, but only because someone wrote an FPP interpreter -- which incidentally is a fine illustration of the fact that the humble PDP-8 with its eight instructions was Turing complete.)

Thus, the FPP coprocessor was itself Turing complete -- essentially a co-equal parallel processor -- but the EAE very clearly was not, even though one could argue that "it performs syntactic operations on symbols". The FPP was a computer in its own right. The EAE, with its implementation of MUY and DIV instructions, was a kind of calculator add-on. When the FPP was in control, the program counter reflected the flow and branching of the FPP double-word instruction set. The EAE was just a dumb calculator effectively bolted on to the PDP-8 to make certain calculations faster.
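To make the contrast concrete, here's a toy sketch in Python (purely my own illustration; the function names are mine and nothing here resembles actual PDP-8, EAE, or FPP code). The first function is a fixed-function arithmetic unit in the spirit of the EAE; the second is a minimal stored-program interpreter in the spirit of the FPP, whose conditional branch is what lets it run arbitrary stored programs within its memory limits.
Code:
# Toy illustration only; nothing here resembles real PDP-8 hardware or code.

def eae_style(op, a, b):
    """A fixed-function arithmetic unit: it calculates, but there is no way
    to program it to do anything beyond the operations wired in."""
    if op == "MUL":
        return a * b
    if op == "DIV":
        return a // b
    raise ValueError("unsupported operation")

def fpp_style(program, memory, max_steps=10_000):
    """A minimal stored-program interpreter (LOAD/ADD/STORE/JZ/HALT).
    The conditional jump (JZ) gives it general control flow, which the
    fixed-function unit above simply does not have."""
    acc, pc = 0, 0
    for _ in range(max_steps):
        op, arg = program[pc]
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JZ":              # jump to instruction `arg` if accumulator is zero
            if acc == 0:
                pc = arg
                continue
        elif op == "HALT":
            return memory
        pc += 1
    return memory

# Example: copy memory[0] into memory[1].
# fpp_style([("LOAD", 0), ("STORE", 1), ("HALT", None)], {0: 42, 1: 0})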

I trust this clarifies the distinction I was trying to make.

Last edited by wolfpup; 07-24-2019 at 12:07 AM.
  #584  
Old 07-24-2019, 12:23 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,863
Quote:
Originally Posted by wolfpup View Post
First of all, whose question are we talking about here? The question that started this ridiculous digression was what was meant by the word "computational" in "computational theory of mind", and my point was that what is meant is computing in the sense of a stored-program digital computer, and hence the references to the Turing model when these theories are described. To the extent that CTM theories apply -- and no one claims that they apply to everything about the mind -- the principle of multiple instantiation tells us that such cognitive processes are indeed reproducible on digital computers.

I have not claimed that coprocessors don't do "computation" in certain meaningful senses of the word, but one would correctly conclude from my comments that most such coprocessors are not Turing complete.

<snippity snip snip>

I trust this clarifies the distinction I was trying to make.
That was interesting technical/historical stuff, in that section I snipped out for brevity!

It's definitely the case that this ridiculous digression was based on the difficulty in defining "computation". Heck, most of the last 500 posts in this thread are due to the difficulty in defining "computation". In my previous sojourn here three weeks ago, the impression I was getting was that (philosophically speaking) the term has nothing at all to do with what the object in question is doing, but rather is entirely based on whether some outside observer chooses to interpret the observable outcomes of the object as computational or not! Which is, of course, just silly.

Here's the thing about Turing machines and Turing completeness: You don't have to be a Turing machine to be Turing complete. And in fact computers aren't Turing machines; a Turing machine has a read/write head that runs back and forth on a tape, reading and doing stuff to the spot its head is looking at. Computers aren't built like that - and neither are brains. Nothing is built like that. There are no Turing machines in use in reality that I'm aware of.
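Just to pin down what I mean by a "concrete" Turing machine, here's a minimal sketch (Python, my own toy example, nothing official about it): one fixed transition table, one tape, one head, and that's the whole machine.
Code:
# A concrete, non-universal Turing machine: its one-rule-per-situation table
# is baked in, so all it can ever do is flip the bits on its tape and halt.
def run_tm(tape, table, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in table:   # no rule for this situation: halt
            break
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[pos] for pos in sorted(cells))

# Transition table: (state, symbol read) -> (symbol to write, move, next state)
FLIP_BITS = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_tm("0110", FLIP_BITS))           # prints "1001"
The interesting wrinkle is that run_tm, which takes the table as data, is doing the "universal" work here; FLIP_BITS by itself is the concrete, non-universal machine.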

This puts non-universal Turing machines in a weird position.

When we realize that we're not talking about literal Turing machines, but rather stuff that just can do the things Turing machines do, then applying the same transformation to non-universal Turing machines means that you're now just talking about anything at all that can replicate the behavior of some specific concrete Turing machine. Any specific concrete Turing machine. And there are Turing machines that do nothing at all. So that means that a rock becomes a contender for "something that can emulate a specific concrete Turing machine".

If the definition of "computation" is "has effects comparable to some single non-universal Turing machine", then damn near everything is doing computation - once you take the tape and reader head away, a Turing machine just becomes something that has states, alters its state based on external forces in a deterministic way, and can affect other things depending on its state in a deterministic way. If you consider "current position" to be part of a thing's state, then pretty much everything can be described as interacting with the world in a way that depends entirely on its state, and thus pretty much everything would be computational. About the only exception would be things that are truly random - these things would be 'outputting' based on something that couldn't be part of their state (since if it could, that would be deterministic behavior by definition). Note: I don't believe in true randomity.

For the record, I'm not sure that this is actually a bad way to define "computation" as it relates to the computational theory of the mind. A computational theory of everything would certainly include minds too! And this is actually the position that the classic "we can simulate minds by simulating all of reality in excruciating detail" argument is taking: that the laws of physics determine the behavior of reality when the various parts of reality are interacting with each other in their various possible states, and thus it's computational in a way such that all its effects can be emulated by a sufficiently complicated program running on a Turing complete system.

At least a few of the people disagreeing with CTM appear to be directly arguing against this approach - they're saying that the brain does something non-computational which computers can't replicate. Presumably they're using a definition of "computation" that's less inclusive than "interacts with stuff deterministically", but I don't know what that definition would be.
  #585  
Old 07-24-2019, 06:14 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by begbert2 View Post
Here's the thing about Turing machines and Turing completeness: You don't have to be a Turing machine to be Turing complete. And in fact computers aren't Turing machines; a Turing machine has a read/write head that runs back and forth on a tape, reading and doing stuff to the spot its head is looking at. Computers aren't built like that - and neither are brains. Nothing is built like that. There are no Turing machines in use in reality that I'm aware of.
Well of course a digital computer doesn't literally operate like a Turing machine, but that isn't really the point. The key idea here, to get down to the basic fundamentals, is that the Turing machine was conceived as an abstraction that has the following key property: it can compute anything that is computable. That's it. Everything else follows from that, including the conceptualization of the universal Turing machine.

One can get into some pretty silly quandaries, however, if one assumes the converse -- that anything that a Turing machine does must be regarded as a computation, because a Turing machine can be defined to do nothing at all, or to just read the first symbol on the tape and halt. Thus Raftpeople's assertion that "a calculator is a Turing machine" is in the category of "not even wrong", it's just simply meaningless, and trying to justify it by saying that a calculator "performs syntactic operations on symbols" is just incoherent nonsense.

The important principle here is that it can be shown that certain devices, like a general-purpose digital computer, can perform exactly the same symbolic operations as a Turing machine, and so we can conclude that, within the limits of time and memory capacity, such a device is a restricted form of Turing machine that can also (within those limits) compute anything that is computable. This is a profound observation with foundational implications for both the entire field of computer science and for much of cognitive science. It ultimately has implications about the fundamental nature of intelligence and the ability to instantiate it on different physical substrates. The Turing model allows us to establish that there is a class of such equivalent devices, which includes digital computers and, according to CTM, the cognitive functions of the human mind. The common property of such Turing-equivalent devices is what I mean by "computational" in the context of this discussion. It should be clear by now why it does not include devices like calculators, or a random collection of logic gates, or the EAE add-on to the PDP-8 that I was reminiscing about.
  #586  
Old 07-24-2019, 06:37 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,863
Quote:
Originally Posted by wolfpup View Post
Well of course a digital computer doesn't literally operate like a Turing machine, but that isn't really the point. The key idea here, to get down to the basic fundamentals, is that the Turing machine was conceived as an abstraction that has the following key property: it can compute anything that is computable. That's it. Everything else follows from that, including the conceptualization of the universal Turing machine.

One can get into some pretty silly quandaries, however, if one assumes the converse -- that anything that a Turing machine does must be regarded as a computation, because a Turing machine can be defined to do nothing at all, or to just read the first symbol on the tape and halt. Thus Raftpeople's assertion that "a calculator is a Turing machine" is in the category of "not even wrong", it's just simply meaningless, and trying to justify it by saying that a calculator "performs syntactic operations on symbols" is just incoherent nonsense.

The important principle here is that it can be shown that certain devices, like a general-purpose digital computer, can perform exactly the same symbolic operations as a Turing machine, and so we can conclude that, within the limits of time and memory capacity, such a device is a restricted form of Turing machine that can also (within those limits) compute anything that is computable. This is a profound observation with foundational implications for both the entire field of computer science and for much of cognitive science. It ultimately has implications about the fundamental nature of intelligence and the ability to instantiate it on different physical substrates. The Turing model allows us to establish that there is a class of such equivalent devices, which includes digital computers and, according to CTM, the cognitive functions of the human mind. The common property of such Turing-equivalent devices is what I mean by "computational" in the context of this discussion. It should be clear by now why it does not include devices like calculators, or a random collection of logic gates, or the EAE add-on to the PDP-8 that I was reminiscing about.
So in your eyes "computational" is synonymous with "Turing complete", essentially?

It occurs to me that even if the brain is not computational (Turing complete), that fact is not evidence that the brain can't be wholly emulated on a computer. Ten-key calculators can be wholly emulated on computers, after all.
  #587  
Old 07-24-2019, 09:38 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,685
Quote:
Originally Posted by wolfpup View Post
Thus Raftpeople's assertion that "a calculator is a Turing machine" is in the category of "not even wrong", it's just simply meaningless,
Then maybe you can clarify why a PC computes but a calculator doesn't compute.

They both have general-purpose processors and memory, and both are running programs. The calculator happens to be running a program that was loaded into ROM.

Is the issue that it is executing a program that is in ROM?

If a PC was running a calculator program (only, no OS), would the PC stop being computational?
  #588  
Old 07-25-2019, 12:36 AM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
You know, when a sentence begins with "Thus ..." it's generally a clue that it's the conclusion of an explanation that precedes it. I find it hard to believe that you genuinely don't understand this point after it was so clearly explained in that post. I suggest that you go back and read it carefully this time, and also read the last paragraph again.
  #589  
Old 07-25-2019, 12:49 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Wolfpup, any computation performed by a real brain can be performed by an appropriate non-Turing complete FSM, simply due to the fact that the brain's lifetime is finite. Would there be any cognitive difference between an entity governed by the (presumably Turing-complete) brain and the entity governed by the FSM?

Last edited by Half Man Half Wit; 07-25-2019 at 12:49 AM.
  #590  
Old 07-25-2019, 01:52 AM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by Half Man Half Wit View Post
Wolfpup, any computation performed by a real brain can be performed by an appropriate non-Turing complete FSM, simply due to the fact that the brain's lifetime is finite. Would there be any cognitive difference between an entity governed by the (presumably Turing-complete) brain and the entity governed by the FSM?
My immediate understanding of this rather perplexing question prompts the answer, no, but so what? I note that you loosely throw around terms like "an appropriate non-Turing complete FSM", which, like the concept of an arbitrary Turing machine, can be arbitrarily trivial. This is the same fallacy that RaftPeople went off on with his calculator example. What we generally mean by Turing equivalence in the real world is not just some "appropriate" FSM, but the abstraction of a linear bounded automaton. I think I laid out my basic thesis pretty clearly in post #585.

At this point I may as well directly address the challenge that RaftPeople posed, since he's probably just going to come back with more attempted gotcha questions. "If a PC was running a calculator program (only, no OS), would the PC stop being computational?" How about a computer programmed so that the only thing it does is respond to any input by halting? IOW, it does nothing. Does that device "compute"?

This of course completely misses the whole point (again, #585). This is the same fallacy as the calculator example earlier, and he will continue to misunderstand this issue as long as he thinks of Turing-equivalent computationalism in terms of being "a thing that a device is doing" instead of what it really is: a generalized capability that a device has, namely the capability to perform any computation that it's possible to specify. This was Turing's insight, and the one that's been adapted into the model of the cognitive mind implicit in CTM.
  #591  
Old 07-25-2019, 05:27 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Quote:
Originally Posted by wolfpup View Post
My immediate understanding of this rather perplexing question prompts the answer, no, but so what? I note that you loosely throw around terms like "an appropriate non-Turing complete FSM", which, like the concept of an arbitrary Turing machine, can be arbitrarily trivial.
The point is that during its finite lifetime, a brain, even though it is 'in principle' universal in the sense that it could carry out arbitrary computations if equipped with sufficient resources, can only actually implement a limited subset of computations. There then exists a non-Turing universal system that can only implement those functions. Replacing a brain with that system then will yield a functional and behavioral duplicate of the original entity.

Now, there are two options: either, the system is also a cognitive duplicate---will have the same thoughts, beliefs, and the like. Then, the requirement of computational universality you seem to want to impose is just a red herring.

Or, the system won't be a cognitive duplicate. Then, you'll have the odd situation that there may be systems that talk, act, and behave like they are cognitively human-like creatures, but won't be---a kind of zombie problem.

I think most---including you---would reject the second horn of this dilemma. But then, that's where our puzzlement at your requirement of universality for 'properly computing' systems stems from.

Furthermore, the sort of distinction you're drawing is just profoundly odd from a computational standpoint. If cognition is akin to some computation, then whether that computation is performed on a universal or a special-purpose computer should not have any influence on whether it's cognition properly so-called---not anymore than the same calculation performed on a universal system versus a simple calculator are in any way different sorts of things.

Last edited by Half Man Half Wit; 07-25-2019 at 05:28 AM.
  #592  
Old 07-25-2019, 10:22 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,685
Quote:
Originally Posted by wolfpup View Post
This is the same fallacy as the calculator example earlier, and he will continue to misunderstand this issue as long as he thinks of Turing-equivalent computationalism in terms of being "a thing that a device is doing" instead of what it really is: a generalized capability that a device has, namely the capability to perform any computation that it's possible to specify.
The calculator has the same underlying capabilities as a personal computer but with less memory.

So help me understand why a calculator doesn't compute but a PC does compute.

Is it because we loaded the program into ROM?

Maybe you just didn't realize that calculators have Turing-complete processors and are programmed with languages like C. If so, just state that and let's move on. If not, please explain, because I really don't understand why computer A doesn't compute but computer B does compute.



Although the issue that HMHW describes is the original and primary issue, this issue about a calculator not computing even though it's the same as a computer is a valid point, because you seem to be stating that even a Turing-complete machine loses its ability to compute under specific conditions.

The follow up would be to make sure the brain doesn't hit the same types of conditions. For example, if the issue is that the program is loaded from ROM and the calculator can't escape that programming, how do we know that the brain doesn't do the same thing? When you've learned an algorithm and have used it for decades and then someone tries to get you to do it a different way but you can't at first, are you not computing when that condition arises?
  #593  
Old 07-25-2019, 04:33 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by Half Man Half Wit View Post
The point is that during its finite lifetime, a brain, even though it is 'in principle' universal in the sense that it could carry out arbitrary computations if equipped with sufficient resources, can only actually implement a limited subset of computations. There then exists a non-Turing universal system that can only implement those functions. Replacing a brain with that system then will yield a functional and behavioral duplicate of the original entity.

Now, there are two options: either, the system is also a cognitive duplicate---will have the same thoughts, beliefs, and the like. Then, the requirement of computational universality you seem to want to impose is just a red herring.

Or, the system won't be a cognitive duplicate. Then, you'll have the odd situation that there may be systems that talk, act, and behave like they are cognitively human-like creatures, but won't be---a kind of zombie problem.

I think most---including you---would reject the second horn of this dilemma. But then, that's where our puzzlement at your requirement of universality for 'properly computing' systems stems from.

Furthermore, the sort of distinction you're drawing is just profoundly odd from a computational standpoint. If cognition is akin to some computation, then whether that computation is performed on a universal or a special-purpose computer should not have any influence on whether it's cognition properly so-called---not anymore than the same calculation performed on a universal system versus a simple calculator are in any way different sorts of things.
The problem with that reasoning is that "non-Turing universal" (non-Turing complete) doesn't really define anything because, as we have seen, it can be arbitrarily trivial. I agree with you that option #2 would not be a cognitive duplicate, since some special-purpose system designed to mimic particular behaviors would likely, among other things, fail to evolve in response to new stimuli as a human would. But I think your logic is flawed with regard to the first option, because you jump from "can only actually implement a limited subset of computations" to "there then exists a non-Turing universal system that can only implement those functions" which is many steps too far. The limitations of a physical system do not reduce it to some arbitrary non-Turing complete status; rather, they reduce it to the very specific status of a restricted Turing machine like a real computer, a linear bounded automaton, basically equivalent to a UTM with a bounded tape. The former is essentially undefined, while the latter defines a stored-program digital computer, and this is necessarily the model for the "computational" element of CTM. The two things are very substantially different.

Earlier I mentioned the Turing-complete PDP-8 computer (again, technically a restricted Turing machine) with just 8 instructions. I think one could get that down to just 4 or 5 instructions and still retain all the necessary prerequisites for Turing completeness. (Turing completeness is actually a pretty low bar; Wolfram showed that a Turing machine with just 2 states and 5 symbols could be universal, and controversially, so could one with 2 states and just 3 symbols.) The interesting question with respect to the general problem of cognition, or intelligence if you will, is what happens if one takes it down further, so that Turing completeness is lost. I would posit -- and this is only my conjecture -- that no matter how many interesting advanced instructions you added, lack of Turing completeness would preclude the kind of analytical and decision-making power that we associate with true intelligence. It would certainly remove it from equivalence with all machines we know of at present that (at least arguably) exhibit such intelligence.
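As a concrete illustration of just how low that bar is, here is a sketch of a one-instruction machine, SUBLEQ ("subtract and branch if less than or equal to zero"), which is known to be Turing complete given unbounded memory. The Python below is my own toy rendering, not anyone's canonical implementation.
Code:
# SUBLEQ: the entire instruction set is one instruction. Each instruction is
# three addresses (a, b, c): mem[b] -= mem[a]; if the result is <= 0, jump to
# c, otherwise fall through to the next instruction. A negative jump target
# halts the machine. Toy sketch only.
def subleq(mem, pc=0, max_steps=100_000):
    for _ in range(max_steps):
        if pc < 0:
            break
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Example program: add the values at addresses 9 and 10 (7 + 5), using
# address 11 as a scratch cell that starts at zero.
prog = [9, 11, 3,   11, 10, 6,   11, 11, -1,   7, 5, 0]
print(subleq(prog)[10])   # prints 12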
Quote:
Originally Posted by RaftPeople View Post
Maybe you just didn't realize that calculators have Turing-complete processors and are programmed with languages like C. If so, just state that and let's move on. If not, please explain, because I really don't understand why computer A doesn't compute but computer B does compute.
And you think this is for some reason relevant, why? I probably have embedded processors in half my kitchen appliances. There may be one in my doorbell, for all I know. Whether a manufacturer chooses to build a calculator out of an embedded microprocessor instead of discrete logic gates or mechanical gears, or maybe springs and elastics, is totally immaterial to the discussion. What matters is the functionality that I have access to, regardless of how it's implemented. You still don't seem to have grasped what this conversation is about, and frankly I find your condescending attitude less than conducive to a productive conversation, so I won't be responding any further.
  #594  
Old 07-25-2019, 05:06 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,863
Quote:
Originally Posted by wolfpup View Post
And you think this is for some reason relevant, why? I probably have embedded processors in half my kitchen appliances. There may be one in my doorbell, for all I know. Whether a manufacturer chooses to build a calculator out of an embedded microprocessor instead of discrete logic gates or mechanical gears, or maybe springs and elastics, is totally immaterial to the discussion. What matters is the functionality that I have access to, regardless of how it's implemented.
Wait just a tick - whether it's computing or not depends on your access? Are you saying that if I have a computer running that you're not currently typing a new computer program into, then it's computing, but if I lock the keyboard in a case and thus make you unable to alter the program, then it no longer is? Because that's sure what it sounds like you're saying.

Which of these is computing?

1) A calculator built in a non-Turing-complete way. Inputs wired directly into the logic and from there to the outputs, with no possible way to use the inputs for anything else. (Until you turn the calculator over, turn 58008 (f) into BOOBS (f'), and the world explodes.)

2) A full-fledged Windows 10 PC that somebody is running Calculator on, and choosing not to interact with any other part of the machine or desktop applications other than the calculator app.

3) A full-fledged Windows 10 PC that somebody has jiggered to ONLY run the Calculator app, ignoring all other inputs and clicks anywhere else.

4) A full-fledged Windows 10 PC that somebody has rigged to only run Calculator, and which has been altered to only listen to the numeric keypad for input and which only outputs to a small, LCD-like pane on the screen.

5) A handheld calculator that is running a full-fledged copy of Windows 10, which only runs an altered version of Calculator that takes its input from the calculator's keypad and which only outputs to the calculator's LCD screen.


Which of these, by number, are doing computation in your opinion, and which are not?
  #595  
Old 07-25-2019, 06:28 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by begbert2 View Post
Wait just a tick - whether it's computing or not depends on your access?
Yes. More precisely, whether or not a device is Turing complete depends -- very obviously -- on whether one can use it as such to perform any arbitrary computation. If a system has some internal component that might intrinsically be Turing complete but which I cannot utilize in that fashion, because it's locked into some fixed function, then the system is not Turing complete. Indeed, a Turing machine with a specific tape and fixed action table isn't Turing complete either.

Just like some doorbell that might ring different tones at different times of day due to an embedded microprocessor is, in fact, just a doorbell and not a UTM. All the embedded microprocessors in various devices may indeed be Turing complete by my earlier description of that being an intrinsic property of the chip, but unless that property is exposed to a usable interface, it may as well not exist. It tells us nothing about the properties of the system it's embedded in. One cannot conclude that an appliance is Turing complete just because it has a microprocessor in it, when that microprocessor may do nothing more than run a timer and flip a relay.

Quote:
Originally Posted by begbert2 View Post
Which of these is computing?
Remember that we're not talking here about "computing" in the colloquial sense, but in the Turing-complete sense. Adhering strictly to the guidelines that you set out, all of those are calculators, because that's all they do. So you appear to have already answered your own question:
Quote:
Originally Posted by begbert2 View Post
For the record a calculator is also not a Turing machine.
  #596  
Old 07-25-2019, 07:09 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,863
Reordering slightly:
Quote:
Originally Posted by wolfpup View Post
So you appear to have already answered your own question:
Whoa there - I had assumed that we were specifically discussing a hard-wired, not-and-never-and-no-part-of-it-is-turing-complete device. A simplistic ten-key specifically designed not to include a turing-complete processor.

If I'd known you were going to interpret that as saying that the minute I deign to run the Calculator program on my PC it stops computing because something that looks like a calculator has appeared on the screen, then I would never have said that, because that's insane.

Quote:
Originally Posted by wolfpup View Post
Yes. More precisely, whether or not a device is Turing complete depends -- very obviously -- on whether one can use it as such to perform any arbitrary computation. If a system has some internal component that might intrinsically be Turing complete but which I cannot utilize in that fashion, because it's locked into some fixed function, then the system is not Turing complete. Indeed, a Turing machine with a specific tape and fixed action table isn't Turing complete either.

Just like some doorbell that might ring different tones at different times of day due to an embedded microprocessor is, in fact, just a doorbell and not a UTM. All the embedded microprocessors in various devices may indeed be Turing complete by my earlier description of that being an intrinsic property of the chip, but unless that property is exposed to a usable interface, it may as well not exist. It tells us nothing about the properties of the system it's embedded in. One cannot conclude that an appliance is Turing complete just because it has a microprocessor in it, when that microprocessor may do nothing more than run a timer and flip a relay.

Remember that we're not talking here about "computing" in the colloquial sense, but in the Turing-complete sense. Adhering strictly to the guidelines that you set out, all of those are calculators, because that's all they do.
You do realize that option 2 was about a normal, unmodified, full-fledged computer, right? Just one that I happen to be using to run a calculator program on because I want to run a calculator program. You are literally saying that no computer is Turing complete the moment anybody uses a computer for literally anything, because at that point it's only emulating one Turing machine, not all of them simultaneously.

Your logic also dictates that computers are only Turing complete while a person is using them - if I walk away from a computer then there is no way for it to get various variable instructions, because an important input component -the nut behind the wheel- is missing. Thus computers cease to compute -cease to be Turing complete- the moment anyone looks away.

They also must stop being Turing complete during the lulls between one keystroke and the next and between that keystroke and the one after, because during those periods there is no input and thus no Turing completeness, according to what you're saying.

Um, yeah. Not really feeling a consensus with you here.



Here's my take on this - a device is either Turing complete, or it's not. It either has the capability to emulate any Turing machine, or it doesn't. And this doesn't change if the device is locked in a room away from users, or it's installed inside another box that only makes limited use of it. The component itself remains Turing complete, regardless of whether wolfpup can access all its functions. And if computation is defined as "something a Turing complete device does", then computation happens when such a device does its thing.

Seriously, there are millions of computers in the world that go for long periods with no human accessing them directly, and which drastically limit even the digital input they'll accept - they're called "servers". The SDMB runs on a machine that you're not allowed to log into and run Halo on; does that mean it's not Turing complete?
  #597  
Old 07-25-2019, 08:33 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by begbert2 View Post
Reordering slightly:
Whoa there - I had assumed that we were specifically discussing a hard-wired, not-and-never-and-no-part-of-it-is-turing-complete device. A simplistic ten-key specifically designed not to include a turing-complete processor.
What's the difference? I think you're misunderstanding what I mean by "access", and admittedly my first sentence wasn't very clear. The principle that can be elucidated here might be stated as follows: the computational properties of an embedded system are not necessarily exposed to the system in which they're embedded. The only functionality provided by the system is that which is exposed by the fixed code running in the embedded system(s). In simple terms, a basic ten-key calculator as in your example may or may not be built with a CPU microchip, but if it is, the functionality of the closed system accessible to the user is exactly the same as any other way it might have been built. It matters not a whit if the microchip is a Turing-complete CPU running a program or whether it's built out of mechanical or discrete logical components. In what way could it possibly matter? Does the fact that the microchip is running a program written in C allow you to write some arbitrary program in C, too? If it does, then the device is no longer just a calculator.
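Here's a toy contrast to make the point concrete (Python, my own illustration, and deliberately silly; both function names are mine): both front panels below run on the same Turing-complete interpreter underneath, but only the second exposes that general capability to whoever is pressing the keys.
Code:
# Both functions run on the same (Turing-complete) Python interpreter.
# The first exposes only a fixed calculator interface; the second exposes the
# general capability of the machine itself. Toy illustration only, and running
# exec() on untrusted input is of course a terrible idea in real code.

def calculator_front_panel(a, op, b):
    ops = {"+": lambda: a + b, "-": lambda: a - b,
           "*": lambda: a * b, "/": lambda: a / b}
    return ops[op]()            # a fixed menu of operations, nothing more

def programmable_front_panel(source):
    namespace = {}
    exec(source, namespace)     # arbitrary stored programs accepted as input
    return namespace.get("result")

# calculator_front_panel(6, "*", 7)                      -> 42
# programmable_front_panel("result = sum(range(10))")    -> 45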

To reiterate the basic point yet again in the most basic possible terms, it's that cognition is believed to work in a manner analogous to that of a digital computer with a Turing-complete instruction set running a program that operates on a set of symbolic data. A calculator doesn't do that, regardless of what it's built on, and regardless of what its internal components may be doing.

Quote:
Originally Posted by begbert2 View Post
You do realize that option 2 was about a normal, unmodified, full-fledged computer, right?
Yep, but I'm taking you at your word that the user is "choosing not to interact with any other part of the machine or desktop applications other than the calculator app" -- and that this condition holds forever, because if the user ever does interact with anything else, then the scenario is no longer valid. Do you see how there is no distinction whatsoever between that scenario and the one you just described as "a hard-wired, not-and-never-and-no-part-of-it-is-turing-complete device. A simplistic ten-key specifically designed not to include a turing-complete processor"? If there is a functional distinction, please explain what it is.

To make it even more clear, a computing platform that is permanently locked into acting as a LISP interpreter offers a capability that is Turing complete, but the same platform that is permanently locked into acting as a calculator does not; nor does one whose sole dedicated function is to cause my doorbell to ring or my oven to go on.
  #598  
Old 07-25-2019, 11:51 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Quote:
Originally Posted by wolfpup View Post
But I think your logic is flawed with regard to the first option, because you jump from "can only actually implement a limited subset of computations" to "there then exists a non-Turing universal system that can only implement those functions" which is many steps too far.
No, it's an elementary fact of computability theory. For any machine whose memory is bounded by a fixed constant (which is how "linear bounded automaton" is being used in this thread: a Turing machine whose tape never grows beyond a fixed size), you can find an equivalent finite state machine. Mathematically, one puts this as DSPACE(O(1)) = REG, where DSPACE is the class of computations that can be performed within a given memory bound, O(1) denotes that bound to be constant, and REG is the class of regular languages, which are just the languages recognized by an FSA. FSAs are strictly weaker than Turing machines in terms of computation.
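To see the construction concretely, here is a small sketch (Python, my own toy example, not drawn from any particular source; the names are mine): for a machine with a fixed-size tape, one can literally enumerate every configuration (control state, head position, tape contents) and tabulate the step function over that finite set. The resulting table just is an explicit finite-state machine.
Code:
from itertools import product

# Toy sketch: convert a Turing-style machine with a FIXED tape length into an
# explicit finite-state machine by enumerating its (finitely many)
# configurations and the transitions between them.
def bounded_machine_to_fsm(states, symbols, table, tape_len):
    fsm = {}                               # configuration -> next configuration
    for state, head, tape in product(states, range(tape_len),
                                     product(symbols, repeat=tape_len)):
        config = (state, head, tape)
        rule = table.get((state, tape[head]))
        if rule is None:                   # no applicable rule: halted, self-loop
            fsm[config] = config
            continue
        write, move, nxt = rule
        new_tape = tape[:head] + (write,) + tape[head + 1:]
        # head stays on the bounded tape
        new_head = min(max(head + (1 if move == "R" else -1), 0), tape_len - 1)
        fsm[config] = (nxt, new_head, new_tape)
    return fsm

# Example: a one-state machine that overwrites every cell with "1",
# restricted to a 3-cell tape.
FILL_ONES = {("start", "0"): ("1", "R", "start"),
             ("start", "1"): ("1", "R", "start")}
fsm = bounded_machine_to_fsm({"start"}, {"0", "1"}, FILL_ONES, tape_len=3)
print(len(fsm))   # 1 control state x 3 head positions x 2^3 tape contents = 24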

Of course, this should be immediately intuitive; after all, any LBA has a finite number of possible states (tape + head configurations), and thus, an equivalent FSA exists just by taking this state space and fitting it out with appropriate state transition rules. It's of course well recognized that the brain is just such a machine:
Quote:
The argument is as follows. Any (realistic) network consists of a finite number of neurons or nodes, and each node has a finite number of distinguishable states. Therefore, with a finite amount of elements and a finite set of states for each element, the network has a finite-state space (i.e., the collection of all its states), that is, it is a finite-state machine.


Also, as pointed out above, any argument that cognition is only properly so-called if the underlying system could, suitably extended, implement arbitrary computations runs into difficulties with the counterfactual nature of this extension. Essentially, the relevance of the presence of capabilities that may never be used (indeed, can never be used in finite time) means that two systems performing identical tasks---being functionally identical---might be cognitively different, simply because one system is extensible in this way, while the other isn't (say, by blowing up whenever one tries to use the extended capabilities).
  #599  
Old 07-26-2019, 01:35 AM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,822
Quote:
Originally Posted by Half Man Half Wit View Post
Of course, this should be immediately intuitive; after all, any LBA has a finite number of possible states (tape + head configurations), and thus, an equivalent FSA exists just by taking this state space and fitting it out with appropriate state transition rules. It's of course well recognized that the brain is just such a machine:
Indeed. And so I don't see how this puts your argument any farther ahead. To begin with, no one would argue that any physical system (whether computer or brain) was Turing complete in the literal sense of having infinite capacity, and when "Turing complete" is used as a shorthand expression to describe a physical system, it's always understood to refer to a linear bounded automaton, as I previously said.

What I think I misunderstood in your argument was the idea that case #2 ("the system won't be a cognitive duplicate") was represented by some arbitrarily limited non-Turing-universal system that would appear to mimic predefined cognitive functions, but could not evolve new ones as an actual brain could. I spelled out that assumption.

But if you want to make the point that the cognitive mind, as an LBA, can be represented as a functionally equivalent finite state machine, we seem to agree that this is a cognitively equivalent duplicate, but I fail to see how it doesn't also retain all the computational qualities of the original that I claim. After all, "a linear bounded automaton (LBA) is an abstract machine that would be identical to a Turing machine, except that during a computation with given input its tape-head is not allowed to move outside a bounded region of its infinite tape."

Your argument seems closely analogous to an argument that could be made about any digital computer; while acknowledging that its instruction set is indeed Turing complete, its finite memory and finite time could be argued to mean that it can only perform a finite subset of those computations. Therefore one could (in theory) define some less powerful non-Turing, non-LBA, and indeed non-computational paradigm that performs all those same functions, like a humongous lookup table -- a discussion we've had before. This may make for an interesting philosophical rumination, but it detracts nothing from a description of the real physical machine as having a Turing complete instruction set, and that this is a prerequisite to its fundamental capabilities in the real world, which are absent in a machine lacking such universality. So notwithstanding theoretical equivalences to lesser machines, the quality of Turing-complete universality is a critical part of the architecture of any modern general-purpose computer.

And while I'm at it, carrying that thought forward to your other objection -- "whether that computation is performed on a universal or a special-purpose computer should not have any influence on whether it's cognition properly so-called---not anymore than the same calculation performed on a universal system versus a simple calculator are in any way different sorts of things" -- I agree. The distinction is not in any difference in the calculation that is performed, but in the underlying capability set of the machine performing them, that one is a computational device in the Turing sense and the other is not.
  #600  
Old 07-26-2019, 10:52 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Quote:
Originally Posted by wolfpup View Post
Indeed. And so I don't see how this puts your argument any farther ahead. To begin with, no one would argue that any physical system (whether computer or brain) was Turing complete in the literal sense of having infinite capacity, and when "Turing complete" is used as a shorthand expression to describe a physical system, it's always understood to refer to a linear bounded automaton, as I previously said.
Well, but the point I'm making is that it's not clear why it should matter that the system actually is 'Turing complete' in this short-hand way, if there's an equivalent, non-Turing complete system. Replace one with the other---what changes?

Quote:
But if you want to make the point that the cognitive mind, as an LBA, can be represented as a functionally equivalent finite state machine, we seem to agree that this is a cognitively equivalent duplicate, but I fail to see how it doesn't also retain all the computational qualities of the original that I claim.
It was your claim that only a universal machine, or one that possesses a universal instruction set, was appropriate for the computational theory of mind:
Quote:
Originally Posted by wolfpup View Post
Only the UTM is Turing-complete, and this is the model for the instruction sets and programming languages of digital computers, and the processes of cognition according to CTM; this is what we mean by "computation" in those contexts.
But the FSA-equivalent to a brain isn't Turing-complete, and doesn't possess a Turing-complete instruction set. So you're positing that there is some quality that the brain, as an LBA, has, but which an FSA, as a system not in principle Turing-completable, lacks.

Quote:
Originally Posted by wolfpup View Post
Therefore one could (in theory) define some less powerful non-Turing, non-LBA, and indeed non-computational paradigm that performs all those same functions, like a humongous lookup table -- a discussion we've had before.
Where do you make the jump from non-LBA to non-computational, though? FSAs are a perfectly respectable model of computation, even if they're strictly weaker than Turing machines.

Quote:
This may make for an interesting philosophical rumination, but it detracts nothing from a description of the real physical machine as having a Turing complete instruction set, and that this is a prerequisite to its fundamental capabilities in the real world, which are absent in a machine lacking such universality.
So, again: what's lacking in the FSA that is a complete computational equivalent to the brain's LBA? Why is it that the Turing-complete instruction set is a prerequisite to its 'fundamental capabilities'---and what, exactly are those?

And once more, although I sense that this is going to be one of those points you just keep missing over and over again, what about the example of a system that's identical to a Turing-completable one, but that would just get blown up if it actually were to try and access these extended capabilities---which, however, it never actually does? Would that be a system possessing these 'fundamental capabilities', or not?

Quote:
And while I'm at it, carrying that thought forward to your other objection -- "whether that computation is performed on a universal or a special-purpose computer should not have any influence on whether it's cognition properly so-called---not anymore than the same calculation performed on a universal system versus a simple calculator are in any way different sorts of things" -- I agree. The distinction is not in any difference in the calculation that is performed, but in the underlying capability set of the machine performing them, that one is a computational device in the Turing sense and the other is not.
But if you agree that whether the system is cognitively human-equivalent doesn't depend on whether the program is executed on a universal or non-universal device, then in what way does that distinction actually make itself manifest? What difference does it make? If a mind (stipulating, for the moment, that minds are computational) were transferred from a Turing-completable substrate to a non-Turing completable one, would there be any difference to it?

Last edited by Half Man Half Wit; 07-26-2019 at 10:53 AM.