  #601  
Old 07-26-2019, 02:35 PM
begbert2
Quote:
Originally Posted by wolfpup View Post
What's the difference? I think you're misunderstanding what I mean by "access", and admittedly my first sentence wasn't very clear. The principle that can be elucidated here might be stated as follows: the computational properties of an embedded system are not necessarily exposed to the system in which they're embedded. The only functionality provided by the system is that which is exposed by the fixed code running in the embedded system(s). In simple terms, a basic ten-key calculator as in your example may or may not be built with a CPU microchip, but if it is, the functionality of the closed system accessible to the user is exactly the same as any other way it might have been built. It matters not a whit if the microchip is a Turing-complete CPU running a program or whether it's built out of mechanical or discrete logical components. In what way could it possibly matter? Does the fact that the microchip is running a program written in C allow you to write some arbitrary program in C, too? If it does, then the device is no longer just a calculator.

To reiterate the basic point yet again in the most basic possible terms, it's that cognition is believed to work in a manner analogous to that of a digital computer with a Turing-complete instruction set running a program that operates on a set of symbolic data. A calculator doesn't do that, regardless of what it's built on, and regardless of what its internal components may be doing.


Yep, but I'm taking you at your word that the user is "choosing not to interact with any other part of the machine or desktop applications other than the calculator app" -- and that this condition holds forever, because if the user ever does, then the scenario is no longer valid. Do you see how there is no distinction whatsoever between that scenario and the one you just described as "a hard-wired, not-and-never-and-no-part-of-it-is-turing-complete device. A simplistic ten-key specifically designed not to include a turing-complete processor"? If there is a functional distinction, please explain what it is.

Quote:
Originally Posted by wolfpup View Post
To make it even more clear, a computing platform that is permanently locked into acting as a LISP interpreter offers a capability that is Turing complete, but the same platform that is permanently locked into acting as a calculator does not; nor does one whose sole dedicated function is to cause my doorbell to ring or my oven to go on.
Just to make an "are we on the same page" check, the subject I'm discussing with you is the definition of the word(s) "computation"/"computational". These words are a categorization of a particular type of behavior - a thing is either doing computation, or it isn't. A thing's behaviors are either computational, or they're not. A thing is either a computer, or it's not.

I'm of the opinion that if a thing is computing, then it's computing. If a thing is Turing complete, it's Turing complete. These terms are objective descriptions about the state and properties of the object, not subjective assessments.

If a device is Turing complete, then that device is Turing complete even if it's locked in a drawer. The fact that you can't get to it and fiddle with its inputs is utterly irrelevant to the device's status as a Turing complete device.


Perhaps it will help you understand my problem with your approach if we avoid something so hard to define as "computing". So instead let's suppose we're debating whether a system is breathing. I'll assert that all living people breathe. With me so far?

Suppose we put a person in a box and let you talk to him through an open window in the box. At this point you'd agree that the system you're interacting with includes an element that is breathing, right? You can totally see him doing it, and you can see him acting in ways that no non-breathing (dead) person could.

Now suppose we close the window and tell you that you can now only interact with the box via a switch and a light, mounted on opposite sides of the box. If you turn the switch to the "on" position, then by the time you walk around to the other side of the box the light will be on. If you set the switch to "off", then by the time you walk around to the other side of the box the light will be off. In other words, the behavior of the box no longer requires a person to be inside it, and somebody who didn't know there was a person in there could not prove it by interacting with the box.

My position is that the system still includes a breathing element, because there's still a person in there. Your position appears to be that since you can't prove that the contents of the box include a breathing person by its behavior, any people who happen to be in there aren't breathing!
  #602  
Old 07-27-2019, 03:59 AM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
So, again: what's lacking in the FSA that is a complete computational equivalent to the brain's LBA? Why is it that the Turing-complete instruction set is a prerequisite to its 'fundamental capabilities'---and what, exactly are those?

And once more, although I sense that this is going to be one of those points you just keep missing over and over again, what about the example of a system that's identical to a Turing-completable one, but that would just get blown up if it actually were to try and access these extended capabilities---which, however, it never actually does? Would that be a system possessing these 'fundamental capabilities', or not?
Let me try to wrap this up in the following way, which I think addresses what this all boils down to. Suppose I write a program in a particular (Turing complete) language, and that after running it for an hour (on a computer with a Turing-complete instruction set (LBA)) it produces the desired results.

Now, at any given time this program and all its data and state information consist of an enormous string of bits in the computer's memory, and its execution trace can be represented as a finite set of state transitions over that time (not Turing complete). It could be drawn as an enormous state transition diagram. So what's the value of having Turing-complete precursors -- the Turing-complete specification language that created it and the Turing-complete platform that executes it?

It's because we don't (except in special cases) intentionally build finite state machines per se; what we build are logical descriptions that reflect the way we think about the world, and the essential function of computers is to instantiate those descriptions in a completely general way. The premise of CTM is that cognition also works in a completely general way using exactly parallel kinds of operations.

As for your challenge in the last paragraph I quoted, I find it strange that you claim I "keep missing" some point whereas ISTM that it is you who is missing it. The premise of some "extended capabilities" in one system versus another when both are performing identical tasks is an obfuscatory fiction. If they are performing identical tasks then they are identical Turing machines. Turing completeness implies no specific functional capability other than what is inherent in any Turing machine, and reflects the nature of a Turing machine to be able to simulate the fixed computation of any other. In the real world, Turing completeness in a real computer is reducible to (I believe) a minimum of four instructions, ignoring the contrived silliness of one-instruction computers. The question is meaningless here, too, because no non-trivial program could run on a computer that lacks a Turing-complete instruction set.

As I mentioned before, Turing completeness is fundamental but at the same time actually a pretty low bar. As an exercise for myself -- ignoring here the silliness of one-instruction computers -- this is my first take on such minimalism; I'm pretty sure that a real computer with just the following four instructions would be Turing complete, and I challenge anyone to write any meaningful program at all without these operations. This ought to definitively answer the question of why a computer's instruction set needs to be Turing complete. And given that I'm pretty sure this is Turing complete -- that these instructions can literally perform any computation that is computable or any logical operation imaginable -- my challenge back to you is to tell me what awesome "extended capabilities" are present here!

ADS (add memory location to register, and skip if not equal to zero)
DCR (deposit register to memory and clear register)
JMP (jump to specified location)
AND (AND memory with register)
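To make the semantics concrete, here is a rough Python sketch of an interpreter for such a machine. The opcode encoding, word size, and exact skip convention are my own assumptions; only the four mnemonics and their one-line descriptions above are given by the instruction set as stated.

Code:
# Sketch of an interpreter for the hypothetical four-instruction machine above.
# Assumptions: 16-bit words, "skip" means skip the next instruction, halting
# happens when the program counter runs off the end of the program.

def run(program, memory, max_steps=10_000):
    """program: list of (mnemonic, address) pairs; memory: list of ints."""
    reg, pc, steps = 0, 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        op, addr = program[pc]
        steps += 1
        if op == "ADS":                  # add memory to register, skip if result != 0
            reg = (reg + memory[addr]) & 0xFFFF
            pc += 2 if reg != 0 else 1
        elif op == "DCR":                # deposit register to memory, clear register
            memory[addr] = reg
            reg = 0
            pc += 1
        elif op == "JMP":                # unconditional jump
            pc = addr
        elif op == "AND":                # bitwise AND memory into register
            reg &= memory[addr]
            pc += 1
        else:
            raise ValueError(f"unknown opcode {op}")
    return memory

# Tiny usage example: copy memory cell 0 into cell 1.
mem = [42, 0]
prog = [("ADS", 0),   # reg += mem[0]; skips the next instruction if reg != 0
        ("JMP", 3),   # reached only if mem[0] was zero; jump past the end (halt)
        ("DCR", 1)]   # mem[1] = reg; reg = 0
print(run(prog, mem))  # -> [42, 42]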


Quote:
Originally Posted by begbert2 View Post
Just to make an "are we on the same page" check, the subject I'm discussing with you is the definition of the word(s) "computation"/"computational". These words are a categorization of a particular type of behavior - a thing is either doing computation, or it isn't. A thing's behaviors are either computational, or they're not. A thing is either a computer, or it's not.

I'm of the opinion that if a thing is computing, then it's computing. If a thing is Turing complete, it's Turing complete. These terms are objective descriptions about the state and properties of the object, not subjective assessments.

If a device is Turing complete, then that device is Turing complete even if it's locked in a drawer. The fact that you can't get to it and fiddle with its inputs is utterly irrelevant to the device's status as a Turing complete device.
I understand your point but you don't seem to be understanding mine. Let me restate the general principle I mentioned earlier, using perhaps clearer words: the computational properties of an embedded subsystem are not necessarily exposed to (exported to) the larger system in which it's embedded.

An embedded microprocessor (LBA) in isolation is itself Turing complete of course, no argument about that. But its functional role in the larger system in which it's embedded is limited by the functionality exported by the interface of its fixed software; nothing else is available. It's the classic "black box", nothing more, no matter what's inside it. So a system that implements Turing-complete functionality, like one that uses its embedded microprocessor to implement a LISP interpreter, is Turing complete, but one which uses it to implement a basic calculator is not. I find that I'm repeating myself, but I trust that makes it clear, or at least clearer.
  #603  
Old 07-27-2019, 06:28 AM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
Let me try to wrap this up in the following way, which I think addresses what this all boils down to. Suppose I write a program in a particular (Turing complete) language, and that after running it for an hour (on a computer with a Turing-complete instruction set (LBA)) it produces the desired results.

Now, at any given time this program and all its data and state information consist of an enormous string of bits in the computer's memory, and its execution trace can be represented as a finite set of state transitions over that time (not Turing complete). It could be drawn as an enormous state transition diagram. So what's the value of having Turing-complete precursors -- the Turing-complete specification language that created it and the Turing-complete platform that executes it?

[...]

If they are performing identical tasks then they are identical Turing machines. Turing completeness implies no specific functional capability other than what is inherent in any Turing machine, and reflects the nature of a Turing machine to be able to simulate the fixed computation of any other.
These seem in straightforward contradiction to one another. First, you (correctly) say that for any LBA, there exists an equivalent, non-Turing completable FSA which performs the exact same task; then, you say that if two systems perform identical tasks, then they are identical Turing machines. Both can't be true: only one of these systems is Turing-completable (by extension of its memory).

Quote:
In the real world, Turing completeness in a real computer is reducible to (I believe) a minimum of four instructions, ignoring the contrived silliness of one-instruction computers. The question is meaningless here, too, because no non-trivial program could run on a computer that lacks a Turing-complete instruction set.
This is simply false. There are many examples of non-trivial programs written in non-Turing complete languages. Regular expressions, for instance, are non-Turing complete (in fact, equivalent to FSMs), but you can do lots of useful things with them. For a more contrived example, take Douglas Hofstadter's BlooP. The Wikipedia page lists programs for computing factorials and subtracting numbers, both of which are non-trivial. BlooP can be extended to a Turing-complete language (FlooP) by allowing unbounded loops; that is, adding something like a 'do...while' command, whereas BlooP itself only supports something like 'for i = 1 to N'.

This also illustrates my example. Any valid BlooP-program is also a valid FlooP-program; the extended capabilities come in the form of an additional 'mu-loop' instruction.

More generally, any recursive set can be decided by a non-Turing complete automaton (a machine that's guaranteed to halt). This class contains very non-trivial sets, such as the prime numbers, the Fibonacci numbers, and many others, all of which can be computed using a BlooP-like language.
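To make the bounded-loop restriction concrete, here is a rough Python rendering of BlooP-style programs for factorials and primality: every loop has a bound that is fixed before the loop starts, which is the defining restriction of BlooP. The Python syntax is just an illustration of the idea, not actual BlooP.

Code:
# BlooP-style programs: every loop bound is computed before the loop runs,
# so the program is guaranteed to terminate. No 'while', no unbounded recursion.

def factorial(n):
    result = 1
    for i in range(1, n + 1):   # bound fixed in advance: n
        result *= i
    return result

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, n):       # bound fixed in advance: n
        if n % d == 0:
            return False
    return True

print(factorial(5))                            # 120
print([p for p in range(20) if is_prime(p)])   # [2, 3, 5, 7, 11, 13, 17, 19]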

Quote:
As I mentioned before, a Turing completeness is both fundamental but at the same time actually a pretty low bar.
So, as you see, for many non-trivial problems, Turing completeness is wholly unnecessary. And, of course, we already know that non-Turing complete automata can do anything a brain can do, simply because of the brain's finiteness.
  #604  
Old 07-27-2019, 03:54 PM
RaftPeople
Quote:
Originally Posted by wolfpup View Post
The question is meaningless here, too, because no non-trivial program could run on a computer that lacks a Turing-complete instruction set.
Language parsers are based on pushdown automata.

Pattern matching using pushdown automata:
http://www.stringology.org/papers/Zd...hesis-2010.pdf

The list of non-trivial programs that can be written using a non-Turing complete instruction set is enormous (and that is an understatement).
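As one concrete illustration of the pushdown-automaton idea, here is a quick Python sketch of a balanced-bracket checker -- the core move of many parsers -- using a single stack and nothing else. The code is my own illustrative sketch, not taken from the linked thesis.

Code:
# Pushdown-automaton-style check: balanced brackets, the classic
# context-free (but not regular) language. One stack, no other memory.

def balanced(s):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)            # push on an opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False            # mismatch, or pop from an empty stack
    return not stack                    # accept iff the stack is empty at the end

print(balanced("([]{()})"))  # True
print(balanced("([)]"))      # False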
  #605  
Old 07-27-2019, 05:21 PM
RaftPeople
Regarding the need for a UTM (limited by memory of course):
Consider a machine with the following attributes:
1 - 1,000,000,000,000,000 times the memory of the max memory of a human brain
2 - A Turing-complete instruction set with one exception:
the completion of each instruction involves a check of whether every memory element has the value -27, and if so, it changes the first memory element to -28.


This machine is capable of computing many functions that the human brain can't (due to increased memory), but it can't compute all of the computable functions that a UTM is capable of computing (at minimum, it can't compute any function that ends with all memory elements set to -27).
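Roughly, in Python, the machine's fetch-execute loop would look like the sketch below; the step() function is a trivial stand-in of my own, not the actual Turing-complete instruction set, and only the post-instruction check comes from the description above.

Code:
# Hypothetical sketch: an otherwise ordinary fetch-execute loop with the
# "-27 check" bolted onto every instruction.

def run(memory, steps):
    for _ in range(steps):
        step(memory)                                  # ordinary instruction
        if all(cell == -27 for cell in memory):       # the one exception:
            memory[0] = -28                           # spoil the all--27 state
    return memory

def step(memory):
    # Stand-in instruction: decrement every cell (not the real instruction set).
    for i in range(len(memory)):
        memory[i] -= 1

print(run([-25, -25, -25], 2))  # reaches all -27, so the first cell flips to -28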
  #606  
Old 07-28-2019, 01:41 AM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
These seem in straightforward contradiction to one another. First, you (correctly) say that for any LBA, there exists an equivalent, non-Turing completable FSA which performs the exact same task; then, you say that if two systems perform identical tasks, then they are identical Turing machines. Both can't be true: only one of these systems is Turing-completable (by extension of its memory).
What?? An FSA is not a Turing machine! It's the capture and reduction of its functionality into a state machine. I'm saying that any LBA that performs the same function as another is functionally identical regardless of how large or how small their memory capacities, that they all reduce to the same equivalent FSA, and that there are no mysterious "extended capabilities" in the original LBA that are absent in its reduction -- they have the same functionality. To repeat the earlier statement, the premise of some "extended capabilities" in one system versus another when both are performing identical tasks is an obfuscatory fiction. There is no contradiction here.

The salient point about Turing completeness, again, is the following:

We don't (except in special cases) intentionally build finite state machines per se; what we build are logical descriptions [i.e., programs] that reflect the way we think about the world, and the essential function of computers is to instantiate those descriptions in a completely general way. The premise of CTM is that cognition also works in a completely general way using exactly parallel kinds of operations.


Quote:
Originally Posted by Half Man Half Wit View Post
This is simply false. There are many examples of non-trivial programs written in non-Turing complete languages. Regular expressions, for instance, are non-Turing complete (in fact, equivalent to FSMs), but you can do lots of useful things with them.
Indeed you can, and I know this well because back in the day when I was annoyed by a lot of spam emails, before spam filtering became mainstream, I wrote tons of regexes to nuke spam right off the server before I ever saw it. But of course these regexes were interpreted by an actual mail-reader program running on an actual computer.

You might want to rethink your claim of falsehood since if you read more carefully, that is clearly not at all what I said. All of these things -- regular expressions, or the SQL and HTML examples you mentioned earlier -- are all generally dependent on underlying software and hardware layers that are, in fact, programs specified in Turing-complete languages running on hardware platforms with Turing-complete instruction sets. No doubt there are some special-case exceptions, but the preceding is overwhelmingly the general case.

I note that you failed to either quote or respond to my challenge in #602: what "extended capabilities" do you find in these four instructions (which I believe constitute a simple but Turing-complete instruction set) that could be dispensed with while still producing a generally useful computer? Is it clear why each of these functions is essential for computability -- for the Turing sense of having "computational" capacity?
  #607  
Old 07-28-2019, 03:44 AM
RaftPeople
Quote:
Originally Posted by wolfpup
because no non-trivial program could run on a computer that lacks a Turing-complete instruction set.
Quote:
Originally Posted by raftpeople
Consider a machine with the following attributes:
1 - 1,000,000,000,000,000 times the memory of the max memory of a human brain
2 - A Turing-complete instruction set with one exception:
the completion of each instruction involves a check of whether every memory element has the value -27, and if so, it changes the first memory element to -28.
My example counters your point. It can compute every function that doesn't rely on every memory element having a -27 at the same time.

Your point is false.
  #608  
Old 07-28-2019, 04:56 AM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
What?? An FSA is not a Turing machine! It's the capture and reduction of its functionality into a state machine.
Yes, this is exactly the point. Let's review this: you claim that universal capabilities are necessary for cognition (or any kind of nontrivial computation); I point out that any cognitive agent's computational capabilities can be captured by an FSA, which is non-universal. So now you may either opt to retract the claim that universality is necessary for cognition (which is what mainstream CTM would do, as it's the computation of a given function that underlies cognition, not whether that computation is carried out on a universal system), or you must hold that the FSA-based system wouldn't actually be a cognitive agent, which leaves you with something that would be behaviorally equivalent to its LBA-based cousin, but without the cognition to support this behavior---a kind of cognitive zombie.

However, you seem to want to reject both options; but this is inconsistent.

Here, you affirm the equivalence between LBAs and FSAs:
Quote:
I'm saying that any LBA that performs the same function as another is functionally identical regardless of how large or how small their memory capacities, that they all reduce to the same equivalent FSA, and that there are no mysterious "extended capabilities" in the original LBA that are absent in its reduction -- they have the same functionality.
But in combination with your earlier claim of the necessity of universal computation for cognition:
Quote:
Originally Posted by wolfpup View Post
Only the UTM is Turing-complete, and this is the model for the instruction sets and programming languages of digital computers, and the processes of cognition according to CTM; this is what we mean by "computation" in those contexts.
We're just left wondering how you think that holding both could be consistent.

Quote:
Originally Posted by wolfpup View Post
You might want to rethink your claim of falsehood since if you read more carefully, that is clearly not at all what I said. All of these things -- regular expressions, or the SQL and HTML examples you mentioned earlier -- are all generally dependent on underlying software and hardware layers that are, in fact, programs specified in Turing-complete languages running on hardware platforms with Turing-complete instruction sets.
This is again false. Yes, if executed on a modern-day computer, there is a Turing-universal substrate on which they are run, but that's just by convenience. Essentially, what you have is a Turing-complete machine emulating a non-Turing complete machine (the HTML, SQL, or regex-machine); this doesn't change anything about these machines' non-Turing completeness. In particular, regular expressions can be evaluated with FSAs, and hence, it's just not true that you need a Turing-complete language to interpret these non-Turing complete frameworks. (Which would, after all, be kind of strange, and really trivialize much of computability theory.)
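For instance, a regular expression such as (ab)* is evaluated by the explicit finite-state machine sketched below in Python -- a hand-built transition table, with state numbering of my own choosing, and no Turing-complete regex engine anywhere in sight.

Code:
# A hand-built FSA equivalent to the regular expression (ab)*.
# States: 0 = expecting 'a' (accepting), 1 = expecting 'b', 2 = dead state.

TRANSITIONS = {
    (0, 'a'): 1,
    (1, 'b'): 0,
}

def matches(s):
    state = 0
    for ch in s:
        state = TRANSITIONS.get((state, ch), 2)   # anything unexpected -> dead state
    return state == 0                             # accept iff back at state 0

print(matches(""))       # True
print(matches("abab"))   # True
print(matches("aba"))    # False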

Quote:
I note that you failed to either quote or respond to my challenge in #602: what "extended capabilities" do you find in these four instructions that I believe constitute a simple but Turing-complete instruction set that could be dispensed with and still produce a generally useful computer? Is it clear why each of these functions is essential for computability -- for the Turing sense of having "computational" capacity?
There are, of course, exceedingly simple examples of Turing-complete languages. Arguably much simpler than yours is the cellular automaton rule 110. But this doesn't imply that any more complex system (no matter what 'more complex' might mean in this regard) is necessarily Turing-complete. For instance, the Colossus computer used in early cryptanalysis wasn't Turing-complete, but did some rather non-trivial things, I'd say.
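For reference, rule 110 is simple enough to write out in a few lines; here is a Python sketch of one update step. The finite row with fixed-zero boundaries is my own simplification -- the universality construction needs a suitably prepared infinite background.

Code:
# One update step of elementary cellular automaton rule 110.
# 110 = 0b01101110: the output bit for each 3-cell neighborhood (left, self, right).

RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left  = cells[i - 1] if i > 0 else 0      # fixed-zero boundaries (a simplification)
        right = cells[i + 1] if i < n - 1 else 0
        idx = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> idx) & 1)
    return out

row = [0, 0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(4):
    print(row)
    row = step(row)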

And of course, there are all the programs you can write in BlooP, which can be executed on a computer that isn't Turing-complete. The example also makes clear, I had thought, what I mean by 'extended capabilities': add an unrestricted loop instruction (yielding FlooP), and you jump all the way to Turing completeness. But that doesn't mean that programs that don't use this instruction aren't properly 'computations'---yet, they're programs that could be run on a non-universal machine.

  #609  
Old 07-29-2019, 07:03 AM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post

However, you seem to want to reject both options; but this is inconsistent.

Here, you affirm the equivalence between LBAs and FSAs:


But in combination with your earlier claim of the necessity of universal computation for cognition:


We're just left wondering how you think that holding both could be consistent.


This is again false. Yes, if executed on a modern-day computer, there is a Turing-universal substrate on which they are run, but that's just by convenience. Essentially, what you have is a Turing-complete machine emulating a non-Turing complete machine (the HTML, SQL, or regex-machine); this doesn't change anything about these machines' non-Turing completeness. In particular, regular expressions can be evaluated with FSAs, and hence, it's just not true that you need a Turing-complete language to interpret these non-Turing complete frameworks. (Which would, after all, be kind of strange, and really trivialize much of computability theory.)
It's much more than just "convenience". There's a reason that virtually all programming languages are Turing complete, why all general-purpose computers are, and why the Turing machine is so fundamental in computational theory. The counterexamples you've mentioned are not really programming languages at all: SQL is just a data management language, HTML is just a page description or markup language, regular expressions are just pattern-matching specifications, etc. And there's an even more powerful reason that the underlying machine must be Turing complete, since a machine's instruction set must be able to support in a general way any possible program (computational specification) that runs on it. Hence my statement doubting that any non-trivial program could run on a computer that lacks a Turing-complete instruction set, which you misstated as claiming that no non-trivial program could be written in a non-Turing complete language, and then further distorted by citing languages that weren't programming languages at all.

That a finite-state machine representation of any specific program running on a computer is not Turing complete while being computationally equivalent to such a program is not a particularly interesting fact, since the FSA is just an abstraction that only represents the capabilities of the computer when it's running that specific program. (Although the FSA would, in fact, be Turing complete if it happened to be an interpreter for a language that was itself Turing complete.) It is precisely the distinction between a Turing machine and a universal Turing machine.

Hence, regarding your first point, the equivalence between LBAs and FSAs is not in contradiction to the premise that LBAs must be Turing complete in order to achieve the computational capabilities required for cognition. One might argue about whether that premise is true or not, but it's not contradicted by LBA-FSA equivalence. Perhaps your perception of a contradiction is in the fact that both the LBA and FSA are obviously performing the same computation, but that isn't the point. The point is the premise that the underlying mechanisms of cognition don't have a fixed function, but operate like a general purpose computer that can implement any computable function (and conversely, that therefore cognitive processes can be instantiated on any LBA with suitable capacity).

I think we've thoroughly exhausted this topic by now, but I wanted to make one final effort to explain my argument.
  #610  
Old 07-29-2019, 12:26 PM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
I think we've thoroughly exhausted this topic by now, but I wanted to make one final effort to explain my argument.
Your argument is sufficiently clear, it's just wrong---if there exists an equivalent FSA for a given computation, that computation can be performed by a machine lacking a universal instruction set, for instance, one that merely executes bounded loops, like BlooP.
  #611  
Old 07-29-2019, 04:27 PM
RaftPeople
Quote:
Originally Posted by wolfpup View Post
...Hence my statement doubting that any non-trivial program could run on a computer that lacks a Turing-complete instruction set,...
Which has been easily countered.



I know this thread is basically dead at this point, but I thought I would explain my post that you considered condescending.

There were two possibilities:
1 - You were not aware that a calculator has a general purpose processor
or
2 - You were aware of that fact and you were going to continue to believe that it wasn't computation even though that position means that two systems that are the same in all significant respects are different from the perspective of computation (meaning computation is not based on a physical attribute)


Option #2 is the far worse assumption. I believe begbert called that position "insane". I considered both positions and because of how far out in left field position #2 is, I made the more favorable assumption about you, that you just didn't know that calculators had general purpose processors.


So what you thought was a condescending post was really the far more favorable post compared to the alternative.
  #612  
Old 07-29-2019, 04:50 PM
begbert2
Quote:
Originally Posted by wolfpup View Post
I understand your point but you don't seem to be understanding mine. Let me restate the aforementioned general principle I mentioned earlier, using perhaps clearer words: the computational properties of an embedded subsystem are not necessarily exposed to (exported to) the larger system in which it's embedded.

An embedded microprocessor (LBA) in isolation is itself Turing complete of course, no argument about that. But its functional role in the larger system in which it's embedded is limited by the functionality exported by the interface of its fixed software; nothing else is available. It's the classic "black box", nothing more, no matter what's inside it. So a system that implements Turing-complete functionality, like one that uses its embedded microprocessor to implement a LISP interpreter, is Turing complete, but one which uses it to implement a basic calculator is not. I find that I'm repeating myself, but I trust that makes it clear, or at least clearer.
I instantly agree that a device that has an interface that disallows access to its UTM innards does not function as a UTM to the outside user.

But the question is whether the box is doing "computation". It has been claimed that "computation" could/should be defined as "what a UTM does when it does its thing". So, what to make of a situation where:

There is a user (U), a box (B), and a UTM contained within the box.
The box only has one button and one light on it. When the button is pressed, the box generates a random program and random data, feeds them into the UTM, and then runs the UTM for up to 10000 cycles. If the UTM halts before then, then the light illuminates. If it doesn't, then the light turns off.
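Schematically, in Python, with a toy two-instruction machine standing in for the UTM and the random data omitted -- everything in this sketch is an illustrative placeholder for the thought experiment, not any particular UTM:

Code:
# Illustrative sketch of the box: press the button, generate a random "program",
# run it for at most 10,000 cycles, light the lamp iff it halted in time.
import random

def random_program(length=20):
    # Toy instructions: ("jmp", target) or ("halt",). Not a real UTM.
    return [random.choice([("jmp", random.randrange(length)), ("halt",)])
            for _ in range(length)]

def press_button(max_cycles=10_000):
    program, pc = random_program(), 0
    for _ in range(max_cycles):
        op = program[pc]
        if op[0] == "halt":
            return True                 # light on: the machine halted in time
        pc = op[1]                      # follow the jump
    return False                        # light off: still running after 10,000 cycles

print("light is", "on" if press_button() else "off")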

So: The box clearly isn't a UTM, because you can't be a UTM unless you have a way to input programs. But the UTM is both clearly a UTM (by definition) and it's clearly behaving in a UTM manner, doing UTM things. If "computation" is a thing UTMs do, then it's definitely doing it.

Your argument, though, appears to be that because B isn't a UTM, nothing within B can be doing computation.

I strongly disagree. The casing you put a processor in doesn't change what the processor is doing. If UTMs do computation when they do what they do, then they do it even if the UTMiness of it all is concealed from all outside users.
  #613  
Old 07-29-2019, 05:16 PM
RaftPeople
Additional thoughts:
It seems that one of the key points of CTM is that a human brain has the flexibility to learn and execute arbitrary algorithms on symbols. If there is one single function that the human brain can't compute (e.g. -27's in all memory elements) that renders the brain slightly less powerful than a Turing-complete system, the general point of CTM does not lose value.

Under no condition would the human brain have been able to compute all functions anyway, so whether the limitation is due to amount of memory or due to slight weaknesses in a particular instruction set doesn't really change much.
  #614  
Old 07-30-2019, 10:11 AM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
Your argument is sufficiently clear, it's just wrong---if there exists an equivalent FSA for a given computation, that computation can be performed by a machine lacking a universal instruction set
But since any computation can be reduced to an FSA, this is a claim that there is no computation that requires a Turing-complete instruction set. This would come as a surprise to the designers of general-purpose digital computers, all of which have Turing-complete instruction sets. It would also come as a surprise to Alan Turing, since a Turing-complete instruction set is required (by definition) to simulate any Turing machine, which is itself the definition of universal computability.
Quote:
Originally Posted by RaftPeople View Post
There were two possibilities:
1 - You were not aware that a calculator has a general purpose processor
or
2 - You were aware of that fact and you were going to continue to believe that it wasn't computation even though that position means that two systems that are the same in all significant respects are different from the perspective of computation (meaning computation is not based on a physical attribute)


Option #2 is the far worse assumption. I believe begbert called that position "insane". I considered both positions and because of how far out in left field position #2 is, I made the more favorable assumption about you, that you just didn't know that calculators had general purpose processors.


So what you thought was a condescending post was really the far more favorable post compared to the alternative.
Maybe it's fitting to finish off this thread with a one-time exception to my promise not to respond to you any more, because I just have to say that this is one of the most bizarre lines of reasoning that I have ever seen, on any subject.

First, the condescending "gotcha" that I apparently was not aware of the astounding fact that electronic calculators can be made out of microprocessors is not only condescending but is one of the most remarkable examples of absolutely utter irrelevance I've ever seen. They're made that way today because microprocessors are cheap, handy devices for implementing miscellaneous logical operations; early electronic calculators, of course, were NOT made that way (they were made with discrete transistors and special-purpose ICs at least until the advent of microprocessors like the Intel 4004 in the early 70s) and before electronic calculators existed, earlier versions were made with mechanical gears. Since they are all equivalent calculators performing exactly the same functions, the mind boggles at what you think the relevance of the microprocessor here is. Not to mention that I've already explained at length -- several times -- the general case of why the presence of an embedded microprocessor in itself has no bearing on the functional properties of the system in which it's embedded.

Second, I have never claimed that the computation performed by one kind of machine is in any way different from the same computation performed by any other kind if they are in fact the same computation. This is an absurd misinterpretation that you keep repeating. The factual assertion is that "computational" in CTM specifically refers to the Turing model of the stored-program digital computer -- not to the nature of any particular computation it performs, but to its ability to execute a sequence of stored instructions on stored data in a completely general way such that it can perform any possible computation (within the limits of its memory and available time). When implemented on digital computers, it refers to the computational capability that is commonly referred to as the von Neumann architecture. Obviously, the substrate of the brain is quite different, but the model is important in illuminating the premise that higher level operations on mental representations are analogous to computer instructions processing symbols, and that cognition emerges from exactly the same kind of computational process.

Quote:
Originally Posted by begbert2 View Post
Your argument, though, appears to be that because B isn't a UTM, then nothing within B can be doing computation, because B isn't a UTM.
Then you have not understood the argument, because I didn't say that. Note the statement that "the computational properties of an embedded subsystem are not necessarily exposed to (exported to) the larger system in which it's embedded" (emphasis added). Emphasis also added to the quote below:

Quote:
Originally Posted by wolfpup View Post
So a system that implements Turing-complete functionality, like one that uses its embedded microprocessor to implement a LISP interpreter, is Turing complete, but one which uses it to implement a basic calculator is not.
  #615  
Old 07-30-2019, 11:52 AM
UY Scuti
Quote:
Originally Posted by RaftPeople View Post
I know this thread is basically dead at this point[...]
It's been great to read this conversation, especially when I had the chance to follow the argumentation of a Higher Self rather than the manifestation of an ego. Thank you everybody for the good moments.
  #616  
Old 07-30-2019, 12:18 PM
begbert2
Wait, the thread is over? I had no idea.

Quote:
Originally Posted by wolfpup View Post
Then you have not understood the argument, because I didn't say that. Note the statement that "the computational properties of an embedded subsystem are not necessarily exposed to (exported to) the larger system in which it's embedded" (emphasis added). Emphasis also added to the quote below:
So if we choose to define "computation" as "what a UTM does", then calculators do computation if one or more of their active components is a UTM. Yes?
  #617  
Old 07-30-2019, 12:39 PM
RaftPeople
Quote:
Originally Posted by wolfpup View Post
But since any computation can be reduced to an FSA, this is a claim that there is no computation that requires a Turing-complete instruction set. This would come as a surprise to the designers of general-purpose digital computers, all of which have Turing-complete instruction sets. It would also come as a surprise to Alan Turing, since a Turing-complete instruction set is required (by definition) to simulate any Turing machine, which is itself the definition of universal computability.
You are confusing capability (a Turing-complete machine can be used to compute more functions than a non-Turing complete machine) with requirement (Turing completeness is not required for a specific computation, as shown by my non-Turing complete system that can compute everything a Turing-complete system can compute except for functions in which all memory is -27).

This is the key point you've been missing:
capability vs requirement




Quote:
Second, I have never claimed that the computation performed by one kind of machine is in any way different from the same computation performed by any other kind if they are in fact the same computation.
Yes you have, multiple times.
  #618  
Old 07-30-2019, 12:58 PM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
But since any computation can be reduced to an FSA, this is a claim that there is no computation that requires a Turing-complete instruction set.
No. FSAs recognize only regular languages; a language such as {a^n b^n} with n ranging over the natural numbers isn't recognized by any FSA (but by a TM). Of course, in any finite stretch of time, there will be an FSA matching that TM's performance, but if that's what you mean, then that just says that the question of universality never matters in a finite world (and thus, in particular, doesn't matter for cognition).
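Concretely: deciding {a^n b^n} takes an unbounded counter, which is exactly what no fixed finite-state automaton has. A Python sketch of such a recognizer, purely for illustration:

Code:
# Recognizer for { a^n b^n : n >= 0 }. The single unbounded counter is what puts
# this language beyond any finite-state automaton (any bounded version, n <= N,
# is of course finite-state).

def in_anbn(s):
    count, seen_b = 0, False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False        # an 'a' after the b-block has begun
            count += 1
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:
                return False        # more b's than a's
        else:
            return False
    return count == 0

print(in_anbn("aaabbb"))  # True
print(in_anbn("aabbb"))   # False
print(in_anbn("ababab"))  # False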

Quote:
Originally Posted by wolfpup View Post
It would also come as a surprise to Alan Turing, since a Turing-complete instruction set is required (by definition) to simulate any Turing machine, which is itself the definition of universal computability.
You may have just been speaking sloppily here, but that's also false: for any non-universal TM, a non-Turing complete instruction set is sufficient.
