Downloading Your Consciousness Just Before Death.

It occurs to me to question whether it’s useful at all to pursue an argument that asserts that purpose-built components (like math coprocessors) don’t do “computation”. Because the question isn’t really whether cognition can be implemented by computation; it’s whether cognition can be implemented by computers. And computers can absolutely include purpose-built components.

(I’m also highly dubious about claims that purpose-built components could exist that cannot be wholly emulated by a computer simulation, though I suppose I should pay lip service to devices that siphon randomity off of decaying atoms.)

Exactly, there seems to be no value in that odd position, which is probably why I can’t find any support for that kind of thinking. I’ve been googling, but every philosopher, neuroscientist, and computer scientist I find seems not to discard computations just because they were performed by a limited Turing machine.

There is certainly value in being clear that CTM also includes the type of general purpose computation in which a learned algorithm can be applied to arbitrary sets of symbolic input, but that position doesn’t require discarding the computations performed by more limited machinery.

First of all, whose question are we talking about here? The question that started this ridiculous digression was what was meant by the word “computational” in “computational theory of mind”, and my point was that what is meant is computing in the sense of a stored-program digital computer, and hence the references to the Turing model when these theories are described. To the extent that CTM theories apply – and no one claims that they apply to everything about the mind – the principle of multiple instantiation tells us that such cognitive processes are indeed reproducible on digital computers.

I have not claimed that coprocessors don’t do “computation” in certain meaningful senses of the word, but one would correctly conclude from my comments that most such coprocessors are not Turing complete. I mentioned earlier that the humble PDP-8 with its eight discrete instructions was, like any general-purpose computer, Turing complete (in the restricted sense of having finite memory, so more precisely a linear bounded automaton). But a coprocessor for it like the Extended Arithmetic Element (EAE) which added multiply and divide instructions was not, though it certainly performed calculations.

But if you had the budget to get a floating point processor (FPP) for your PDP-8, you were dealing with something quite different. The FPP not only added floating point instructions, it also sought to overcome the limitations of the basic PDP-8 instruction set by adding a whole host of new general-purpose instructions in a new double-word format, with greatly expanded directly addressable address space, index registers, and a variety of double-word test and branch instructions. It was sufficiently complete that one could write any arbitrary program in the FPP instruction set alone, and indeed this was the reason that, when a full-fledged implementation of FORTRAN IV became available for the PDP-8 (a language that is itself, of course, Turing complete), the FPP was a prerequisite: the entire output of the compiler was in the FPP instruction set. (That prerequisite was later dropped, but only because someone wrote an FPP interpreter – which, incidentally, is a fine illustration of the fact that the humble PDP-8 with its eight instructions was Turing complete.)

Thus, the FPP coprocessor was itself Turing complete – essentially a co-equal parallel processor – but the EAE very clearly was not, even though one could argue that “it performs syntactic operations on symbols”. The FPP was a computer in its own right. The EAE, with its implementation of the MUY and DVI instructions, was a kind of calculator add-on. When the FPP was in control, the program counter reflected the flow and branching of the FPP double-word instruction set. The EAE was just a dumb calculator effectively bolted onto the PDP-8 to make certain calculations faster.

I trust this clarifies the distinction I was trying to make.

That was interesting technical/historical stuff, in that section I snipped out for brevity!

It’s definitely the case that this ridiculous digression was based on the difficulty in defining “computation”. Heck, most of the last 500 posts in this thread are due to the difficulty in defining “computation”. In my previous sojourn here three weeks ago the impression I was getting was that (philosophically speaking) the term has nothing at all to do with what the object in question is doing, but rather is based entirely on whether some outside observer chooses to interpret the observable outcomes of the object as being computational or not! Which is, of course, just silly.

Here’s the thing about Turing machines and Turing completeness: you don’t have to be a Turing machine to be Turing complete. And in fact computers aren’t Turing machines; a Turing machine has a read/write head that runs back and forth on a tape, reading and doing stuff to the spot its head is looking at. Computers aren’t built like that – and neither are brains. Nothing is built like that. There are no Turing machines in use in reality that I’m aware of.
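To make the abstraction concrete, here’s a minimal sketch of that tape-and-head model in Python. Everything in it is invented for illustration – the blank symbol, the rule encoding, and the toy machine (which just flips bits until it hits a blank):

```python
# A minimal Turing-machine simulator: a read/write head stepping along a tape.
# The rule encoding and the toy bit-flipping machine below are invented
# purely for illustration.

def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, +1, 0}."""
    cells = dict(enumerate(tape))        # sparse tape; missing cells read as blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")    # "_" is the blank symbol
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: invert a binary string, then halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", 0),
}
print(run_tm(rules, "10110"))  # -> "01001_"
```

Nothing physical is actually built this way, which is the point: real machines are judged by whether they can reproduce this behavior, not by whether they contain a tape.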

This puts non-universal Turing machines in a weird position.

When we realize that we’re not talking about literal Turing machines, but rather stuff that just can do the things Turing machines do, then applying the same transformation to non-universal Turing machines means that you’re now just talking about anything at all that can replicate the behavior of some specific concrete Turing machine. Any specific concrete Turing machine. And there are Turing machines that do nothing at all. So that means that a rock becomes a contender for “something that can emulate a specific concrete Turing machine”.

If the definition of “computation” is “has effects comparable to some single non-universal Turing machine”, then damn near everything is doing computation – once you take the tape and reader head away, a Turing machine just becomes something that has states, alters its state based on external forces in a deterministic way, and can affect other things depending on its state in a deterministic way. If you consider “current position” to be part of a thing’s state, then pretty much everything can be described as interacting with the world in a way entirely dependent on its state, and thus pretty much everything would be computational. About the only exception would be things that are truly random – these things would be ‘outputting’ based on something that couldn’t be part of their state (since that would be deterministic behavior by definition). Note: I don’t believe in true randomity.
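That “states plus deterministic transitions” picture is exactly a finite state machine. A toy Python sketch (the coin-operated turnstile states and inputs are invented for illustration):

```python
# A deterministic finite state machine: (current state, input) -> next state.
# The turnstile below is a stock toy example, invented here purely to
# illustrate the "state + deterministic transitions" framing.

transitions = {
    ("locked",   "coin"): "unlocked",
    ("locked",   "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run_fsm(inputs, state="locked"):
    for symbol in inputs:
        state = transitions[(state, symbol)]   # fully deterministic
    return state

print(run_fsm(["coin", "push", "push"]))  # -> "locked"
```

Under the permissive definition above, anything whose behavior can be tabulated like this counts as “computing”.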

For the record, I’m not sure that this is actually a bad way to define “computation” as it relates to the computational theory of the mind. A computational theory of everything would certainly include minds too! And this is actually the position that the classic “we can simulate minds by simulating all of reality in excruciating detail” argument is taking: that the laws of physics determine the behavior of reality when the various parts of reality are interacting with each other in their various possible states, and thus it’s computational in a way such that all its effects can be emulated by a sufficiently complicated program running on a Turing complete system.

At least a few of the people disagreeing with CTM appear to be directly arguing against this approach - they’re saying that the brain does something non-computational which computers can’t replicate. Presumably they’re using a definition of “computation” that’s less inclusive than “interacts with stuff deterministically”, but I don’t know what that definition would be.

Well of course a digital computer doesn’t literally operate like a Turing machine, but that isn’t really the point. The key idea here, to get down to the basic fundamentals, is that the Turing machine was conceived as an abstraction that has the following key property: it can compute anything that is computable. That’s it. Everything else follows from that, including the conceptualization of the universal Turing machine.

One can get into some pretty silly quandaries, however, if one assumes the converse – that anything that a Turing machine does must be regarded as a computation – because a Turing machine can be defined to do nothing at all, or to just read the first symbol on the tape and halt. Thus RaftPeople’s assertion that “a calculator is a Turing machine” is in the category of “not even wrong” – it’s simply meaningless – and trying to justify it by saying that a calculator “performs syntactic operations on symbols” is just incoherent nonsense.

The important principle here is that it can be shown that certain devices, like a general-purpose digital computer, can perform exactly the same symbolic operations as a Turing machine, and so we can conclude that, within the limits of time and memory capacity, such a device is a restricted form of Turing machine that can also (within those limits) compute anything that is computable. This is a profound observation with foundational implications for both the entire field of computer science and for much of cognitive science. It ultimately has implications about the fundamental nature of intelligence and the ability to instantiate it on different physical substrates. The Turing model allows us to establish that there is a class of such equivalent devices, which includes digital computers and, according to CTM, the cognitive functions of the human mind. The common property of such Turing-equivalent devices is what I mean by “computational” in the context of this discussion. It should be clear by now why it does not include devices like calculators, or a random collection of logic gates, or the EAE add-on to the PDP-8 that I was reminiscing about.

So in your eyes “computational” is synonymous with “Turing complete”, essentially?

It occurs to me that even if the brain is not computational (Turing complete), that fact is not evidence that the brain can’t be wholly emulated on a computer. Ten-key calculators can be wholly emulated on computers, after all.

Then maybe you can clarify why a PC computes but a calculator doesn’t compute.

They both have general purpose processors, memory and are running programs. The calculator happens to be running a program that was loaded into ROM.

Is the issue that it is executing a program that is in ROM?

If a PC was running a calculator program (only, no OS), would the PC stop being computational?

You know, when a sentence begins with “Thus …” it’s generally a clue that it’s the conclusion of an explanation that precedes it. I find it hard to believe that you genuinely don’t understand this point after it was so clearly explained in that post. I suggest that you go back and read it carefully this time, and also read the last paragraph again.

Wolfpup, any computation performed by a real brain can be performed by an appropriate non-Turing complete FSM, simply due to the fact that the brain’s lifetime is finite. Would there be any cognitive difference between an entity governed by the (presumably Turing-complete) brain and the entity governed by the FSM?

My immediate understanding of this rather perplexing question prompts the answer, no, but so what? I note that you loosely throw around terms like “an appropriate non-Turing complete FSM”, which, like the concept of an arbitrary Turing machine, can be arbitrarily trivial. Which is the same fallacy that RaftPeople went off on with his calculator example. What we generally mean by Turing equivalence in the real world is not just some “appropriate” FSM, but the abstraction of a linear bounded automaton. I think I laid out my basic thesis pretty clearly in post #585. At this point I may as well directly address the challenge that RaftPeople posed since he’s probably just going to come back with more gotcha question attempts. “If a PC was running a calculator program (only, no OS), would the PC stop being computational?” How about a computer programmed so that the only thing it does is respond to any input by halting? IOW, it does nothing. Does that device “compute”?

This of course completely misses the whole point (again, #585). This is the same fallacy as the calculator example earlier, and he will continue to misunderstand this issue as long as he thinks of Turing-equivalent computationalism in terms of being “a thing that a device is doing” instead of what it really is: a generalized capability that a device has, namely the capability to perform any computation that it’s possible to specify. This was Turing’s insight, and the one that’s been adapted into the model of the cognitive mind implicit in CTM.

The point is that during its finite lifetime, a brain, even though it is ‘in principle’ universal in the sense that it could carry out arbitrary computations if equipped with sufficient resources, can only actually implement a limited subset of computations. There then exists a non-Turing universal system that can only implement those functions. Replacing a brain with that system will then yield a functional and behavioral duplicate of the original entity.

Now, there are two options. Either the system is also a cognitive duplicate—it will have the same thoughts, beliefs, and the like. Then, the requirement of computational universality you seem to want to impose is just a red herring.

Or, the system won’t be a cognitive duplicate. Then, you’ll have the odd situation that there may be systems that talk, act, and behave like they are cognitively human-like creatures, but won’t be—a kind of zombie problem.

I think most—including you—would reject the second horn of this dilemma. But then, that’s where our puzzlement at your requirement of universality for ‘properly computing’ systems stems from.

Furthermore, the sort of distinction you’re drawing is just profoundly odd from a computational standpoint. If cognition is akin to some computation, then whether that computation is performed on a universal or a special-purpose computer should not have any influence on whether it’s cognition properly so-called—not any more than the same calculation performed on a universal system versus a simple calculator are in any way different sorts of things.

The calculator has the same underlying capabilities as a personal computer but with less memory.

So help me understand why a calculator doesn’t compute but a PC does compute.

Is it because we loaded the program into ROM?

Maybe you just didn’t realize that calculators have Turing complete processors and are programmed with languages like C. If so, just state that and let’s move on. If not, please explain, because I really don’t understand why computer A doesn’t compute but computer B does compute.

Although the issue that HMHW describes is the original and primary issue, this issue about a calculator not computing even though it’s the same as a computer is a valid point, because you seem to be stating that even a Turing complete machine loses its ability to compute under specific conditions.

The follow-up would be to make sure the brain doesn’t hit the same types of conditions. For example, if the issue is that the program is loaded from ROM and the calculator can’t escape that programming, how do we know that the brain doesn’t do the same thing? When you’ve learned an algorithm and have used it for decades, and then someone tries to get you to do it a different way but you can’t at first, are you not computing when that condition arises?

The problem with that reasoning is that “non-Turing universal” (non-Turing complete) doesn’t really define anything because, as we have seen, it can be arbitrarily trivial. I agree with you that option #2 would not be a cognitive duplicate, since some special-purpose system designed to mimic particular behaviors would likely, among other things, fail to evolve in response to new stimuli as a human would. But I think your logic is flawed with regard to the first option, because you jump from “can only actually implement a limited subset of computations” to “there then exists a non-Turing universal system that can only implement those functions” which is many steps too far. The limitations of a physical system do not reduce it to some arbitrary non-Turing complete status; rather, they reduce it to the very specific status of a restricted Turing machine like a real computer, a linear bounded automaton, basically equivalent to a UTM with a bounded tape. The former is essentially undefined, while the latter defines a stored-program digital computer, and this is necessarily the model for the “computational” element of CTM. The two things are very substantially different.

Earlier I mentioned the Turing-complete PDP-8 computer (again, technically a restricted Turing machine) with just 8 instructions. I think one could get that down to just 4 or 5 instructions and still retain all the necessary prerequisites for Turing completeness. (Turing completeness is actually a pretty low bar; Wolfram showed that a Turing machine with just 2 states and 5 symbols could be universal, and controversially, so could one with 2 states and just 3 symbols.) The interesting question with respect to the general problem of cognition, or intelligence if you will, is what happens if one takes it down further, so that Turing completeness is lost. I would posit – and this is only my conjecture – that no matter how many interesting advanced instructions you added, lack of Turing completeness would preclude the kind of analytical and decision-making power that we associate with true intelligence. It would certainly remove it from equivalence with all machines we know of at present that (at least arguably) exhibit such intelligence.
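For a sense of just how low that bar is, here’s a toy sketch of a one-instruction machine: subleq (“subtract and branch if less than or equal to zero”) is known to be Turing complete given unbounded memory. The memory layout and the halt-on-negative-address convention below are my own invented illustration:

```python
# subleq: a single-instruction machine that is Turing complete (given
# unbounded memory). Each instruction occupies three cells (a, b, c):
#   mem[b] -= mem[a]; jump to c if the result is <= 0, else fall through.
# The program layout and halting convention are invented for illustration.

def run_subleq(mem, pc=0, max_steps=10_000):
    for _ in range(max_steps):
        if pc < 0:                     # negative address: halt
            break
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3

# Toy program: clear cell 6 by subtracting it from itself, then halt (-1).
mem = [6, 6, -1,   0, 0, 0,   42]
run_subleq(mem)
print(mem[6])  # -> 0
```

One opcode suffices; everything else – instruction variety, registers, addressing modes – is convenience, not computational power.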

And you think this is for some reason relevant, why? I probably have embedded processors in half my kitchen appliances. There may be one in my doorbell, for all I know. Whether a manufacturer chooses to build a calculator out of an embedded microprocessor instead of discrete logic gates or mechanical gears, or maybe springs and elastics, is totally immaterial to the discussion. What matters is the functionality that I have access to, regardless of how it’s implemented. You still don’t seem to have grasped what this conversation is about, and frankly I find your condescending attitude less than conducive to a productive conversation, so I won’t be responding any further.

Wait just a tick - whether it’s computing or not depends on your access? Are you saying that if I have a computer running that you’re not currently typing a new computer program into, then it’s computing, but if I lock the keyboard in a case and thus make you unable to alter the program, then it no longer is? Because that’s sure what it sounds like you’re saying.

Which of these is computing?

  1. A calculator built in a non-Turing-complete way. Inputs wired directly into the logic and from there to the outputs, with no possible way to use the inputs for anything else. (Until you turn the calculator over, turn 58008 (f) into BOOBS (f’), and the world explodes.)

  2. A full-fledged Windows 10 PC that somebody is running Calculator on, and choosing not to interact with any other part of the machine or desktop applications other than the calculator app.

  3. A full-fledged Windows 10 PC that somebody has jiggered to ONLY run the Calculator app, ignoring all other inputs and clicks anywhere else.

  4. A full-fledged Windows 10 PC that somebody has rigged to only run Calculator, and which has been altered to only listen to the numeric keypad for input and which only outputs to a small, LCD-like pane on the screen.

  5. A handheld calculator that is running a full-fledged copy of Windows 10, which only runs an altered version of Calculator that takes its input from the calculator’s keypad and which only outputs to the calculator’s LCD screen.

Which of these, by number, are doing computation in your opinion, and which are not?

Yes. More precisely, whether or not a device is Turing complete depends – very obviously – on whether one can use it as such to perform any arbitrary computation. If a system has some internal component that might intrinsically be Turing complete but which I cannot utilize in that fashion, because it’s locked into some fixed function, then the system is not Turing complete. Indeed, a Turing machine with a specific tape and fixed action table isn’t Turing complete either.

Just like some doorbell that might ring different tones at different times of day due to an embedded microprocessor is, in fact, just a doorbell and not a UTM. All the embedded microprocessors in various devices may indeed be Turing complete by my earlier description of that being an intrinsic property of the chip, but unless that property is exposed to a usable interface, it may as well not exist. It tells us nothing about the properties of the system it’s embedded in. One cannot conclude that an appliance is Turing complete just because it has a microprocessor in it, when that microprocessor may do nothing more than run a timer and flip a relay.

Remember that we’re not talking here about “computing” in the colloquial sense, but in the Turing-complete sense. Adhering strictly to the guidelines that you set out, all of those are calculators, because that’s all they do. So you appear to have already answered your own question:

Reordering slightly:

Whoa there - I had assumed that we were specifically discussing a hard-wired, not-and-never-and-no-part-of-it-is-Turing-complete device. A simplistic ten-key specifically designed not to include a Turing-complete processor.

If I’d known you were going to interpret that as saying that the minute I deign to run the Calculator program on my PC it stops computing because something that looks like a calculator has appeared on the screen, then I would never have said that, because that’s insane.

You do realize that option 2 was about a normal, unmodified, full-fledged computer, right? Just one that I happen to be using to run a calculator program on, because I want to run a calculator program. You are literally saying that no computer is Turing complete the moment anybody uses a computer for literally anything, because at that point it’s only emulating one Turing machine, not all of them simultaneously.

Your logic also dictates that computers are only Turing complete while a person is using them - if I walk away from a computer then there is no way for it to get various variable instructions, because an important input component - the nut behind the wheel - is missing. Thus computers cease to compute - cease to be Turing complete - the moment anyone looks away.

They also must stop being Turing complete during the lulls between one keystroke and the next and between that keystroke and the one after, because during those periods there is no input and thus no Turing completeness, according to what you’re saying.

Um, yeah. Not really feeling a consensus with you here.

Here’s my take on this - a device is either Turing complete, or it’s not. It either has the capability to emulate any Turing machine, or it doesn’t. And this doesn’t change if the device is locked in a room away from users, or it’s installed inside another box that only makes limited use of it. The component, itself, remains Turing complete, regardless of whether wolfpup can access all its functions. And if computation is defined as “something a Turing complete device does”, then computation happens when such a device does its thing.

Seriously, there are millions of computers in the world that go for long periods with no human accessing them directly, and which drastically limit even the digital input they’ll accept - they’re called “servers”. The SDMB runs on a machine that you’re not allowed to log into and run Halo on; does that mean it’s not Turing complete?

What’s the difference? I think you’re misunderstanding what I mean by “access”, and admittedly my first sentence wasn’t very clear. The principle that can be elucidated here might be stated as follows: the computational properties of an embedded system are not necessarily exposed to the system in which they’re embedded. The only functionality provided by the system is that which is exposed by the fixed code running in the embedded system(s). In simple terms, a basic ten-key calculator as in your example may or may not be built with a CPU microchip, but if it is, the functionality of the closed system accessible to the user is exactly the same as any other way it might have been built. It matters not a whit if the microchip is a Turing-complete CPU running a program or whether it’s built out of mechanical or discrete logical components. In what way could it possibly matter? Does the fact that the microchip is running a program written in C allow you to write some arbitrary program in C, too? If it does, then the device is no longer just a calculator.

To reiterate the basic point yet again in the most basic possible terms, it’s that cognition is believed to work in a manner analogous to that of a digital computer with a Turing-complete instruction set running a program that operates on a set of symbolic data. A calculator doesn’t do that, regardless of what it’s built on, and regardless of what its internal components may be doing.

Yep, but I’m taking you at your word that the user is “choosing not to interact with any other part of the machine or desktop applications other than the calculator app” – and that this condition holds forever, because if the user ever does, then the scenario is no longer valid. Do you see how there is no distinction whatsoever between that scenario and the one you just described as “a hard-wired, not-and-never-and-no-part-of-it-is-turing-complete device. A simplistic ten-key specifically designed not to include a turing-complete processor”? If there is a functional distinction, please explain what it is.

To make it even more clear, a computing platform that is permanently locked into acting as a LISP interpreter offers a capability that is Turing complete, but the same platform that is permanently locked into acting as a calculator does not; nor does one whose sole dedicated function is to cause my doorbell to ring or my oven to go on.
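To put the distinction in concrete terms, here’s a toy sketch: two front ends on the very same Turing-complete substrate (Python), one exposing only a fixed calculator function and one exposing the ability to run arbitrary programs. Both devices are invented illustrations:

```python
# Two devices built on an identical Turing-complete substrate, differing only
# in what capability the closed system exposes. Both are invented examples.

def calculator_device(a, op, b):
    """Fixed-function front end: only four operations are ever reachable."""
    ops = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}
    return ops[op]

def interpreter_device(program):
    """Front end accepting arbitrary programs: the substrate's universality
    is exposed. (eval stands in for a real interpreter here; never use it
    on untrusted input.)"""
    return eval(program)

print(calculator_device(6, "*", 7))          # -> 42, and it can never do more
print(interpreter_device("sum(range(10))"))  # -> 45, or in principle any computable function
```

The hardware underneath is identical; what differs is whether the universality is reachable through the interface.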

No, it’s an elementary fact of computability theory. For every machine with a fixed, finite memory bound you can find an equivalent finite state machine—mathematically, one puts this as DSPACE(O(1)) = REG, where DSPACE is the class of computations that can be performed within a specific memory bound, O(1) denotes that bound to be constant (the situation of a bounded-tape machine—the ‘LBA’ in the sense you’ve been using the term), and REG is the class of regular languages, which are just the languages recognized by an FSA. FSAs are strictly weaker than Turing machines in terms of computation.

Of course, this should be immediately intuitive; after all, any LBA has a finite number of possible states (tape + head configurations), and thus, an equivalent FSA exists just by taking this state space and fitting it out with appropriate state transition rules. It’s of course well recognized that the brain is just such a machine:
(Is the brain an effective Turing machine or a finite-state machine? - PubMed)
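The finiteness argument can be made completely explicit. A toy sketch, assuming a machine with a fixed tape of n cells over a finite alphabet: every configuration (control state, head position, tape contents) can be enumerated outright, and that enumeration is exactly the state set of the equivalent FSA. The sizes below are invented for illustration:

```python
# Why a fixed-memory machine is "just" an FSM: its configuration space
# (control state, head position, tape contents) is finite, so it can be
# enumerated outright. The toy sizes below are invented for illustration.

from itertools import product

def configuration_space(control_states, alphabet, tape_cells):
    """Every possible (state, head, tape) triple of a bounded-tape machine."""
    return [(s, h, t)
            for t in product(alphabet, repeat=tape_cells)
            for s in control_states
            for h in range(tape_cells)]

configs = configuration_space(control_states=["q0", "q1"],
                              alphabet="01",
                              tape_cells=3)
# 2 control states * 3 head positions * 2**3 tapes = 48 configurations,
# so an FSA with (at most) 48 states replicates this machine exactly.
print(len(configs))  # -> 48
```

The numbers explode for realistic memories, but the construction is finite either way.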

Also, as pointed out above, any argument that cognition is only properly so-called if the underlying system could implement arbitrary computations if it is extended in the proper way runs into difficulties with the counterfactual nature of this extension. Essentially, the relevance of the presence of capabilities that may never be used (indeed, can never be used in finite time) means that two systems performing identical tasks—being functionally identical—might be cognitively different, simply because one system is extensible in this way, while the other isn’t (say, by blowing up whenever one tries to use the extended capabilities).

Indeed. And so I don’t see how this puts your argument any farther ahead. To begin with, no one would argue that any physical system (whether computer or brain) was Turing complete in the literal sense of having infinite capacity, and when “Turing complete” is used as a shorthand expression to describe a physical system, it’s always understood to refer to a linear bounded automaton, as I previously said.

What I think I misunderstood in your argument was the idea that case #2 (“the system won’t be a cognitive duplicate”) was represented by some arbitrarily limited non-Turing-universal system that would appear to mimic predefined cognitive functions, but could not evolve new ones as an actual brain could. I spelled out that assumption.

But if you want to make the point that the cognitive mind, as an LBA, can be represented as a functionally equivalent finite state machine, we seem to agree that this is a cognitively equivalent duplicate, but I fail to see how it doesn’t also retain all the computational qualities of the original that I claim. After all, “a linear bounded automaton (LBA) is an abstract machine that would be identical to a Turing machine, except that during a computation with given input its tape-head is not allowed to move outside a bounded region of its infinite tape” (Linear bounded automaton - Esolang).

Your argument seems closely analogous to an argument that could be made about any digital computer; while acknowledging that its instruction set is indeed Turing complete, its finite memory and finite time could be argued to mean that it can only perform a finite subset of those computations. Therefore one could (in theory) define some less powerful non-Turing, non-LBA, and indeed non-computational paradigm that performs all those same functions, like a humongous lookup table – a discussion we’ve had before. This may make for an interesting philosophical rumination, but it detracts nothing from a description of the real physical machine as having a Turing complete instruction set, and that this is a prerequisite to its fundamental capabilities in the real world, which are absent in a machine lacking such universality. So notwithstanding theoretical equivalences to lesser machines, the quality of Turing-complete universality is a critical part of the architecture of any modern general-purpose computer.
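For concreteness, the lookup-table construction is trivial to sketch for any bounded input domain; the function and the 8-bit domain below are invented for illustration:

```python
# The "humongous lookup table" move: over a bounded input domain, any
# computation can be replaced by a precomputed table. The function and
# the 8-bit domain below are invented purely for illustration.

def some_computation(x):
    return (x * x + 7) % 256        # stand-in for any bounded computation

# Precompute every possible input/output pair once...
table = {x: some_computation(x) for x in range(256)}

# ...after which "running the program" is mere retrieval, with nothing
# recognizably algorithmic left in the executing system.
print(table[13] == some_computation(13))  # -> True
```

At realistic scales the table is physically absurd, which is why the theoretical equivalence doesn’t change the description of the real machine.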

And while I’m at it, carrying that thought forward to your other objection – “whether that computation is performed on a universal or a special-purpose computer should not have any influence on whether it’s cognition properly so-called—not anymore than the same calculation performed on a universal system versus a simple calculator are in any way different sorts of things” – I agree. The distinction is not in any difference in the calculation that is performed, but in the underlying capability set of the machine performing them, that one is a computational device in the Turing sense and the other is not.

Well, but the point I’m making is that it’s not clear why it should matter that the system actually is ‘Turing complete’ in this short-hand way, if there’s an equivalent, non-Turing complete system. Replace one with the other—what changes?

It was your claim that only a universal machine, or one that possesses a universal instruction set, was appropriate for the computational theory of mind:

But the FSA-equivalent to a brain isn’t Turing-complete, and doesn’t possess a Turing-complete instruction set. So you’re positing that there is some quality that the brain, as an LBA, has, but which an FSA, as a system not in principle Turing-completable, lacks.

Where do you make the jump from non-LBA to non-computational, though? FSAs are a perfectly respectable model of computation, even if they’re strictly weaker than Turing machines.

So, again: what’s lacking in the FSA that is a complete computational equivalent to the brain’s LBA? Why is it that the Turing-complete instruction set is a prerequisite to its ‘fundamental capabilities’—and what, exactly are those?

And once more, although I sense that this is going to be one of those points you just keep missing over and over again, what about the example of a system that’s identical to a Turing-completable one, but that would just get blown up if it actually were to try and access these extended capabilities—which, however, it never actually does? Would that be a system possessing these ‘fundamental capabilities’, or not?

But if you agree that whether the system is cognitively human-equivalent doesn’t depend on whether the program is executed on a universal or non-universal device, then in what way does that distinction actually make itself manifest? What difference does it make? If a mind (stipulating, for the moment, that minds are computational) were transferred from a Turing-completable substrate to a non-Turing completable one, would there be any difference to it?