Downloading Your Consciousness Just Before Death.

We are also discussing how irrelevant your use of the Chinese Room argument is, but more importantly, the degree to which it is irrelevant.

Again, the point was made to show the progress being made; as usual, it was made only to show that if you were correct, then no progress could be made toward finding how the brain does it, and to show ways in which it might eventually be replicated.

I’ve never appealed to the Chinese Room argument. In fact, I consider it to fail, and have pointed that out in this thread.

The research you pointed to stands in no conflict whatsoever with anything I’ve said.

Good, and in fact computationalism is not just an explanation for cognition, it’s pretty much the only one we have, and as I keep saying, constitutes a major foundation for cognitive science. But your claim has been that cognitive science is wrong; you’ve implied that Fodor was just some guy, and that, as with other outdated theories, CTM is not only destined to be discredited but already has been. Do I really need to go back and find all the pertinent posts?

There are not a lot of options. The prevailing hypotheses are either that consciousness is an emergent property of minds or that it’s an illusion. Both might be true in some sense as a function of perspective. The former implies that it’s not a “hard problem” in the Chalmers sense but an easy one, although we have to reconcile that with the problem you mentioned, due to Chalmers and others, that some element of that property must already exist in the constituent components (not that I buy this, but it’s a common position). The latter implies that it’s not a hard problem but a non-existent one, but that one has to be reconciled with the Cartesian paradox that it’s being proposed by apparently conscious beings.

Now here’s a dude with an entirely different idea. He has three basic tenets: (1) that consciousness is real, (2) that all material things are basically made up of the same kind of stuff, implying a kind of monism, and (3) that emergence of the kind that I proposed isn’t possible (the Chalmers view of necessary constituent qualities). The first two seem reasonable, and introducing the third and following where that logic inexorably goes brings us, in this author’s view, to panpsychism: the doctrine that all material things possess some element of consciousness!

It’s this kind of silliness that leads me to the beliefs that (a) cognition is amenable to a straightforward computational account, and (b) consciousness is an emergent property of cognition. Hence the view that consciousness is computational, and if you don’t like it, then you’re forced into either the interpretation that consciousness doesn’t exist, or panpsychism.

Alternatively one can throw up one’s hands and just go and get drunk, in the spirit of this priceless quote from Fodor: “Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness.”

Only to claim that adding something to it then makes it a better argument, and yours; it does not. And when you tell wolfpup that Searle was not wrong, you are still implying that the box is useful.

From early in the discussion:

Of course, as pointed out before, that is not what researchers look at; adding something to an irrelevant argument is very underwhelming.

Again, it was only to show that when you declare that “there’s no obvious way in which neuron spiking frequencies connect to square roots,” you are also implying that there is no way the brain can have neurons that deal with square roots after practice causes places in the brain to do just that.

You’re right, it’s not an explanation, because in explaining the capacity of interpreting symbols by a mechanism dependent on that very capacity, it explains nothing.

But it’s OK. I’ve been through the grieving process myself when I realized that computationalism couldn’t do what it needs to; I guess some of us just spent a little longer in the denial stage.

And as I keep saying, that nobody has a better idea doesn’t mean it’s a good one, and ignoring the problems of a paradigm because it’s ‘the only game in town’, even if that were true, is antithetical to making progress.

If you do, you might find I’ve not simply claimed that, but provided arguments and citations, instead.

And also, does it never strike you as odd that you feel the philosopher agreeing with you is worthy of more respect, while those that you don’t agree with are just ‘nitwits’?

You say that as if it’s some sort of reductio of the idea. But while I’m not a proponent of panpsychism, why is the idea that matter possesses brute mental aspects any sillier than the idea that it possesses brute physical aspects? Our best current theories of physics depend on lots of properties whose values just seem to be ‘brute facts’; there is no deeper explanation of, say, the ratio of electron to proton mass, or the gauge groups of the standard model; and any proposed deeper explanation just kicks the can further down the road—in string theory, those properties are rooted in the precise way the extra dimensions are compactified, but nobody has any idea where that should come from.

So current physical thinking depends on the existence of physical properties that have no further explanation—they are just so. But then, what makes the existence of mental, or protomental, properties that are ‘just so’ any more silly?

That is obviously not what I said, of course, so I take that to be a sarcastic riff.

So let’s summarize where we are on this. You conceptualized a box with switches and lights and argued that there are multiple fanciful ways of interpreting its output other than binary addition. On the basis of this tenuous argument, which, as I showed you multiple times and with multiple examples, matters not in the least, you nevertheless insist that it invalidates arguably the most fundamental theory of cognition of the past hundred years and the foundation of modern cognitive science. To accept your argument one must accept that Fodor – whom the IEP called the most important philosopher of mind of the late twentieth and early twenty-first centuries – made a high-school-level logical error in proposing his representational theory of mind.

Let’s see what the Encyclopedia Britannica has to say on the subject. I extracted a couple of the most relevant snippets.
The idea that thinking and mental processes in general can be treated as computational processes emerged gradually in the work of the computer scientists Allen Newell and Herbert Simon and the philosophers Hilary Putnam, Gilbert Harman, and especially Jerry Fodor. Fodor was the most explicit and influential advocate of the computational-representational theory of thought, or CRTT—the idea that thinking consists of the manipulation of electronic tokens of sentences in a “language of thought.”

One of Turing’s achievements was to show how computations can be specified purely mechanically, in particular without any reference to the meanings of the symbols over which the computations are defined. Contrary to the assertions of some of CRTT’s critics, notably the American philosopher John Searle, specifying computations without reference to the meanings of symbols does not imply that the symbols do not have any meaning …

Homunculi
Another frequent objection against theories like CRTT, originally voiced by Wittgenstein and Ryle, is that they merely reproduce the problems they are supposed to solve, since they invariably posit processes—such as following rules or comparing one thing with another—that seem to require the very kind of intelligence that the theory is supposed to explain …

This objection might be a problem for a theory such as Freud’s, which posits entities such as the superego and processes such as the unconscious repression of desires. It is not a problem, however, for CRTT, because the central idea behind the development of the theory is Turing’s characterization of computation in terms of the purely mechanical steps of a Turing machine. These steps, such as moving left or right one cell at a time, are so simple and “stupid” that they can obviously be executed without the need of any intelligence at all.
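
As a concrete illustration of the “purely mechanical” point, here is a minimal sketch (my own toy example, not taken from the article) of a Turing-style machine that increments a binary number. Every step consults only the current state and the symbol under the head, never what the symbols mean:

```python
# Minimal Turing-machine sketch: increments a binary number written on the tape.
# Each rule is purely mechanical: (state, symbol) -> (new state, symbol to write, head move).
# Nothing in the rules refers to what the symbols "mean".
RULES = {
    ("seek_end", "0"): ("seek_end", "0", +1),   # scan right over the input
    ("seek_end", "1"): ("seek_end", "1", +1),
    ("seek_end", "_"): ("add_one", "_", -1),    # hit a blank: turn around
    ("add_one", "0"): ("halt", "1", 0),         # 0 -> 1, done
    ("add_one", "1"): ("add_one", "0", -1),     # 1 -> 0, carry left
    ("add_one", "_"): ("halt", "1", 0),         # carried past the left end
}

def run(tape_string):
    tape = dict(enumerate(tape_string))         # sparse tape; "_" is the blank symbol
    state, head = "seek_end", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

print(run("1011"))  # prints "1100": 11 + 1 = 12 in binary
```

The rule table is all there is; whether the tape “really” holds numbers is nowhere consulted.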

I have shown that these other functions are exactly as reasonably computed by the system as binary addition, so that if you want to hold that the device computes binary addition—which is the same claim as, say, that a pocket calculator computes square roots—you must also accept these other functions as being computed by the box. Otherwise, you’re left with a notion of computation according to which it is merely the physical behavior of the system, which is not a notion that can explain how anybody ever computes a square root.
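
To make the f versus f’ point concrete for anyone skimming, here is a minimal sketch (a hypothetical reconstruction of the switches-and-lights box, not the exact table from earlier in the thread): the same fixed switch-to-light behavior comes out as binary addition under one reading of the lamps, and as a different, equally definite function under another.

```python
# Hypothetical reconstruction of the "box" argument: one fixed physical behavior,
# two equally consistent arithmetical interpretations of it.

def box(switches_a, switches_b):
    # The physical box: two 2-bit switch banks in, one 3-bit light pattern out.
    return format(int(switches_a, 2) + int(switches_b, 2), "03b")

def read_standard(bits):
    # Interpretation f: lit = 1, dark = 0 -> the box computes binary addition.
    return int(bits, 2)

def read_swapped(bits):
    # Interpretation f': the roles of lit and dark are exchanged -> the very same
    # behavior instantiates a different (but perfectly well-defined) function.
    return int(bits.translate(str.maketrans("01", "10")), 2)

for a, b in [("01", "10"), ("11", "11")]:
    lights = box(a, b)
    print(f"lights {lights} | as f : {read_standard(a)} op {read_standard(b)} = {read_standard(lights)}"
          f" | as f': {read_swapped(a)} op {read_swapped(b)} = {read_swapped(lights)}")
```

Nothing about the box privileges one reading over the other; the choice lives entirely in the labelling of switches and lights.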

In response to this, as best as I can tell, your strategy was to either claim that there’s somehow a unique way in which binary addition is uniquely computed by the system, or that for such simple systems, their behavior is indeed all that they compute, but that once things get nebulously complicated, semantics just kinda happens. Which of the two depends on what you intend for the table you consider to be the uniquely right one to mean—whether it’s supposed to represent the function of binary addition, or merely the physical evolution of the system.

As for the emergence of semantics, so far, your argument has merely been ‘but Watson does it’—which is just an unsubstantiated claim (no, pointing to the lead researcher’s CV does not constitute substantiation). Since my argument entails that no, Watson doesn’t do it, that mere claim isn’t going to be terribly convincing; there are, after all, many ways of ‘faking’ semantic understanding with all sorts of trickery.

I still don’t see how you intend to settle this matter with popularity contests, but fine. Fodor is out-cited by Searle by a fair margin, although of course a lot of these cites might just be people pointing out how he’s wrong (although they wouldn’t bother if he wasn’t important). Putnam makes the list of ‘most important philosophers of the last 200 years’, Fodor doesn’t (although in fairness, Fodor tops the list of most important philosophers of mind since WWII). Among the 43 most important philosophy books between 1950 and 2000, Putnam has one, Searle has two, and Fodor no entries. Searle’s Rediscovery of the Mind contains the view sometimes called interpretivist pancomputationalism—i.e. that every physical system can be interpreted as computing in multiple ways.

So not only are the people championing the view that the notion of computation is trivialized in the manner I have outlined as highly regarded as Fodor; that very view is contained in some of the most influential contemporary philosophical works. So since one of these must be wrong, one of them must have committed a ‘high-school level logical error’; we differ merely on which one. (Of course, claiming it to be such an error is highly tendentious; the correct way of viewing it is just that it’s a difficult matter with many unsettled issues.)

It looks like you accidentally snipped a kinda relevant part there:

Note that this is a very different objection from the one I propose, though, similar only in leading to the same sort of logical regress.

Seems I forgot the link to the list of books above—here it is: Eminent Philosophers Name the 43 Most Important Philosophy Books Written Between 1950-2000: Wittgenstein, Foucault, Rawls & More | Open Culture

And this is in some way different than my quote from IEP about Fodor being “the most important philosopher of mind of the late 20th and early 21st centuries”?

I agree that the debate is all of those things. My major claim is that CTM is well established in cognitive science today as one of its fundamental foundations, whereas your claim is that CTM is just plain wrong and that someday the fools will see the error of their ways, which is a strange claim considering your correct statement above that it’s a profoundly complicated controversy.

I can’t imagine why on earth you would think that snippet was in any way relevant. Recognizing that the SDMB discourages extensive quotations for copyright reasons, and for general succinctness, I tried to keep it as brief and on-point as possible. That particular snippet neither supports nor refutes my argument; it’s just not relevant either way. Fodor himself would have been the first to acknowledge that his representational theory of mind was incomplete, and indeed in his introduction to The Mind Doesn’t Work That Way, he expressed surprise that anyone could have thought otherwise, because it turned out that a lot of people did.

There is, however, a very large difference between “incomplete” and “wrong”, in the sense of “not even remotely the same thing”, and I would encourage you to appreciate that distinction.

Sure seems the same to me in basic principle, even if cast in somewhat different terms, but fine – if you want to claim it’s a different objection, it makes no difference, because the stated refutation of the objection is exactly the same in both cases, namely that a fundamental premise of CTM is that cognition can be modeled on a Turing machine in the sense of being syntactic operations on symbolic representations. Fodor made this premise explicit.

I did not intend to contest that claim, but merely to show its irrelevance. Yes, Fodor is an important philosopher; but that doesn’t entail he can’t be mistaken: Searle and Putnam are also important philosophers (more so than Fodor, depending on the metric), and authors of some of the most important contemporary philosophical texts, and they share my opposition to his central thesis. That doesn’t tell us that Fodor must be wrong, but it does tell us that an appeal to his stature as a philosopher—as should, really, be obvious—simply doesn’t help settle the issue. Even clever people can be wrong.

I don’t know why it should be strange. Simple matters can lead to complex debates; it happens all the time. Phlogiston is a simple matter; so is the miasma theory of disease. Both were the dominant paradigm for a long time, with many incredibly brilliant adherents; yet neither their well-established nature nor the cleverness of their defenders prevented them from being wrong.

Well, to somebody not familiar with the debate, the part you quoted without context might seem to say that there’s no trouble at all with symbolic meaning in the case of computation—Turing settled the matter, rote syntactic manipulation is all that needs to be present for the symbols present in computations to have meaning. Whereas the immediately following bit that I provided points out that that’s not, in fact, sufficient if one wants to explain the mind: the meanings of the symbols in ordinary computation are derived from their programmers’ intentions; which is something you can’t appeal to in trying to explain the mind.

How the symbols that are used in mental computation acquire their meanings is then just not explained at all by the computational theory of mind—which is nothing but the point I’ve been making over and over again.

The theory may self-admittedly be incomplete, but there are many that claim (or implicitly assume) that it can be completed, in order to yield a full explanation of the mind. It’s that claim which I think is wrong, and which my argument attacks: computation does not suffice for mind, precisely because it can’t account for the intrinsic intentionality of mental symbols.

After all, that the mind is wholly computational is the premise of this thread—otherwise, you couldn’t well ‘download’ it.

This does not work as a refutation of my argument, because, as the snippet of the Encyclopedia Britannica-article I quoted makes clear, ‘no remotely adequate proposal has yet been made’ regarding how the symbols of the mind acquire their meaning. But this is exactly what I’ve been asking for (with my challenge to uniquely implement f—that is, a computation such that the symbolic vehicles used uniquely possess a specific meaning). If my argument is thus right, the premise that cognition can be modeled by a TM is simply wrong.

Also, the strategy the article proclaims may work for something like the rule-following paradox (although to be honest, I don’t think it’s quite that simple), because rule-following behavior may straightforwardly emerge from non-rule-following constituents—the large-scale behavior of systems with random constituents may follow effective rules, and thus, we get just that sort of emergence story missing for semantics. But a symbol either means something—in which case, we can always take it to mean something else—or not. Lacking a story about how that sort of thing emerges is exactly the lacuna of the computational theory, no matter how often you claim that since Fodor is really clever, and lots of people believe in computationalism, there’s no way for it to be wrong.

And as for the ‘silliness’ of the homunculus argument as such, which you’ve also tried to leverage against it, just note Wikipedia:

Consequently, if computationalism actually depends, as I have claimed, on a homunculus regress for supplying meaning to mental symbols, then either there must be some way to ground the regress—which is, once again, what I’ve asked for throughout this thread—or computationalism is simply wrong (as a theory for explaining all the faculties of the mind).

Let me just make the larger point on which I may not have been sufficiently clear. Computational theories have tremendous explanatory power in providing an account of the processes of cognition. It’s probably no exaggeration to say that most of what has been done in cognitive science in the past fifty years has relied on computational theories of one kind or another. Fodor has said many times that it’s hard to imagine the existence of a cognitive science at all without its computational underpinnings.

To then say that these theories must be all wrong because, according to some, we don’t yet have an adequate account of how symbolic representations acquire their meaning strikes me as astoundingly presumptuous. Sometimes in science, “right” and “wrong” are not even meaningful adjectives to describe a theory; rather, we need to ask whether a theory is useful in explaining phenomena, and whether it has predictive value. In cognitive science, computational theories are very useful indeed.

But because the concept is so pervasive and has such explanatory power, it’s become widely accepted that many aspects of cognition are literally computational and could therefore be exactly instantiated on digital computers. Transhumanists take that many steps further and propose that the mind in its entirety can be thus instantiated on a sufficiently powerful computer, complete with its consciousness. That’s quite a stretch from what cognitive science or neuroscience has so far actually established, but it’s not inconceivable that we might actually be able to do it. It’s also quite conceivable that having done so, we’ll still be arguing (perhaps rather pointlessly) about what consciousness “really” is, or how symbols acquire their meaning.

First of all, I’ve been at pains to try and point out that I’m not saying that it’s ‘all wrong’ to model the brain computationally; I’ve merely said that the fact that it can be so modeled doesn’t imply that it is a computer itself. This doesn’t detract from the successes of cognitive science at all, and it’s only you who seem to believe it does.

Furthermore, it’s not that some people merely claim that there isn’t an explanation of how meaning comes about, it’s that there are arguments to the conclusion that computers can never provide an explanation of that. If these arguments are right, then there is at least one part of the mind not amenable to explanation in terms of computation; likewise, a digital copy of the mind must fall short of the real thing.

These arguments must be addressed before claims to the effect that mind is due to computation can become reasonable. This may be done by either finding a flaw with the arguments themselves, or by demonstrating a counterexample. But it can’t be done by appeal to computationalism as the only game in town, or the stature of its proponents, or the CVs of computer scientists: all of this is just flim-flam, rhetorical chaff thrown up in lieu of a better argument.

Now, I don’t believe these arguments can be countered. I think they’re perfectly decisive, and have the potential to tell us what computation ultimately is. I may be wrong about this; I have been wrong before (for instance, about computationalism being the only remotely reasonable theory of the mind). But if I am wrong, at least I want to see a conclusive argument as to how, and where. I won’t just shut up and fall in line because you tell me that Fodor thinks there’s no alternative to computationalism. I think that sort of attitude is fatal, and has harmed research in many areas (just witness the current storm about ‘post-empirical science’ in high energy physics because people still cling to string theory, despite its failures).

But I also don’t believe that, if these arguments are successful, the past fifty years of cognitive science will just be undone. The discovery of quantum theory did not negate classical physics but completed it. Indeed, many issues that seemed paradoxical in classical physics only received an explanation once their quantum-theoretical foundations were recognized. New theories, new paradigms, don’t topple the old to replace them wholesale, but rather extend what works, build on the successes of the past, and clarify its shortcomings.

I fully expect that this is what will happen in cognitive science. But, in order for it to happen, we must not stick to the paradigm simply because it’s the paradigm; its problems must be met openly, and, if they cannot be dispelled, the fundamental assumptions must be questioned, and perhaps replaced. Otherwise, we face nothing but stagnation. Maybe not now, maybe not for years, but such things can only be postponed so far.

Your arguments may have a certain validity, at least to the extent that there are some who will agree with them, but you’re certainly wrong in characterizing them as a decisive refutation of the computational theory of mind. One needs no further evidence of this than the Britannica article I cited earlier, which notes the seemingly intractable difficulty of reconciling the physical and the intentional – of syntax with semantics – yet sees this as no obstacle to a viable theory of computational cognition. Another way of saying this is that your f versus f’ challenge is interesting but irrelevant, as I have tried to point out I don’t know how many times now.

But it’s not just the Britannica article. The aforementioned astounding presumption here seems to be the belief that you’re the only one who has thought of this problem, whereas theorists in computational cognition dealt with it long ago. Here, for instance, is a nearly 40-year-old paper from what may be regarded as the formative days of modern cognitive science, defending a literal view of cognition as computation:
Computation and cognition: issues in the foundations of cognitive science

I would draw your attention specifically to section 3, “Representation and computation”.

I’m tempted once again to quote extensively, but since that wouldn’t be appropriate, let me rehash the reasoning briefly. The paper cites precisely your argument about the addition of two numbers, concluding that in order to view it as arithmetic rather than something else, “we must refer to the meaning of the symbols in the expression and in the printout. These meanings are the referents of the symbols in the domain of numbers. The explanation of why the particular symbol “5” is printed out then follows from these semantic definitions …” [bolding mine].

The argument is then extended to a discussion of the dualism between the intrinsic physical description of a device in terms of its states, in contrast to what it is about, which is a contrast between syntax and semantics. At this point let me make a direct quote of three key paragraphs (bolding mine):
This dual nature of mental functioning (referred to traditionally as the functional or causal, and the intentional) has been a source of profound philosophical puzzlement for a long time (e.g. Putnam 1978). The puzzle arises because, while we believe that people do things because of their goals and beliefs, we nonetheless also assume, for the sake of unity of science and to avoid the extravagance of dualism, that this process is actually carried out by causal sequences of events that can respond only to the intrinsic physical properties of the brain. But how can the process depend both on properties of brain tissue and on some other quite different domain, such as chess or mathematics? The parallel question can of course equally be asked of computers: How can the state transitions in our example depend both on physical laws and on the abstract properties of numbers?

The simple answer is that this happens because both numbers and rules relating numbers are represented in the machine as symbolic expressions and programs, and that it is the physical realization of these representations that determines the machine’s behavior. More precisely, the abstract numbers and rules (e.g. Peano’s axioms) are first expressed in terms of syntactic operations over symbolic expressions or some notation for the number system, and then these expressions are “interpreted” by the built-in functional properties of the physical device. Of course, the machine does not interpret the symbols as numbers, but only as formal patterns that cause the machine to function in some particular way.

Because a computational process has no access to the actual represented domain itself (e.g., a computer has no way of distinguishing whether a symbol represents a number or letter or someone’s name), it is mandatory, if the rules are to continue to be semantically interpretable (say as rules of arithmetic), that all relevant semantic distinctions be mirrored by syntactic distinctions - i.e., by features intrinsic to the representation itself. Such features must in turn be reflected in functional differences in the operation of the device. That is what we mean when we say that a device represents something. Simply put, all and only syntactically encoded aspects of the represented domain can affect the way a process behaves. This rather obvious assertion is the cornerstone of the formalist approach to understanding the notion of process. Haugeland (1978) has made the same point, though in a slightly different way. It is also implicit in Newell’s (1979) “physical symbol system” hypothesis. Many of the consequences of this characteristic of computation and of this way of looking at cognition are far-reaching, however, and not widely acknowledged [for a discussion, see Fodor, this issue].

It should be noted that the paper explicitly points out precisely your argument of multiple interpretations, saying that “the very same physical state recurs in computers under circumstances in which very different processes, operating in quite different domains of interpretation, are being executed. In other words, the machine’s functioning is completely independent of how its states are interpreted”. This is not, however, a problem provided that the formal features of the syntax exactly map, in a one-to-one correspondence, with the characteristics of some represented domain. This is, indeed, the only possible way to understand computation, because “no one has the slightest notion of how carrying out semantically interpreted rules could even be viewed as compatible with natural law”. So therefore any description of any physical system necessarily has to be in terms of syntactical operations over symbolic representations, by virtue of the fact that it is a physical system, whether that physical system is a computer or a brain.
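
A toy example may make the formalist point vivid (my own illustration, not Pylyshyn’s): so long as every relevant semantic distinction is mirrored by a distinct token, the process behaves identically no matter what the tokens are taken to mean, and a consistent relabelling of all symbols changes nothing about its operation.

```python
# Toy illustration of the formalist point: the process reacts only to the form of
# its symbols; a one-to-one renaming of every token leaves its behavior unchanged.

RULES = {("CAT", "IS_PET"): "YES", ("WOLF", "IS_PET"): "NO"}

def process(token, query, rules):
    # The lookup cares only about which token it is handed, not what it "means".
    return rules[(token, query)]

def relabel(rules, mapping):
    # Consistently rename every symbol appearing in the rules.
    return {tuple(mapping[s] for s in key): mapping[value] for key, value in rules.items()}

mapping = {"CAT": "X1", "WOLF": "X2", "IS_PET": "Q", "YES": "A", "NO": "B"}

print(process("CAT", "IS_PET", RULES))              # YES
print(process("X1", "Q", relabel(RULES, mapping)))  # A -- same behavior, new labels
```

That is the “all and only syntactically encoded aspects” claim in miniature; whether it also suffices to fix what the tokens are about is exactly what is under dispute in this thread.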

I finally got round to talking to my daughter about this; she’s a computational biologist, with a doctorate and a job in the field, and everything, and she largely agrees with Half Man Half Wit. Which is a bit disappointing, but there you go. Computation is, in her words, ‘arbitrary’, that is, it can be interpreted in a zillion different ways. She does reckon that a lot of computation goes on in biological systems, and that it can be selected for by evolution, but she says (as someone who creates biological models for a living) that you could never create a complete copy of a human personality in a computer. Apparently there isn’t enough gallium in the world to model a single cell in its entirety.

I still think that a very comprehensive model of a personality should be possible at some time in the distant future, but it would not have true continuity with the original, and if it were conscious at all, it would have the consciousness of an AI, not that of a human. Is that worth doing? Probably; it would be a kind of immortality, but not quite the sort of immortality Kurzweil et al are looking for.

You and I read that article very differently, then. It explicitly notes that, on the question of intentionality, ‘no remotely adequate’ solution has been proposed. If that solution should not be feasible within computationalism—as the triviality arguments suggest—then we’ll have to move beyond it. So computationalism is only feasible (as a full explanation of the mind) if these arguments can be overcome.

I have made copious reference to where my arguments come from. I certainly don’t think of myself as having made a stunning discovery everybody else has missed—rather, as having been persuaded by arguments found frequently (to the present day) in the literature (and formulating my own elaboration of them).

Note that this article predates the attacks on the possibility of uniquely instantiating a computation by Putnam (1988) and Searle (1992). So a lot of what’s said there might have been reasonable then, but would now carry a burden of having to overcome these worries. In particular, it might be reasonable then to hold that it suffices for representation to reflect the syntactical properties of a domain, but that’s precisely what’s under attack in trivialization arguments—in my version, the syntactic properties are explicitly shown to underdetermine the properties of the represented domain.

That said, Pylyshyn is careful not to overstate his case, and defers the question of how symbols are actually supposed to be interpreted:

(Bolding mine.)

Consequently, Pylyshyn leaves the question of whether a unique interpretation of the symbols can be found explicitly open; the triviality arguments, then, aim to show that there will always be an indeterminacy in the choice of such a scheme.

The reference to Haugeland is interesting. There, a procedure is outlined and criteria suggested in order to identify a black box as computing something—concretely, a chess program. Haugeland gives a quasi-empirical account, essentially suggesting that ‘for all practical purposes’, the question can be decided by just watching it play—a stipulated interpretation of the machine’s output will make more sense if it is interpreted as chess moves (in the sense of leading to an adequate chess performance) than, say, stock-market predictions.

The thing is, though, that triviality arguments show precisely this sort of claim to be dubious: from watching the performance of my box, the hypothetical interpretation that casts it in terms of implementing f is exactly as supported as is the interpretation that casts it in terms of implementing f’. Hence, this sort of move doesn’t get around the argument (which, of course, historically led to the development of the strategy against trivialization that e.g. Chalmers follows, namely, the postulation of certain restrictions on what interpretations are supposed to be admissible).

On the semantic view of computation, it is, in fact, these days often straightforwardly admitted that computation is an observer-relative notion. As Shagrir (2006) puts it:

That is, the representational content is taken as primitive—the mental symbols have a certain representational content, and cognitive science merely engages in the business of discovering how that content is manipulated, by virtue of the syntactic manipulations of the symbols themselves, under the assumption that these manipulations are computational in nature.

I have no problems with this perspective. But it falls short of the more grandiose claim that computation explains everything that goes on in the brain: the process by virtue of which mental symbols acquire their representational content is simply left unanalyzed. This strikes me as a methodologically valid move; indeed, possibly the best one available at present. But ideas according to which computation is all that goes on to produce the mind are simply left dangling, without firm footing.

I think HMHW is being too polite in his response to this.

At every turn he is linking the various positions proposed in this thread and their counter arguments to well known philosophers, their papers and to the Stanford Encyclopedia of Philosophy among other places.
Is there anywhere in this thread where he implies that he is the only person who has thought of this problem, or even that the idea is his originally?

That may be true, but that’s not what the current conversation has come to be about, though. According to HMHW’s argument, since every computation requires an interpreter (otherwise he claims, along with Searle et al, that it’s only a trivial syntactic operation on symbols), one cannot speak of any cognitive process as being computational at all, which flies in the face of fifty years of cognitive science research and some of its most important foundational theories.

I tend to agree, though no one really knows, obviously. That was my point in citing the self-trained AlphaGo program, which learned to play expert Go with no human intervention, and whose strategies Go experts have described as “alien” and “like from another dimension”.

We sure did. And that appears to be a recurring theme here, first with the Britannica article, and then the Pylyshyn paper. Correct me if I’m not representing this fairly, but from my perspective, you cherry-pick a quote that cites the difficulties of the computationalist argument, and somehow conclude from this that the article says the opposite of what it actually says.

The Britannica article, for instance, is plainly a discussion of CTM, and specifically the version proposed by Fodor, and its importance in cognitive science. As such, it naturally cites the difficulties with that position, which you take to be insurmountable, and so conclude that it sides with your position. But that’s not what the Britannica article says at all, and I cited it precisely as evidence of how foundationally important CTM has become despite your argument.

This is a good example of that point in a nutshell, and an illustration of how even-handed the article is (bolding mine):
Fodor rightly perceived that something like CRTT, also called **the “computer model of the mind,” is presupposed in an extremely wide range of research in contemporary cognitive psychology, linguistics, artificial intelligence, and philosophy of mind.**

Of course, given the nascent state of many of these disciplines, CRTT is not nearly a finished theory. It is rather a research program, like the proposal in early chemistry that the chemical elements consist of some kind of atoms. Just as early chemists did not have a clue about the complexities that would eventually emerge about the nature of these atoms, so cognitive scientists probably do not have more than very general ideas about the character of the computations and representations that human thought actually involves. **But, as in the case of atomic theory, CRTT seems to be steering research in promising directions.**
I don’t think either Fodor or Pylyshyn would disagree with any of that, including the critical parts. I think it’s quite an accurate assessment.

Except that in the cited paper, ISTM that he does explicitly address all your objections. That an observer is required in order to imbue symbols with the appropriate semantics? Nonsense, he says; all that is required is that the formal features of the syntax exactly map, in a one-to-one correspondence, with the characteristics of some represented domain. That computational states need an observer to fix a unique interpretation? Again, no, one merely accepts that computations can have multiple interpretations, which doesn’t matter as long as the above conditions are met. My version of this has been to say right from the beginning of this argument that it doesn’t matter because all such interpretations are computationally equivalent.

I should mention just as a side note that although Pylyshyn’s views on CTM have evolved over time, he remains to this day a staunch proponent of computationalism as the foundation of cognition, much as the late Jerry Fodor felt the same way to the end, while frankly acknowledging its incompleteness as an explanatory theory for all of human behavior. In fact the paper I cited was later expanded into a book that became regarded as one of the preeminent arguments for CTM (read the description).

But moving on now to your objection to Haugeland’s example, which I take at your word as I haven’t read the paper in question. I note in passing that the chance that an excellent chess-playing program is also making stock-market predictions is vanishingly small, which is my argument about the increasing constraints on complex systems, but nevertheless, I understand and accept your box-with-switches-and-lights argument at face value. But in those terms, if such a program existed, I would be perfectly happy to use it both to play chess and to make a fortune on the stock market. The two functions would truly be equally valid, due to the incredibly improbable happenstance of the program’s formal syntactical operands having a valid mapping to two completely different problem domains.

Your demand for an example of a computation that uniquely computes f is not even meaningful, because it demands a computation that operates on semantics rather than syntax, which is contrary to what a Turing-equivalent computation fundamentally is. The real challenge is to show that this is actually an obstacle to CTM, and there is certainly evidence (repeatedly cited, again just above) that it is not.

Further on the matter of your two functions, one can arbitrarily take the interpretational view that it computes both of them along with an infinity of others, as I already showed, or the computational view that it computes neither, but evinces a behavior that produces light patterns. That you might choose to interpret these lights in particular different ways is irrelevant because the lights are the end product of the totality of what the box does. It’s like arguing that the human mind has exactly the same problem because I can manipulate the beads on an abacus, and the positions of the beads can have arbitrary multiple interpretations. It becomes a silly attempt to extend the true computational final product in a fallacious way. The objective phenomenal consequence here is that in response to cognitive processes I move a bunch of beads around, in the same way that in response to similar cognitive processes I speak and walk and write and do many other things in the physical natural world, some of which may require interpretation according to established common conventions and others of which do not, but all are qualitatively the final end products of my cognition: **they are where my mind’s computation ends. In many cases, perhaps, they may be the nexus of where someone else’s computation begins, but that’s immaterial to the argument.**

Or to put it more simply, this being a nice warm day, if I ask Jeeves to bring me a gin and tonic, there is no room for interpretation as to the nature of the computation that occurred in Jeeves’ mind when he arrives with the refreshing beverage, and this is true whether Jeeves is a human or a robot.

I have a terrible fear that we may be converging on some kind of agreement here! :smiley:

Still, I have to point out that the idea that cognition is a process that operates on mental symbols with representational content (that is, that the symbols map to semantic concepts) and performs syntactic operations on these representations that are at their core computational, is a very profound and important insight that is at the heart of most of cognitive science today.

But “the grandiose claim that computation explains everything that goes on in the brain” is in fact not a claim that anyone has ever made. I thought I was pretty clear on that from the beginning. To quote Fodor again: “There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition. But it hadn’t occurred to me that anyone could suppose that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works”.

I really don’t know what to do any more; it’s like I’m shouting into the wind. I’ve been telling you for over 400 posts now that this isn’t what I’m saying.

I’m saying that there is a specific capacity of minds, the interpretation of symbols—intentionality—which isn’t explained by computation. This is a point acknowledged by the Encyclopedia Britannica article, and it’s the issue Pylyshyn explicitly tables. Everything else may well be computational.

Thus, since there is an aspect of mind that isn’t computational, computation can’t explain the mind—completely. But that doesn’t mean that computation has no explanatory utility.

As for this:

The idea is the very foundation of this thread. If what I’m saying is right, then there’s no meaning to downloading one’s consciousness—if, as the Britannica article puts it, ‘the meaning or content of symbols used by ordinary computers is usually derived by stipulation from the intentional states of their programmers’, then that digital copy will have no meaningful internal states of its own. It might shuffle around symbolic vehicles in the same way as my brain did, but they have no more reference than just inert marks on paper.

Well, from my perspective, you gloss over the problematic bits—which everybody is careful to point out are problematic—to insinuate that the problems I have pointed to are, in fact, long solved, irrelevant, or what have you. But the fact is, they’re not; and while opinion may differ on whether they can be, that they are is simply not supported by the current state of the field.

The Britannica article sides with my position in that it doesn’t claim that syntactic manipulations suffice to pin down semantic meaning, which is something you’re trying to say it does. The same goes, incidentally, for the Pylyshyn paper: he says (correctly) that all a computer can react to are distinctions within the semantic content, which must be mapped to syntactic distinctions; i.e. that symbols for ‘dog’ and ‘cat’ must differ, both in form, and in how they’re manipulated. But this doesn’t fix that they mean dog and cat; that dimension is simply irrelevant to the level of syntactic manipulation.

The computation that’s being performed, however, is only fully individuated by specifying this dimension (see the paper by Shagrir I cited).

It doesn’t matter for the level of what a computational system does with those states, no; but it does matter for individuating computations, and producing mental states. Pylyshyn essentially defers dealing with that question, and makes, to my reading, the same move Shagrir proposes—to accept that the symbols in the mind have some definite interpretation, which yields a uniquely specified computation, without considering how this interpretation comes about.

Again, it doesn’t matter for the symbol-manipulating level—but that’s the problem, rather than the solution, for it obviously matters for mental states, which are not open to interpretation. When I compute square roots, or sums, I do that, and only that; and for that, the content of my mental representations must be definite. But that’s explicitly left open.

(Note that I didn’t say that the program could be seen to make stock market predictions, but rather, that one can use the machine’s performance to exclude that hypothesis—the stock market and a game of chess don’t really stand in the same relation—one of structural equivalence—as my distinct functions. But I am saying that there is a huge number of inequivalent computations you can interpret a chess computer as performing, which can be produced from its state diagram in just the way as I have shown.)

But more to the point, if computations with such double meaning exist, then we run right into the problem that they seem to be very different from minds: our thoughts, beliefs, and desires are not open to further interpretation; they’re perfectly definite. If I want a beer, I want a beer, and not any of an equivalence class of objects bearing the same relations to a set of other functional states of my mind.

Again, the obstacle is in the fact that I can mentally instantiate f, i.e. possess mental representations whose unique referents are the elements of f and which relate to one another as those elements do. If computationalism only gives me the latter, then it fails to explain how I do that.

I wonder if you honestly can’t see that these are in contradiction to one another. If, as you claim, behavior individuates computation, then what computation is being done using the abacus is just the shuffling about of beads; if, on the other hand, I use the abacus to compute something, then my mere shuffling around of beads (i.e. my behavior) does not suffice to pin down the computation I am performing, and neither does Jeeves’ shuffling around the halls of your mansion.

Or, in other words, if I write down the symbols ‘23 + 5 = 28’, then my computation is not exhausted by the production of these symbols, but rather, by operating on the numbers they represent. It’s not the symbols that are being computed, but their meanings—that’s, after all, why we do computations: we want to know what the sum of the numbers 23 and 5 is, not what numerals are output in response to the string of symbols 23 + 5.

Without revisiting yet again the rest of this argumentative quagmire, I just want to highlight a few things that you appear to have misunderstood or otherwise misstated.

But intentionality is absolutely at the core of what cognition fundamentally is! Fodor summed it up neatly in a single sentence (bolding mine): “There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea – that intentional processes are syntactic operations defined on mental representations – is strikingly elegant”.

So it doesn’t matter if you acknowledge that “everything else” about the brain may be computational. It seems to me that if you claim “that there is a specific capacity of minds, the interpretation of symbols—intentionality—which isn’t explained by computation” then it follows that no aspect of cognition can be explained by computation, which is precisely how I characterized your argument. And indeed you’ve been arguing against CTM throughout this thread for just that reason, such as here:
I’ve been presenting a widespread doubt about the computational theory of mind

Yes, that was the original idea of the thread, but then it segued into a broader discussion of CTM, and that’s what I’m defending. My views on uploading the mind or creating a digital consciousness are purely speculative and it’s not something that I or anyone can factually defend.

I’ve never claimed that the problem relating to the semantics of mental representations is “long solved”, “irrelevant”, or anything else of that sort. What I’ve said is that it was recognized as an issue long ago, so this is not a novel argument or a surprise to anyone, but it has not generally been seen as an obstacle to the development of robust and well established computational theories of cognition.