#501 - GIGObuster - 06-30-2019, 02:37 PM
Quote:
Originally Posted by Half Man Half Wit View Post
But the former is the question relevant to the thread, the latter, as linked to above, ultimately merely concerns the so-called neural correlates of consciousness---which must clearly be there on any naturalistic theory, but whose identification doesn't directly tell us about their connection to things like meaning or experience.
We are also discussing how irrelevant your use of the Chinese Room argument is, and, more importantly, just how many levels of irrelevant it is.

Again, the point was made to show the progress that has been made; it was made only to show that if you were correct, no such progress could have been made on finding out how the brain does it, nor on finding ways to eventually replicate it.
#502 - Half Man Half Wit - 06-30-2019, 03:48 PM
Quote:
Originally Posted by GIGObuster View Post
We are also discussing how irrelevant your use of the Chinese Room argument is, and, more importantly, just how many levels of irrelevant it is.
I've never appealed to the Chinese Room argument. In fact, I consider it to fail, and have pointed that out in this thread.



Quote:
Again, the point was made to show the progress that has been made; it was made only to show that if you were correct, no such progress could have been made on finding out how the brain does it, nor on finding ways to eventually replicate it.
The research you pointed to stands in no conflict whatsoever with anything I've said.
#503 - wolfpup - 06-30-2019, 04:30 PM
Quote:
Originally Posted by Half Man Half Wit View Post
I actually agree with your claim that the computational theory seems to be just the sort of thing that would be needed in order to make sense of intentionality. I explained why in a response to RaftPeople earlier on, but right now, I'm just getting gateway timeouts, and can't link to it.
Good, and in fact computationalism is not just an explanation for cognition, it's pretty much the only one we have, and as I keep saying, constitutes a major foundation for cognitive science. But your claim has been that cognitive science is wrong; you've implied that Fodor was just some guy, and that CTM, like other outdated theories, is not only destined to be discredited but already has been. Do I really need to go back and find all the pertinent posts?

Quote:
Originally Posted by Half Man Half Wit View Post
Still, it's clear that there is something there to be explained: either, actual phenomenal experience; or, should that not be a valid notion, how we come to think it is. Many think that the latter should be simpler, but I think that's actually quite dubious---even the capacity of being deceived about something seems to require just those mental capabilities that eliminativism seeks to get rid of. But this is really a different kind of discussion from the present one.
There are not a lot of options. The prevailing hypotheses are either that consciousness is an emergent property of minds or that it's an illusion. Both might be true in some sense as a function of perspective. The former implies that it's not a "hard problem" in the Chalmers sense but an easy one, although we have to reconcile that with the problem you mentioned, due to Chalmers and others, that some element of that property must already exist in the constituent components (not that I buy this, but it's a common position). The latter implies that it's not a hard problem but a non-existent one, but that one has to be reconciled with the Cartesian paradox that it's being proposed by apparently conscious beings.

Now here's a dude with an entirely different idea. He has three basic tenets: (1) that consciousness is real, (2) that all material things are basically made up of the same kind of stuff, implying a kind of monism, and (3) that emergence of the kind that I proposed isn't possible (the Chalmers view of necessary constituent qualities). The first two seem reasonable, and introducing the third and following where that logic inexorably leads brings us, in this author's view, to panpsychism: the doctrine that all material things possess some element of consciousness!

It's this kind of silliness that leads me to the beliefs that (a) cognition is amenable to a straightforward computational account, and (b) consciousness is an emergent property of cognition. Hence the view that consciousness is computational, and if you don't like it, then you're forced into either the interpretation that consciousness doesn't exist, or panpsychism.

Alternatively one can throw up one's hands and just go and get drunk, in the spirit of this priceless quote from Fodor: "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness."
#504 - GIGObuster - 06-30-2019, 06:55 PM
Quote:
Originally Posted by Half Man Half Wit View Post
I've never appealed to the Chinese Room argument. In fact, I consider it to fail, and have pointed that out in this thread.
Only to claim that adding something to it makes it a better argument, and yours; it does not. And when you tell wolfpup that Searle was not wrong, you are still implying that the box is useful.

From early in the discussion:
Quote:
Originally Posted by Half Man Half Wit View Post
The argument doesn't depend on it. The lookup table issue was simply something I introduced to counter your claim that Watson 'obviously' has semantic competence; but since that's something you've so far only claimed, I can just as well simply reject it.

On the question of whether it's nonsense, this sort of thing is standard in this discussion. Take, for instance, Jack Copeland's discussion of the program 'Superparry' in Artificial Intelligence: A Philosophical Introduction. Superparry is described as a program containing all possible conversations using sentences of up to 100 words in length. There's a finite number of those, so we have a huge (but finite) lookup table. Superparry then simply looks for the appropriate inputs (the conversational history up to a given point), and selects one of its possible continuations. Superparry, to Copeland, is an 'obviously unthinking program', and, more importantly, in a discussion of the Chinese Room argument, he claims that 'no one believes that a Superparry-type program qualifies as a Chinese understander'. Which is, of course, exactly what I've been saying.
Of course, as pointed out before, that is not what researchers look at; adding something to an irrelevant argument is very underwhelming.
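To make the quoted Superparry design concrete: it is literally just a table lookup keyed on the conversation so far. Here is a minimal Python sketch of that design (the table entries are hypothetical; Copeland's version would tabulate every conversation built from sentences of up to 100 words):

Code:
# A toy, hypothetical Superparry: a pure lookup table keyed on the whole
# conversational history. Nothing here understands anything; histories are
# simply paired with canned continuations.
superparry_table = {
    (): "Hello.",
    ("Hello.",): "How are you?",
    ("Hello.", "Fine, thanks."): "Glad to hear it.",
}

def superparry(history):
    # Look up the entire history; fall back to a stock reply if absent.
    return superparry_table.get(tuple(history), "I have nothing to say.")

print(superparry([]))                           # Hello.
print(superparry(["Hello."]))                   # How are you?
print(superparry(["Hello.", "Fine, thanks."]))  # Glad to hear it.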

Quote:
Originally Posted by Half Man Half Wit View Post
The research you pointed to stands in no conflict whatsoever with anything I've said.
Again, it was only to show that when you declare that "there's no obvious way in which neuron spiking frequencies connect to square roots," you are also implying that there is no way the brain can have neurons that deal with square roots after practice causes areas of the brain to do exactly that.

#505 - Half Man Half Wit - 07-01-2019, 12:05 AM
Quote:
Originally Posted by wolfpup View Post
Good, and in fact computationalism is not just an explanation for cognition, it's pretty much the only one we have, and as I keep saying, constitutes a major foundation for cognitive science.
You're right, it's not an explanation, because in explaining the capacity of interpreting symbols by a mechanism dependent on that very capacity, it explains nothing.

But it's OK. I've been through the grieving process myself when I realized that computationalism couldn't do what it needs to; I guess some of us just spent a little longer in the denial stage.

And as I keep saying, that nobody has a better idea doesn't mean it's a good one, and ignoring the problems of a paradigm because it's 'the only game in town', even if that were true, is antithetical to making progress.

Quote:
But your claim has been that cognitive science is wrong, you've implied that Fodor was just some guy, and, as with other outdated theories, CTM is not only destined to be discredited but already is. Do I really need to go back and find all the pertinent posts?
If you do, you might find I've not simply claimed that, but provided arguments and citations, instead.

And also, does it never strike you as odd that you feel the philosopher who agrees with you is worthy of more respect, while those you don't agree with are just 'nitwits'?

Quote:
Now here's a dude with an entirely different idea. He has three basic tenets: (1) that consciousness is real, (2) that all material things are basically made up of the same kind of stuff, implying a kind of monism, and (3) that emergence of the kind that I proposed isn't possible (the Chalmers view of necessary constituent qualities). The first two seem reasonable, and introducing the third and seeing where that logic inexorably leads us, in this author's view, leads us to panpsychism: the doctrine that all material things possess some element of consciousness!
You say that as if it's some sort of reductio of the idea. But while I'm not a proponent of panpsychism, why is the idea that matter possesses brute mental aspects any sillier than the idea that it possesses brute physical aspects? Our best current theories of physics depend on lots of properties whose values just seem to be 'brute facts'; there is no deeper explanation of, say, the ratio of electron to proton mass, or the gauge groups of the standard model; and any proposed deeper explanation just kicks the can further down the road---in string theory, those properties are rooted in the precise way the extra dimensions are compactified, but nobody has any idea where that should come from.

So current physical thinking depends on the existence of physical properties that have no further explanation---they are just so. But then, what makes the existence of mental, or protomental, properties that are 'just so' any more silly?

#506 - wolfpup - 07-01-2019, 01:19 PM
Quote:
Originally Posted by Half Man Half Wit View Post
You're right, it's not an explanation, because in explaining the capacity of interpreting symbols by a mechanism dependent on that very capacity, it explains nothing.
That is obviously not what I said, of course, so I take that to be a sarcastic riff.

So let's summarize where we are on this. You conceptualized a box with switches and lights and argued that there are multiple fanciful ways of interpreting its output other than binary addition. On the basis of this tenuous argument, which, as I showed you multiple times and with multiple examples, matters not in the least, you nevertheless insist that it invalidates arguably the most fundamental theory of cognition of the past hundred years and the foundation of modern cognitive science. To accept your argument one must accept that Fodor -- whom the IEP called the most important philosopher of mind of the late twentieth and early twenty-first centuries -- made a high-school level logical error in proposing his representational theory of mind.

Let's see what the Encyclopedia Britannica has to say on the subject. I extracted a couple of the most relevant snippets.
The idea that thinking and mental processes in general can be treated as computational processes emerged gradually in the work of the computer scientists Allen Newell and Herbert Simon and the philosophers Hilary Putnam, Gilbert Harman, and especially Jerry Fodor. Fodor was the most explicit and influential advocate of the computational-representational theory of thought, or CRTT—the idea that thinking consists of the manipulation of electronic tokens of sentences in a “language of thought.”
...
One of Turing’s achievements was to show how computations can be specified purely mechanically, in particular without any reference to the meanings of the symbols over which the computations are defined. Contrary to the assertions of some of CRTT’s critics, notably the American philosopher John Searle, specifying computations without reference to the meanings of symbols does not imply that the symbols do not have any meaning ...
...
Homunculi
Another frequent objection against theories like CRTT, originally voiced by Wittgenstein and Ryle, is that they merely reproduce the problems they are supposed to solve, since they invariably posit processes—such as following rules or comparing one thing with another—that seem to require the very kind of intelligence that the theory is supposed to explain ...

This objection might be a problem for a theory such as Freud’s, which posits entities such as the superego and processes such as the unconscious repression of desires. It is not a problem, however, for CRTT, because the central idea behind the development of the theory is Turing’s characterization of computation in terms of the purely mechanical steps of a Turing machine. These steps, such as moving left or right one cell at a time, are so simple and “stupid” that they can obviously be executed without the need of any intelligence at all.
https://www.britannica.com/topic/phi...f-thought-CRTT
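To illustrate how "stupid" those Turing-machine steps are, here is a minimal Python sketch (the machine and tape below are illustrative assumptions, not from the article): each step is a pure table lookup that writes a symbol, moves the head, and changes state, with no reference anywhere to what any symbol means.

Code:
# One purely mechanical Turing-machine step: look up (state, symbol),
# write, move, switch state. No step refers to any symbol's meaning.
def step(state, tape, head, rules):
    symbol = tape.get(head, '_')          # '_' is the blank symbol
    new_state, write, move = rules[(state, symbol)]
    tape[head] = write
    return new_state, head + (1 if move == 'R' else -1)

# An illustrative machine that flips bits until it reads a blank.
rules = {
    ('scan', '0'): ('scan', '1', 'R'),
    ('scan', '1'): ('scan', '0', 'R'),
    ('scan', '_'): ('halt', '_', 'R'),
}
tape = {0: '1', 1: '0', 2: '1'}
state, head = 'scan', 0
while state != 'halt':
    state, head = step(state, tape, head, rules)
print(''.join(tape[i] for i in range(3)))  # prints '010'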

#507 - Half Man Half Wit - 07-02-2019, 12:20 AM
Quote:
Originally Posted by wolfpup View Post
So let's summarize where we are on this. You conceptualized a box with switches and lights and argued that there are multiple fanciful ways of interpreting its output other than binary addition.
I have shown that these other functions are exactly as reasonably computed by the system as binary addition, so that if you want to hold that the device computes binary addition---which is the same claim as, say, that a pocket calculator computes square roots---, you must also accept these other functions as being computed by the box. Otherwise, you're left with a notion of computation according to which it is merely the physical behavior of the system, which is not a notion that can explain how anybody ever computes a square root.
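For concreteness, here is a minimal sketch of that point, with hypothetical wiring and labels: one and the same switch-to-lamp behavior counts as computing binary addition under one reading of the lamps, and a different function f' under the complementary reading, with nothing in the physics to favor either.

Code:
# The box's physics: a fixed mapping from four switch states (0/1) to
# three lamp states. This is pure behavior; no function is singled out.
def box(s1, s2, s3, s4):
    a = 2 * s1 + s2        # wiring chosen so that, under one reading,
    b = 2 * s3 + s4        # the lamps happen to display a binary sum
    total = a + b
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)

def read_f(lamps):
    # Interpretation 1: lamp on = 1, off = 0 -> 'the box adds'.
    return 4 * lamps[0] + 2 * lamps[1] + lamps[2]

def read_f_prime(lamps):
    # Interpretation 2: lamp on = 0, off = 1 -> a different function f'.
    return 4 * (1 - lamps[0]) + 2 * (1 - lamps[1]) + (1 - lamps[2])

lamps = box(0, 1, 1, 0)      # under reading 1: a = 1, b = 2
print(read_f(lamps))         # 3 -> binary addition
print(read_f_prime(lamps))   # 4 -> f', equally supported by the physics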

In response to this, as best as I can tell, your strategy was to either claim that there's somehow a unique way in which binary addition is computed by the system, or that for such simple systems, their behavior is indeed all that they compute, but that once things get nebulously complicated, semantics just kinda happens. Which of the two depends on what you intend for the table you consider to be the uniquely right one to mean---whether it's supposed to represent the function of binary addition, or merely the physical evolution of the system.

As for the emergence of semantics, so far, your argument has merely been 'but Watson does it'---which is just an unsubstantiated claim (no, pointing to the lead researcher's CV does not constitute substantiation). Since my argument entails that no, Watson doesn't do it, that mere claim isn't going to be terribly convincing; there are, after all, many ways of 'faking' semantic understanding with all sorts of trickery.

Quote:
To accept your argument one must accept that Fodor -- whom the IEP called the most important philosopher of mind of the late twentieth and early twenty-first centuries -- made a high-school level logical error in proposing his representational theory of mind.
I still don't see how you intend to settle this matter with popularity contests, but fine. Fodor is out-cited by Searle by a fair margin, although of course a lot of these cites might just be people pointing out how he's wrong (although they wouldn't bother if he wasn't important). Putnam makes the list of 'most important philosophers of the last 200 years', Fodor doesn't (although in fairness, Fodor tops the list of most important philosophers of mind since WWII). Among the 43 most important philosophy books between 1950 and 2000, Putnam has one, Searle has two, and Fodor has none. Searle's Rediscovery of the Mind contains the view sometimes called interpretivist pancomputationalism---i.e., that every physical system can be interpreted as computing in multiple ways.

So not only are the people championing the view that the notion of computation is trivialized in the manner I have outlined as highly regarded as Fodor, but that very view is contained in some of the most highly influential contemporary philosophical works. So since one of these must be wrong, one of them must have committed a 'high-school level logical error'; we differ merely on which one. (Of course, claiming it to be such an error is highly tendentious; the correct way of viewing it is just that it's a difficult matter with many unsettled issues.)

Quote:

One of Turing’s achievements was to show how computations can be specified purely mechanically, in particular without any reference to the meanings of the symbols over which the computations are defined. Contrary to the assertions of some of CRTT’s critics, notably the American philosopher John Searle, specifying computations without reference to the meanings of symbols does not imply that the symbols do not have any meaning ...
It looks like you accidentally snipped a kinda relevant part there:

Quote:
Originally Posted by Encyclopedia Britannica
But, as already noted, the meaning or content of symbols used by ordinary computers is usually derived by stipulation from the intentional states of their programmers. In contrast, the symbols involved in human mental activity presumably have intrinsic meaning or intentionality. The real problem for CRTT, therefore, is how to explain the intrinsic meaning or intentionality of symbols in the brain.

This is really just an instance of the general problem already noted of filling the explanatory gap between the physical and the intentional—the problem of answering the challenge raised by Brentano’s thesis. No remotely adequate proposal has yet been made [...]
Quote:
Originally Posted by wolfpup View Post
Homunculi
Another frequent objection against theories like CRTT, originally voiced by Wittgenstein and Ryle, is that they merely reproduce the problems they are supposed to solve, since they invariably posit processes—such as following rules or comparing one thing with another—that seem to require the very kind of intelligence that the theory is supposed to explain ...
Note that this is a very different objection from the one I propose, though, similar only in leading to the same sort of logical regress.
#508 - Half Man Half Wit - 07-02-2019, 12:32 AM
Seems I forgot the link to the list of books above---here it is: http://www.openculture.com/2018/04/e...1950-2000.html
#509 - wolfpup - 07-02-2019, 07:30 PM
Quote:
Originally Posted by Half Man Half Wit View Post
... Putnam makes the list of 'most important philosophers of the last 200 years', Fodor doesn't (although in fairness, Fodor tops the list of most important philosophers of mind since WWII).
And this is in some way different than my quote from IEP about Fodor being "the most important philosopher of mind of the late 20th and early 21st centuries"?
Quote:
Originally Posted by Half Man Half Wit View Post
(Of course, claiming it to be such an error is highly tendentious; the correct way of viewing it is just that it's a difficult matter with many unsettled issues.)
I agree that the debate is all of those things. My major claim is about how well established CTM is in cognitive science today as one of its foundations, whereas your claim is that CTM is just plain wrong and that someday the fools will see the error of their ways, which is a strange claim considering your correct statement above that it's a profoundly complicated controversy.
Quote:
Originally Posted by Half Man Half Wit View Post
It looks like you accidentally snipped a kinda relevant part there:
I can't imagine why on earth you would think that snippet was in any way relevant. Recognizing that the SDMB discourages extensive quotations for copyright reasons, and for general succinctness, I tried to keep it as brief and on-point as possible. That particular snippet neither supports nor refutes my argument; it's just not relevant either way. Fodor himself would have been the first to acknowledge that his representational theory of mind was incomplete, and indeed in his introduction to The Mind Doesn't Work That Way, he expressed surprise that anyone could have thought otherwise, because it turned out that a lot of people did.

There is, however, a very large difference between "incomplete" and "wrong", in the sense of "not even remotely the same thing", and I would encourage you to appreciate that distinction.
Quote:
Originally Posted by Half Man Half Wit View Post
Note that this is a very different objection from the one I propose, though, similar only in leading to the same sort of logical regress.
Sure seems the same to me in basic principle, even if cast in somewhat different terms, but fine -- if you want to claim it's a different objection, it makes no difference, because the stated refutation of the objection is exactly the same in both cases, namely that a fundamental premise of CTM is that cognition can be modeled on a Turing machine in the sense of being syntactic operations on symbolic representations. Fodor made this premise explicit.
#510 - Half Man Half Wit - 07-03-2019, 12:07 AM
Quote:
Originally Posted by wolfpup View Post
And this is in some way different than my quote from IEP about Fodor being "the most important philosopher of mind of the late 20th and early 21st centuries"?
I did not intend to contest that claim, but merely to show its irrelevance. Yes, Fodor is an important philosopher; but that doesn't entail he can't be mistaken: Searle and Putnam are also important philosophers (more so than Fodor, depending on the metric), and authors of some of the most important contemporary philosophical texts, and they share my opposition to his central thesis. That doesn't tell us that Fodor must be wrong, but it does tell us that an appeal to his stature as a philosopher---as should, really, be obvious---simply doesn't help settle the issue. Even clever people can be wrong.

Quote:
I agree that the debate is all of those things. My major claim is about how well established CTM is in cognitive science today as one of its foundations, whereas your claim is that CTM is just plain wrong and that someday the fools will see the error of their ways, which is a strange claim considering your correct statement above that it's a profoundly complicated controversy.
I don't know why it should be strange. Simple matters can lead to complex debates; it happens all the time. Phlogiston is a simple matter, so is the miasma theory of disease; both were the dominant paradigm for a long time, with many incredibly brilliant adherents; yet neither their well-established nature nor the cleverness of their defenders prevented them from being wrong.

Quote:
I can't imagine why on earth you would think that snippet was in any way relevant. Recognizing that the SDMB discourages extensive quotations for copyright reasons, and for general succinctness, I tried to keep it as brief and on-point as possible.
Well, to somebody not familiar with the debate, the part you quoted without context might seem to say that there's no trouble at all with symbolic meaning in the case of computation---Turing settled the matter, rote syntactic manipulation is all that needs to be present for the symbols present in computations to have meaning. Whereas the immediately following bit that I provided points out that that's not, in fact, sufficient if one wants to explain the mind: the meanings of the symbols in ordinary computation are derived from their programmers' intentions; which is something you can't appeal to in trying to explain the mind.

How the symbols that are used in mental computation acquire their meanings is then just not explained at all by the computational theory of mind---which is nothing but the point I've been making over and over again.

Quote:
There is, however, a very large difference between "incomplete" and "wrong", in the sense of "not even remotely the same thing", and I would encourage you to appreciate that distinction.
The theory may self-admittedly be incomplete, but there are many that claim (or implicitly assume) that it can be completed, in order to yield a full explanation of the mind. It's that claim which I think is wrong, and which my argument attacks: computation does not suffice for mind, precisely because it can't account for the intrinsic intentionality of mental symbols.

After all, that the mind is wholly computational is the premise of this thread---otherwise, you couldn't well 'download' it.

Quote:
Sure seems the same to me in basic principle, even if cast in somewhat different terms, but fine -- if you want to claim it's a different objection, it makes no difference, because the stated refutation of the objection is exactly the same in both cases, namely that a fundamental premise of CTM is that cognition can be modeled on a Turing machine in the sense of being syntactic operations on symbolic representations. Fodor made this premise explicit.
This does not work as a refutation of my argument, because, as the snippet of the Encyclopedia Britannica-article I quoted makes clear, 'no remotely adequate proposal has yet been made' regarding how the symbols of the mind acquire their meaning. But this is exactly what I've been asking for (with my challenge to uniquely implement f---that is, a computation such that the symbolic vehicles used uniquely possess a specific meaning). If my argument is thus right, the premise that cognition can be modeled by a TM is simply wrong.

Also, the strategy the article proclaims may work for something like the rule-following paradox (although to be honest, I don't think it's quite that simple), because rule following behavior may straightforwardly emerge from non-rule following constituents---the large-scale behavior of systems with random constituents may follow effective rules, and thus, we get just that sort of emergence story missing for semantics. But a symbol either means something---in which case, we can always take it to mean something else---or not. Lacking a story about how that sort of thing emerges is exactly the lacuna of the computational theory, no matter how often you claim that since Fodor is really clever, and lots of people believe in computationalism, there's no way for it to be wrong.

And as for the 'silliness' of the homunculus argument as such, which you've also tried to leverage against it, just note Wikipedia:
Quote:
Homunculus arguments are always fallacious unless some way can be found to 'ground' the regress. In psychology and philosophy of mind, 'homunculus arguments' (or the 'homunculus fallacies') are extremely useful for detecting where theories of mind fail or are incomplete.
Consequently, if computationalism actually depends, as I have claimed, on a homunculus regress for supplying meaning to mental symbols, then either, there must be some way to ground the regress---which is, once again, what I've asked for throughout this thread---, or, computationalism is simply wrong (as a theory for explaining all the faculties of the mind).
#511 - wolfpup - 07-03-2019, 05:48 AM
Quote:
Originally Posted by Half Man Half Wit View Post
I don't know why it should be strange. Simple matters can lead to complex debates; it happens all the time. Phlogiston is a simple matter, so is the miasma theory of disease; both were the dominant paradigm for a long time, with many incredibly brilliant adherents; yet neither their well-established nature nor the cleverness of their defenders prevented them from being wrong.


Well, to somebody not familiar with the debate, the part you quoted without context might seem to say that there's no trouble at all with symbolic meaning in the case of computation---Turing settled the matter, rote syntactic manipulation is all that needs to be present for the symbols present in computations to have meaning. Whereas the immediately following bit that I provided points out that that's not, in fact, sufficient if one wants to explain the mind: the meanings of the symbols in ordinary computation are derived from their programmers' intentions; which is something you can't appeal to in trying to explain the mind.
Let me just make the larger point on which I may not have been sufficiently clear. Computational theories have tremendous explanatory power in providing an account of the processes of cognition. It's probably no exaggeration to say that most of what has been done in cognitive science in the past fifty years has relied on computational theories of one kind or another. Fodor has said many times that it's hard to imagine the existence of a cognitive science at all without its computational underpinnings.

To then say that these theories must be all wrong because, according to some, we don't yet have an adequate account of how symbolic representations acquire their meaning strikes me as astoundingly presumptuous. Sometimes in science, "right" and "wrong" are not even meaningful adjectives to describe a theory; rather, we need to ask whether a theory is useful in explaining phenomena, and whether it has predictive value. In cognitive science, computational theories are very useful indeed.

But because the concept is so pervasive and has such explanatory power, it's become widely accepted that many aspects of cognition are literally computational and could therefore be exactly instantiated on digital computers. Transhumanists take that many steps further and propose that the mind in its entirety can be thus instantiated on a sufficiently powerful computer, complete with its consciousness. That's quite a stretch from what cognitive science or neuroscience has so far actually established, but it's not inconceivable that we might actually be able to do it. It's also quite conceivable that having done so, we'll still be arguing (perhaps rather pointlessly) about what consciousness "really" is, or how symbols acquire their meaning.
#512 - Half Man Half Wit - 07-03-2019, 09:58 AM
Quote:
Originally Posted by wolfpup View Post
To then say that these theories must be all wrong because, according to some, we don't yet have an adequate account of how symbolic representations acquire their meaning strikes me as astoundingly presumptuous.
First of all, I've been at pains to try and point out that I'm not saying that it's 'all wrong' to model the brain computationally; I've merely said that the fact that it can be so modeled doesn't imply that it is a computer itself. This doesn't detract from the successes of cognitive science at all, and it's only you who seem to believe it does.

Furthermore, it's not that some people merely claim that there isn't an explanation of how meaning comes about, it's that there are arguments to the conclusion that computers can never provide an explanation of that. If these arguments are right, then there is at least one part of the mind not amenable to explanation in terms of computation; likewise, a digital copy of the mind must fall short of the real thing.

These arguments must be addressed before claims to the effect that mind is due to computation can become reasonable. This may be done by either finding a flaw with the arguments themselves, or by demonstrating a counterexample. But it can't be done by appeal to computationalism as the only game in town, or the stature of its proponents, or the CVs of computer scientists: all of this is just flim-flam, rhetorical chaff thrown up in lieu of a better argument.

Now, I don't believe these arguments can be countered. I think they're perfectly decisive, and have the potential to tell us what computation ultimately is. I may be wrong about this; I have been wrong before (for instance, about computationalism being the only remotely reasonable theory of the mind). But if I am wrong, at least I want to see a conclusive argument as to how, and where. I won't just shut up and fall in line because you tell me that Fodor thinks there's no alternative to computationalism. I think that sort of attitude is fatal, and has harmed research in many areas (just witness the current storm about 'post-empirical science' in high energy physics because people still cling to string theory, despite its failures).

But I also don't believe that, if these arguments are successful, the past fifty years of cognitive science will just be undone. The discovery of quantum theory has not negated, but completed classical physics. Indeed, many issues that seemed paradoxical on classical physics only received an explanation upon realizing their quantum theoretical foundations. New theories, new paradigms, don't topple the old to replace them wholesale, but rather, extend what works, build on the successes of the past, and clarify its shortcomings.

I fully expect that this is what will happen in cognitive science. But, in order for it to happen, we must not stick to the paradigm simply because it's the paradigm; its problems must be met openly, and, if they cannot be dispelled, the fundamental assumptions must be questioned, and perhaps replaced. Otherwise, we face nothing but stagnation. Maybe not now, maybe not for years, but such things can only be postponed so far.
#513 - wolfpup - 07-03-2019, 01:15 PM
Quote:
Originally Posted by Half Man Half Wit View Post
These arguments must be addressed before claims to the effect that mind is due to computation can become reasonable ...

... Now, I don't believe these arguments can be countered. I think they're perfectly decisive, and have the potential to tell us what computation ultimately is. I may be wrong about this; I have been wrong before (for instance, about computationalism being the only remotely reasonable theory of the mind). But if I am wrong, at least I want to see a conclusive argument as to how, and where. I won't just shut up and fall in line because you tell me that Fodor thinks there's no alternative to computationalism. I think that sort of attitude is fatal, and has harmed research in many areas (just witness the current storm about 'post-empirical science' in high energy physics because people still cling to string theory, despite its failures).
Your arguments may have a certain validity, at least to the extent that there are some who will agree with them, but you're certainly wrong in characterizing them as a decisive refutation of the computational theory of mind. One needs no further evidence of this than the Britannica article I cited earlier, which notes the seemingly intractable difficulty of reconciling the physical and the intentional -- of syntax with semantics -- yet sees this as no obstacle to a viable theory of computational cognition. Another way of saying this is that your f versus f' challenge is interesting but irrelevant, as I have tried to point out I don't know how many times now.

But it's not just the Britannica article. The aforementioned astounding presumption here seems to be the belief that you're the only one who has thought of this problem, whereas theorists in computational cognition dealt with it long ago. Here, for instance, is a nearly 40-year-old paper from what may be regarded as the formative days of modern cognitive science, defending a literal view of cognition as computation:
Computation and cognition: issues in the foundations of cognitive science

I would draw your attention specifically to section 3, "Representation and computation".

I'm tempted once again to quote extensively, but since that wouldn't be appropriate, let me rehash the reasoning briefly. The paper cites precisely your argument about the addition of two numbers, concluding that in order to view it as arithmetic rather than something else, "we must refer to the meaning of the symbols in the expression and in the printout. These meanings are the referents of the symbols in the domain of numbers. The explanation of why the particular symbol "5" is printed out then follows from these semantic definitions ..." [bolding mine].

The argument is then extended to a discussion of the dualism between the intrinsic physical description of a device in terms of its states, in contrast to what it is about, which is a contrast between syntax and semantics. At this point let me make a direct quote of three key paragraphs (bolding mine):
This dual nature of mental functioning (referred to traditionally as the functional or causal, and the intentional) has been a source of profound philosophical puzzlement for a long time (e.g. Putnam 1978). The puzzle arises because, while we believe that people do things because of their goals and beliefs, we nonetheless also assume, for the sake of unity of science and to avoid the extravagance of dualism, that this process is actually carried out by causal sequences of events that can respond only to the intrinsic physical properties of the brain. But how can the process depend both on properties of brain tissue and on some other quite different domain, such as chess or mathematics? The parallel question can of course equally be asked of computers: How can the state transitions in our example depend both on physical laws and on the abstract properties of numbers?

The simple answer is that this happens because both numbers and rules relating numbers are represented in the machine as symbolic expressions and programs, and that it is the physical realization of these representations that determines the machine's behavior. More precisely, the abstract numbers and rules (e.g. Peano's axioms) are first expressed in terms of syntactic operations over symbolic expressions or some notation for the number system, and then these expressions are "interpreted" by the built-in functional properties of the physical device. Of course, the machine does not interpret the symbols as numbers, but only as formal patterns that cause the machine to function in some particular way.

Because a computational process has no access to the actual represented domain itself (e.g., a computer has no way of distinguishing whether a symbol represents a number or letter or someone's name), it is mandatory, if the rules are to continue to be semantically interpretable (say as rules of arithmetic), that all relevant semantic distinctions be mirrored by syntactic distinctions - i.e., by features intrinsic to the representation itself. Such features must in turn be reflected in functional differences in the operation of the device. That is what we mean when we say that a device represents something. Simply put, all and only syntactically encoded aspects of the represented domain can affect the way a process behaves. This rather obvious assertion is the cornerstone of the formalist approach to understanding the notion of process. Haugeland (1978) has made the same point, though in a slightly different way. It is also implicit in Newell's (1979) "physical symbol system" hypothesis. Many of the consequences of this characteristic of computation and of this way of looking at cognition are far-reaching, however, and not widely acknowledged [for a discussion, see Fodor, this issue].
It should be noted that the paper explicitly anticipates precisely your argument of multiple interpretations, saying that "the very same physical state recurs in computers under circumstances in which very different processes, operating in quite different domains of interpretation, are being executed. In other words, the machine's functioning is completely independent of how its states are interpreted". This is not, however, a problem provided that the formal features of the syntax exactly map, in a one-to-one correspondence, onto the characteristics of some represented domain. This is, indeed, the only possible way to understand computation, because "no one has the slightest notion of how carrying out semantically interpreted rules could even be viewed as compatible with natural law". Therefore any description of any physical system necessarily has to be in terms of syntactic operations over symbolic representations, by virtue of the fact that it **is** a physical system, whether that physical system is a computer or a brain.
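Pylyshyn's parenthetical point quoted above, that a computer has no way of distinguishing whether a symbol represents a number or a letter or someone's name, is easy to exhibit directly. A tiny sketch (illustrative, not from the paper):

Code:
# One and the same physical bit pattern, under three interpretations.
# The machine's functioning is identical in all three cases.
bits = 0b01000001
print(bits)           # 65   -> read as a number
print(chr(bits))      # 'A'  -> read as a letter
print(bytes([bits]))  # b'A' -> read as a raw byte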

#514 - eburacum45 - 07-04-2019, 05:37 AM
I finally got round to talking to my daughter about this; she's a computational biologist, with a doctorate and a job in the field, and everything, and she largely agrees with Half Man Half Wit. Which is a bit disappointing, but there you go. Computation is, in her words, 'arbitrary', that is, it can be interpreted in a zillion different ways. She does reckon that a lot of computation goes on in biological systems, and that it can be selected for by evolution, but she says (as someone who creates biological models for a living) that you could never create a complete copy of a human personality in a computer. Apparently there isn't enough gallium in the world to model a single cell in its entirety.

I still think that a very comprehensive model of a personality should be possible at some time in the distant future, but it would not have true continuity with the original, and if it were conscious at all, it would have the consciousness of an AI, not that of a human. Is that worth doing? Probably; it would be a kind of immortality, but not quite the sort of immortality Kurzweil et al are looking for.
#515 - Half Man Half Wit - 07-04-2019, 05:41 AM
Quote:
Originally Posted by wolfpup View Post
Your arguments may have a certain validity, at least to the extent that there are some who will agree with them, but you're certainly wrong in characterizing them as a decisive refutation of the computational theory of mind. One needs no further evidence of this than the Britannica article I cited earlier, which notes the seemingly intractable difficulty of reconciling the physical and the intentional -- of syntax with semantics -- yet sees this as no obstacle to a viable theory of computational cognition.
You and I read that article very differently, then. It explicitly notes that, on the question of intentionality, 'no remotely adequate' solution has been proposed. If that solution should not be feasible within computationalism---as the triviality arguments suggest---then we'll have to move beyond it. So computationalism is only feasible (as a full explanation of the mind) if these arguments can be overcome.

Quote:
But it's not just the Britannica article. The aforementioned astounding presumption here seems to be the belief that you're the only one who has thought of this problem, whereas theorists in computational cognition dealt with it long ago.
I have made copious reference to where my arguments come from. I certainly don't think of myself as having made a stunning discovery everybody else has missed---rather, as having been persuaded by arguments found frequently (to the present day) in the literature (and formulating my own elaboration of them).

Quote:
Here, for instance, is a nearly 40-year-old paper from what may be regarded as the formative days of modern cognitive science, defending a literal view of cognition as computation:
Note that this article predates the attacks on the possibility of uniquely instantiating a computation by Putnam (1988) and Searle (1992). So a lot of what's said there might have been reasonable then, but would now carry a burden of having to overcome these worries. In particular, it might be reasonable then to hold that it suffices for representation to reflect the syntactical properties of a domain, but that's precisely what's under attack in trivialization arguments---in my version, the syntactic properties are explicitly shown to underdetermine the properties of the represented domain.

That said, Pylyshyn is careful not to overstate his case, and defers the question of how symbols are actually supposed to be interpreted:

Quote:
More precisely, the privileged vocabulary claim asserts that there is a natural and reasonably well-defined domain of questions that can be answered solely by examining 1) a canonical description of an algorithm (or a program in some suitable language - where the latter remains to be specified), and 2) a system of formal symbols (data structures, expressions), together with what Haugeland (1978) calls a "regular scheme of interpretation" for interpreting these symbols as expressing the representational content of mental states (i.e., as expressing what the beliefs, goals, thoughts, and the like are about, or what they represent). Notice that a number of issues have been left unresolved in the above formulation. For example, the notion of a canonical description of an algorithm is left open. We shall return to this question in section 8. Also, we have not said anything about the scheme for interpreting the symbols - for example, whether there is any indeterminacy in the choice of such a scheme or whether it can be uniquely constrained by empirical considerations (such as those arising from the necessity of causally relating representations to the environment through transducers). This question will not be raised here, although it is a widely debated issue on which a considerable literature exists (e.g. Putnam 1978)
(Bolding mine.)

Consequently, Pylyshyn leaves the question of whether a unique interpretation of the symbols can be found explicitly open; the triviality arguments, then, aim to show that there will always be an indeterminacy in the choice of such a scheme.

The reference to Haugeland is interesting. There, a procedure is outlined and criteria suggested in order to identify a black box as computing something---concretely, a chess program. Haugeland gives a quasi-empirical account, essentially suggesting that 'for all practical purposes', the question can be decided by just watching it play---a stipulated interpretation of the machine's output will make more sense if it is interpreted as chess moves (in the sense of leading to an adequate chess performance) than, say, stock-market predictions.

The thing is, though, that triviality arguments show precisely this sort of claim to be dubious: from watching the performance of my box, the hypothetical interpretation that casts it in terms of implementing f is exactly as supported as is the interpretation that casts it in terms of implementing f'. Hence, this sort of move doesn't get around the argument (which, of course, historically led to the development of the strategy against trivialization that e. g. Chalmers follows, namely, the postulation of certain restrictions on what interpretations are supposed to be admissible).

On the semantic view of computation, it is, in fact, these days often straightforwardly admitted that computation is an observer-relative notion. As Shagrir (2006) puts it:
Quote:
That being a computer is a matter of perspective does not entail that computational cognitive science (neuroscience) has no empirical content. In particular, it is consistent with their discovering (a) the representational contents of the brain—which entities it represents, and (b) the operations that are performed over these representations. It might well be, therefore, that cognitive (and brain) science seeks to discover “the computational structure of the brain”: the implemented abstract structure that is defined over the mental (or cerebral) representations.
That is, the representational content is taken as primitive---the mental symbols have a certain representational content, and cognitive science merely engages in the business of discovering how that content is manipulated, by virtue of the syntactic manipulations of the symbols themselves, under the assumption that these manipulations are computational in nature.

I have no problems with this perspective. But it falls short of the more grandiose claim that computation explains everything that goes on in the brain: the process by virtue of which mental symbols acquire their representational content is simply left unanalyzed. This strikes me as a methodologically valid move; indeed, possibly the best one available at present. But ideas according to which computation is all that goes on to produce the mind are simply left dangling, without firm footing.

#516 - RaftPeople - 07-04-2019, 11:22 AM
Quote:
Originally Posted by wolfpup View Post
The aforementioned astounding presumption here seems to be the belief that you're the only one who has thought of this problem, whereas theorists in computational cognition dealt with it long ago.
I think HMHW is being too polite in his response to this.

At every turn he has linked the various positions proposed in this thread, and their counterarguments, to well-known philosophers, their papers, and the Stanford Encyclopedia of Philosophy, among other places.


Is there anywhere in this thread where he implies that he is the only person who has thought of this problem, or even that the idea is his originally?
#517 - wolfpup - 07-04-2019, 01:43 PM
Quote:
Originally Posted by eburacum45 View Post
...but she says (as someone who creates biological models for a living) that you could never create a complete copy of a human personality in a computer.
That may be true, but that's not what the current conversation has come to be about, though. According to HMHW's argument, since every computation requires an interpreter (otherwise he claims, along with Searle et al, that it's only a trivial syntactic operation on symbols), one cannot speak of any cognitive process as being computational at all, which flies in the face of fifty years of cognitive science research and some of its most important foundational theories.
Quote:
Originally Posted by eburacum45 View Post
I still think that a very comprehensive model of a personality should be possible at some time in the distant future, but it would not have true continuity with the original, and if it were conscious at all, it would have the consciousness of an AI, not that of a human.
I tend to agree, though no one really knows, obviously. That was my point in citing the self-trained AlphaGo program, which learned to play expert Go with no human intervention, and whose strategies Go experts have described as "alien" and "like from another dimension".
Quote:
Originally Posted by Half Man Half Wit View Post
You and I read that article very differently, then.
We sure did. And that appears to be a recurring theme here, first with the Britannica article, and then the Pylyshyn paper. Correct me if I'm not representing this fairly, but from my perspective, you cherry-pick a quote that cites the difficulties of the computationalist argument, and somehow conclude from this that the article says the opposite of what it actually says.

The Britannica article, for instance, is plainly a discussion of CTM, and specifically the version proposed by Fodor, and its importance in cognitive science. As such, it naturally cites the difficulties with that position, which you take to be insurmountable, and so conclude that it sides with your position. But that's not what the Britannica article says at all, and I cited it precisely as evidence of how foundationally important CTM has become despite your argument.

This is a good example of that point in a nutshell, and an illustration of how even-handed the article is (bolding mine):
Fodor rightly perceived that something like CRTT, also called the “computer model of the mind,” is presupposed in an extremely wide range of research in contemporary cognitive psychology, linguistics, artificial intelligence, and philosophy of mind.

Of course, given the nascent state of many of these disciplines, CRTT is not nearly a finished theory. It is rather a research program, like the proposal in early chemistry that the chemical elements consist of some kind of atoms. Just as early chemists did not have a clue about the complexities that would eventually emerge about the nature of these atoms, so cognitive scientists probably do not have more than very general ideas about the character of the computations and representations that human thought actually involves. But, as in the case of atomic theory, CRTT seems to be steering research in promising directions.
I don't think either Fodor or Pylyshyn would disagree with any of that, including the critical parts. I think it's quite an accurate assessment.

Quote:
Originally Posted by Half Man Half Wit View Post
Consequently, Pylyshyn leaves the question of whether a unique interpretation of the symbols can be found explicitly open; the triviality arguments, then, aim to show that there will always be an indeterminacy in the choice of such a scheme.

The reference to Haugeland is interesting. There, a procedure is outlined and criteria suggested in order to identify a black box as computing something---concretely, a chess program. Haugeland gives a quasi-empirical account, essentially suggesting that 'for all practical purposes', the question can be decided by just watching it play---a stipulated interpretation of the machine's output will make more sense if it is interpreted as chess moves (in the sense of leading to an adequate chess performance) than, say, stock-market predictions.

The thing is, though, that triviality arguments show precisely this sort of claim to be dubious: from watching the performance of my box, the hypothetical interpretation that casts it in terms of implementing f is exactly as supported as is the interpretation that casts it in terms of implementing f'.
Except that in the cited paper, ISTM that he does explicitly address all your objections. That an observer is required in order to imbue symbols with the appropriate semantics? Nonsense, he says; all that is required is that the formal features of the syntax exactly map, in a one-to-one correspondence, onto the characteristics of some represented domain. That computational states need an observer to fix a unique interpretation? Again, no; one merely accepts that computations can have multiple interpretations, which doesn't matter as long as the above conditions are met. My version of this has been to say right from the beginning of this argument that it doesn't matter because all such interpretations are computationally equivalent.

I should mention just as a side note that although Pylyshyn's views on CTM have evolved over time, he remains to this day a staunch proponent of computationalism as the foundation of cognition, much as the late Jerry Fodor did to the end, while frankly acknowledging its incompleteness as an explanatory theory for all of human behavior. In fact the paper I cited was later expanded into a book that became regarded as one of the preeminent arguments for CTM (read the description).

But moving on now to your objection to Haugeland's example, which I take at your word as I haven't read the paper in question. I note in passing that the chance that an excellent chess-playing program is also making valid stock-market predictions is vanishingly small, which is my argument about the increasing constraints on complex systems, but nevertheless, I understand and accept your box with switches and lights argument at face value. But in those terms, if such a program existed, I would be perfectly happy to use it both to play chess and to make a fortune on the stock market. The two functions would truly be equally valid, due to the incredibly improbable happenstance of the program's formal syntactical operands having a valid mapping to two completely different problem domains.
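To spell out that happenstance with a toy sketch (both mappings below are made up), the same output stream can be read under a chess scheme or a stock-market scheme, and everything the program ever prints would support both readings equally:

Code:
# One output stream, two stipulated schemes of interpretation.
outputs = ["e4", "Nf3", "Bb5"]

as_chess = {"e4": "pawn to e4", "Nf3": "knight to f3", "Bb5": "bishop to b5"}
as_trades = {"e4": "buy ACME", "Nf3": "sell ACME", "Bb5": "short ACME"}

for token in outputs:
    print(as_chess[token], "|", as_trades[token])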

Your demand for an example of a computation that uniquely computes f is not even meaningful, because it demands a computation that operates on semantics rather than syntax, which is contrary to what a Turing-equivalent computation fundamentally is. The real challenge is to show that this is actually an obstacle to CTM, and there is certainly evidence (repeatedly cited, again just above) that it is not.

Further on the matter of your two functions: one can arbitrarily take the interpretational view that it computes both of them along with an infinity of others, as I already showed, or the computational view that it computes neither, but evinces a behavior that produces light patterns. That you might choose to interpret these lights in particular different ways is irrelevant because the lights are the end product of the totality of what the box does. It's like arguing that the human mind has exactly the same problem because I can manipulate the beads on an abacus, and the positions of the beads can have arbitrary multiple interpretations. That becomes a silly attempt to extend the true computational final product in a fallacious way. The objective phenomenal consequence here is that, in response to cognitive processes, I move a bunch of beads around, in the same way that, in response to similar cognitive processes, I speak and walk and write and do many other things in the physical natural world. Some of these acts may require interpretation according to established common conventions and others do not, but all are qualitatively the final end products of my cognition: they are where my mind's computation ends. In many cases, perhaps, they may be the nexus of where someone else's computation begins, but that's immaterial to the argument.

Or to put it more simply, this being a nice warm day, if I ask Jeeves to bring me a gin and tonic, there is no room for interpretation as to the nature of the computation that occurred in Jeeves' mind when he arrives with the refreshing beverage, and this is true whether Jeeves is a human or a robot.
  #518  
Old 07-04-2019, 02:19 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by Half Man Half Wit View Post
That is, the representational content is taken as primitive---the mental symbols have a certain representational content, and cognitive science merely engages in the business of discovering how that content is manipulated, by virtue of the syntactic manipulations of the symbols themselves, under the assumption that these manipulations are computational in nature.

I have no problems with this perspective. But it falls short of the more grandiose claim that computation explains everything that goes on in the brain: the process by virtue of which mental symbols acquire their representational content is simply left unanalyzed. This strikes me as a methodologically valid move; indeed, possibly the best one available at present. But ideas according to which computation is all that goes on to produce the mind are simply left dangling, without firm footing.
I have a terrible fear that we may be converging on some kind of agreement here!

Still, I have to point out that the idea that cognition is a process that operates on mental symbols with representational content (that is, symbols that map to semantic concepts), and that performs syntactic operations on those representations that are at their core computational, is a profound and important insight at the heart of most of cognitive science today.

But "the gradiose claim that computation explains everything that goes on in the brain" is in fact not a claim that anyone has ever made. I thought I was pretty clear on that from the beginning. To quote Fodor again: "There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition. But it hadn’t occurred to me that anyone could suppose that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works".
  #519  
Old 07-04-2019, 11:47 PM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
Quote:
Originally Posted by wolfpup View Post
That may be true, but that's not what the current conversation has come to be about, though. According to HMHW's argument, since every computation requires an interpreter (otherwise he claims, along with Searle et al, that it's only a trivial syntactic operation on symbols), one cannot speak of any cognitive process as being computational at all, which flies in the face of fifty years of cognitive science research and some of its most important foundational theories.
I really don't know what to do any more; it's like I'm shouting into the wind. I've been telling you for over 400 posts now that this isn't what I'm saying.
Quote:
Originally Posted by Half Man Half Wit View Post
Also, none of this threatens the possibility or utility of computational modeling.
I'm saying that there is a specific capacity of minds, the interpretation of symbols---intentionality---which isn't explained by computation. This is a point acknowledged by the Encyclopedia Britannica article, and it's the issue Pylyshyn explicitly tables. Everything else may well be computational.

Thus, since there is an aspect of mind that isn't computational, computation can't explain the mind---completely. But that doesn't mean that computation has no explanatory utility.

As for this:

Quote:
Originally Posted by wolfpup View Post
But "the gradiose claim that computation explains everything that goes on in the brain" is in fact not a claim that anyone has ever made.
The idea is the very foundation of this thread. If what I'm saying is right, then there's no meaning to downloading one's consciousness---if, as the Britannica article puts it, 'the meaning or content of symbols used by ordinary computers is usually derived by stipulation from the intentional states of their programmers', then that digital copy will have no meaningful internal states of its own. It might shuffle around symbolic vehicles in the same way as my brain did, but they have no more reference than just inert marks on paper.

Quote:
Originally Posted by wolfpup View Post
We sure did. And that appears to be a recurring theme here, first with the Britannica article, and then the Pylyshyn paper. Correct me if I'm not representing this fairly, but from my perspective, you cherry-pick a quote that cites the difficulties of the computationalist argument, and somehow conclude from this that the article says the opposite of what it actually says.
Well, from my perspective, you gloss over the problematic bits---which everybody is careful to point out are problematic---to insinuate that the problems I have pointed to are, in fact, long solved, irrelevant, or what have you. But the fact is, they're not; and while opinion may differ on whether they can be, that they are is simply not supported by the current state of the field.

Quote:
The Britannica article, for instance, is plainly a discussion of CTM, and specifically the version proposed by Fodor, and its importance in cognitive science. As such, it naturally cites the difficulties with that position, which you take to be insurmountable, and so conclude that it sides with your position.
The Britannica article sides with my position in that it doesn't claim that syntactic manipulations suffice to pin down semantic meaning, which is something you're trying to say it does. The same goes, incidentally, for the Pylyshyn paper: he says (correctly) that all a computer can react to are distinctions within the semantic content, which must be mapped to syntactic distinctions; i. e. that symbols for 'dog' and 'cat' must differ, both in form, and in how they're manipulated. But this doesn't fix that they mean dog and cat; that dimension is simply irrelevant to the level of syntactic manipulation.

The computation that's being performed, however, is only fully individuated by specifying this dimension (see the paper by Shagrir I cited).

Quote:
That an observer is required in order to imbue symbols with the appropriate semantics? Nonsense, he says; all that is required is that the formal features of the syntax map, in a one-to-one correspondence, onto the characteristics of some represented domain. That computational states need an observer to fix a unique interpretation? Again, no, one merely accepts that computations can have multiple interpretations, which doesn't matter as long as the above conditions are met.
It doesn't matter for the level of what a computational system does with those states, no; but it does matter for individuating computations, and producing mental states. Pylyshyn essentially defers dealing with that question, and makes, to my reading, the same move Shagrir proposes---to accept that the symbols in the mind have some definite interpretation, which yields a uniquely specified computation, without considering how this interpretation comes about.

Quote:
My version of this has been to say right from the beginning of this argument that it doesn't matter because all such interpretations are computationally equivalent.
Again, it doesn't matter for the symbol-manipulating level---but that's the problem, rather than the solution, for it obviously matters for mental states, which are not open to interpretation. When I compute square roots, or sums, I do that, and only that; and for that, the content of my mental representations must be definite. But that's explicitly left open.

Quote:
But in those terms, if such a program existed, I would be perfectly happy to use it both to play chess and to make a fortune on the stock market. The two functions would truly be equally valid, due to the incredibly improbable happenstance of the program's formal syntactical operands having a valid mapping to two completely different problem domains.
(Note that I didn't say that the program could be seen to make stock market predictions, but rather that one can use the machine's performance to exclude that hypothesis; the stock market and a game of chess don't stand in the same relation of structural equivalence as my distinct functions do. But I am saying that there is a huge number of inequivalent computations you can interpret a chess computer as performing, which can be produced from its state diagram in just the way I have shown.)

But more to the point, if computations with such double meaning exist, then we run right into the problem that they seem to be very different from minds: our thoughts, beliefs, and desires are not open to further interpretation; they're perfectly definite. If I want a beer, I want a beer, and not any of an equivalence class of objects bearing the same relations to a set of other functional states of my mind.

Quote:
Your demand for an example of a computation that uniquely computes f is not even meaningful, because it demands a computation that operates on semantics rather than syntax, which is contrary to what a Turing-equivalent computation fundamentally is. The real challenge is to show that this is actually an obstacle to CTM, and there is certainly evidence (repeatedly cited, again just above) that it is not.
Again, the obstacle is in the fact that I can mentally instantiate f, i. e. possess mental representations with the unique representants being the elements of f, which relate to one another as those do. If computationalism only gives me the latter, then it fails to explain how I do that.

Quote:
That you might choose to interpret these lights in particular different ways is irrelevant because the lights are the end product of the totality of what the box does. It's like arguing that the human mind has exactly the same problem because I can manipulate the beads on an abacus, and the positions of the beads can have arbitrary multiple interpretations.

[...]

Or to put it more simply, this being a nice warm day, if I ask Jeeves to bring me a gin and tonic, there is no room for interpretation as to the nature of the computation that occurred in Jeeves' mind when he arrives with the refreshing beverage, and this is true whether Jeeves is a human or a robot.
I wonder if you honestly can't see that these are in contradiction to one another. If, as you claim, behavior individuates computation, then what computation is being done using the abacus is just the shuffling about of beads; if, on the other hand, I use the abacus to compute something, then my mere shuffling around of beads (i. e. my behavior) does not suffice to pin down the computation I am performing, and neither does Jeeves' shuffling around the halls of your mansion.

Or, in other words, if I write down the symbols '23 + 5 = 28', then my computation is not exhausted by the production of these symbols; rather, it consists in operating on the numbers they represent. It's not the symbols that are being computed, but their meanings---that's, after all, why we do computations: we want to know what the sum of the numbers 23 and 5 is, not what numerals are output in response to the string of symbols 23 + 5.
  #520  
Old 07-05-2019, 07:40 AM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Without revisiting yet again the rest of this argumentative quagmire, I just want to highlight a few things that you appear to have misunderstood or otherwise misstated.
Quote:
Originally Posted by Half Man Half Wit View Post
I really don't know what to do any more; it's like I'm shouting into the wind. I've been telling you for over 400 posts now that this isn't what I'm saying.

I'm saying that there is a specific capacity of minds, the interpretation of symbols---intentionality---which isn't explained by computation. This is a point acknowledged by the Encyclopedia Britannica article, and it's the issue Pylyshyn explicitly tables. Everything else may well be computational.

Thus, since there is an aspect of mind that isn't computational, computation can't explain the mind---completely. But that doesn't mean that computation has no explanatory utility.
But intentionality is absolutely at the core of what cognition fundamentally is! Fodor summed it up neatly in a single sentence (bolding mine): "There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea -- that intentional processes are syntactic operations defined on mental representations -- is strikingly elegant".

So it doesn't matter if you acknowledge that "everything else" about the brain may be computational. It seems to me that if you claim "that there is a specific capacity of minds, the interpretation of symbols---intentionality---which isn't explained by computation" then it follows that no aspect of cognition can be explained by computation, which is precisely how I characterized your argument. And indeed you've been arguing against CTM throughout this thread for just that reason, such as here:
I've been presenting a widespread doubt about the computational theory of mind
Quote:
Originally Posted by Half Man Half Wit View Post
The idea is the very foundation of this thread. If what I'm saying is right, then there's no meaning to downloading one's consciousness---if, as the Britannica article puts it, 'the meaning or content of symbols used by ordinary computers is usually derived by stipulation from the intentional states of their programmers', then that digital copy will have no meaningful internal states of its own. It might shuffle around symbolic vehicles in the same way as my brain did, but they have no more reference than just inert marks on paper.
Yes, that was the original idea of the thread, but then it segued into a broader discussion of CTM, and that's what I'm defending. My views on uploading the mind or creating a digital consciousness are purely speculative and it's not something that I or anyone can factually defend.

Quote:
Originally Posted by Half Man Half Wit View Post
Well, from my perspective, you gloss over the problematic bits---which everybody is careful to point out are problematic---to insinuate that the problems I have pointed to are, in fact, long solved, irrelevant, or what have you. But the fact is, they're not; and while opinion may differ on whether they can be, that they are is simply not supported by the current state of the field.
I've never claimed that the problem relating to the semantics of mental representations is "long solved", "irrelevant", or anything else of that sort. What I've said is that it was recognized as an issue long ago, so this is not a novel argument or a surprise to anyone, but it has not generally been seen as an obstacle to the development of robust and well established computational theories of cognition.
  #521  
Old 07-05-2019, 10:47 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
Quote:
Originally Posted by wolfpup View Post
But intentionality is absolutely at the core of what cognition fundamentally is! Fodor summed it up neatly in a single sentence (bolding mine): "There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea -- that intentional processes are syntactic operations defined on mental representations -- is strikingly elegant".
That's compatible with computationalism not giving an account of how mental representation comes about, though---you take it that there are mental representations (whatever, exactly, those are), and that they're manipulated via computation. That's the perspective I take Shagrir to take, and he applies it to a case study of the computational explanation of vision---which is something that seems to genuinely produce novel insight, and which is just the kind of application of computation to cognition I have absolutely no problem with.

Think about the notion of mass: it's absolutely fundamental to Newtonian physics, but that theory itself gives no account of it; it's taken as a primitive property of matter. That doesn't mean that the theory is useless---even without giving an account of what mass is, it is greatly illuminating on the subject of how mass behaves. The same can be true of cognitive science: without giving an account of what mental representations are, it can greatly illuminate how they are manipulated to produce the workings of our minds.

Quote:
Yes, that was the original idea of the thread, but then it segued into a broader discussion of CTM, and that's what I'm defending.
I have argued, and still am arguing, one thing only: there's at least one capacity of minds that isn't explained by computation, and that's intentionality (I think the same is true of phenomenal experience---and I think the two are somewhat interwoven---but that's another matter). I don't believe that this marks the downfall of computational modeling; but it does mean that eventually, we'll have to go beyond the notion of computation to explain the mind.

Quote:
I've never claimed that the problem relating to the semantics of mental representations is "long solved", "irrelevant", or anything else of that sort. What I've said is that it was recognized as an issue long ago, so this is not a novel argument or a surprise to anyone, but it has not generally been seen as an obstacle to the development of robust and well established computational theories of cognition.
Then whatever do you mean when you say things like this:
Quote:
Originally Posted by wolfpup View Post
Another way of saying this is that your f versus f' challenge is interesting but irrelevant, as I have tried to point out I don't know how many times now.
The question of whether the box instantiates f or f' is exactly the problem of the semantics of the symbols it uses---which, on computationalism, is the semantics of mental symbols.
  #522  
Old 07-05-2019, 02:29 PM
GIGObuster's Avatar
GIGObuster is online now
Charter Member
 
Join Date: Jul 2001
Location: Arizona
Posts: 29,229
Quote:
Originally Posted by Half Man Half Wit View Post
The question of whether the box instantiates f or f' is exactly the problem of the semantics of the symbols it uses---which, on computationalism, is the semantics of mental symbols.
There you go again, eating your cake box and still having it too.
  #523  
Old 07-05-2019, 03:03 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by Half Man Half Wit View Post
That's compatible with computationalism not giving an account of how mental representation comes about, though---you take it that there are mental representations (whatever, exactly, those are), and that they're manipulated via computation. That's the perspective I take Shagrir to take, and he applies it to a case study of the computational explanation of vision---which is something that seems to genuinely produce novel insight, and which is just the kind of application of computation to cognition I have absolutely no problem with.
Wait a sec -- reality check here! You've repeatedly told us that the widespread acceptance of CTM is irrelevant, that Fodor was wrong, that widely accepted theories have been wrong before, and that CTM in fact amounts to being just like the caloric theory of heat (I'm surprised you didn't compare CTM to phlogiston and alchemy!). Just a few snippets that I had the patience to look up -- it's particularly instructive to go back to your claims in some of the earliest posts:

Putnam long since dismantled CTM, and the rest of the world is just slow to catch up

Computational theories of mind imply infinite regress

that the utility of CTM is merely as a kind of model

Equating CTM with the archaic and discredited theory of caloric, and saying that I got myself "all in a huff about [your] disagreement with Fodor", with whom you now apparently agree after all.

And even more recently, "I've been presenting a widespread doubt about the computational theory of mind"

Now all of a sudden you're telling us that it's a wonderfully useful theory with a great deal of explanatory power! Of course I understand the point that a theory can be useful and provide great insights even if it's in some respects incomplete (fails to account for certain primitive properties), but surely you can see that it's hard to avoid the conclusion that there's a great deal of backtracking going on here relative to what you were saying before.

It also raises the serious question of just exactly what you now think CTM is. Is it just a useful model of something that can't really exist in reality? Or does it describe a literal reality? Because if the latter, then no amount of equivocation can avoid the conclusion that your argument that the processes operating on mental representations have to be non-computational -- because said representations possess intrinsic semantic properties -- has to be discarded as simply wrong, because it's incompatible with that view. And we find, in fact, that many if not most proponents of CTM endorse that latter view: In Computation and Cognition, Pylyshyn argues that computation must not be viewed as just a convenient metaphor for mental activity, but as a literal empirical hypothesis.

Quote:
Originally Posted by Half Man Half Wit View Post
I have argued, and still am arguing, one thing only: there's at least one capacity of minds that isn't explained by computation, and that's intentionality (I think the same is true of phenomenal experience---and I think the two are somewhat interwoven---but that's another matter). I don't believe that this marks the downfall of computational modeling; but it does mean that eventually, we'll have to go beyond the notion of computation to explain the mind.
I was just explaining how the argument has shifted from the currently unanswerable question of uploading the entirety of the human mind to an argument about the CTM, and that the latter is what I was defending. I make no claim that the latter in any way implies the former, mainly on the grounds that CTM is manifestly incomplete.

Quote:
Originally Posted by Half Man Half Wit View Post
Then whatever do you mean when you say things like this:

The question of whether the box instantiates f or f' is exactly the problem of the semantics of the symbols it uses---which, on computationalism, is the semantics of mental symbols.
I don't regard it as the same problem. In the seminal paper I cited on CTM, later expanded into a book on the subject, Pylyshyn freely acknowledges that the same computational states can represent multiple different interpretations in just this way, while still promoting a strong version of CTM. The difficulties lie in explaining the intrinsic semantics of mental representations in the mind and linking them to physical processes. Most cognitive scientists who support CTM would reject your claim that your simplistic example in any way proves that such processes cannot possibly be computational.
  #524  
Old 07-05-2019, 08:17 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
You've repeatedly told us that the widespread acceptance of CTM is irrelevant, that Fodor was wrong,...
Did you know that Fodor himself thought that the cognitive mind was not computational? On his view, only the modules that perform specific functions are computational, not the higher-level mind that integrates the results of the various modules and makes decisions and guides behavior.

From this paper (written by a computationalist) https://schneiderwebsite.com/uploads...ought_ch.1.pdf:
"Jerry Fodor, the main philosophical advocate of LOT and the related computational theory of mind (CTM), claims that while LOT is correct, the cognitive mind is likely noncomputational (2000, 2008)"


From this page https://www.iep.utm.edu/fodor/#H7:
Quote:
[A] cognitive science that provides some insight into the part of the mind that isn’t modular may well have to be different, root and branch, from the kind of syntactical account that Turing’s insights inspired. It is, to return to Chomsky’s way of talking, a mystery, not just a problem, how mental processes could be simultaneously feasible and abductive and mechanical. Indeed, I think that, as things now stand, this and consciousness look to be the ultimate mysteries about the mind. (2000, p. 99).

Last edited by RaftPeople; 07-05-2019 at 08:18 PM.
  #525  
Old 07-05-2019, 08:43 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by RaftPeople View Post
Did you know that Fodor himself thought that the cognitive mind was not computational? On his view, only the modules that perform specific functions are computational, not the higher-level mind that integrates the results of the various modules and makes decisions and guides behavior.

From this paper (written by a computationalist) https://schneiderwebsite.com/uploads...ought_ch.1.pdf:
"Jerry Fodor, the main philosophical advocate of LOT and the related computational theory of mind (CTM), claims that while LOT is correct, the cognitive mind is likely noncomputational (2000, 2008)"


From this page https://www.iep.utm.edu/fodor/#H7:
Fodor's modularity argument precisely parallels Pylyshyn's argument for "cognitive impenetrability", wherein things like the Müller-Lyer illusion persist even when it's intellectually known that the lines are of identical length. The converse is also true: the illusion does not exist in mental images. Both phenomena are actually supportive of CTM.
  #526  
Old 07-05-2019, 10:38 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
Fodor's modularity argument precisely parallels Pylyshyn's argument for "cognitive impenetrability", wherein things like the Müller-Lyer illusion persist even when it's intellectually known that the lines are of identical length. The converse is also true: the illusion does not exist in mental images. Both phenomena are actually supportive of CTM.
That didn't answer the question that I asked.

Fodor thinks that human reasoning is NOT computational due to things like our ability to perform abductive reasoning.

More quotes about his views:
http://citeseerx.ist.psu.edu/viewdoc...=rep1&type=pdf
"Slightly more precisely, he maintains that there is a fundamental tension between the local, syntactically determined character of classical computation and the global character of much human reasoning, especially abductive inference and planning."


https://pdfs.semanticscholar.org/e94...d288f3268e.pdf
"In The mind doesn’t work that way, Jerry Fodor argues that CTM has problems explaining abductive or global inference, but that the New Synthesis offers no solution, since massive modularity is in fact incompatible with global cognitive processes."


My question to you:
Were you aware that Fodor held these views?
Do you think that Fodor was right that global inference can't be computational?
Or do you think that Fodor was wrong?
  #527  
Old 07-06-2019, 04:32 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
I have to say that these last few exchanges have left me a little confused on what, exactly, your position is. For instance, you claim:

Quote:
Originally Posted by wolfpup View Post
Your demand for an example of a computation that uniquely computes f is not even meaningful, because it demands a computation that operates on semantics rather than syntax, which is contrary to what a Turing-equivalent computation fundamentally is.
And further:

Quote:
Originally Posted by wolfpup View Post
But intentionality is absolutely at the core of what cognition fundamentally is!

[...]

So it doesn't matter if you acknowledge that "everything else" about the brain may be computational. It seems to me that if you claim "that there is a specific capacity of minds, the interpretation of symbols---intentionality---which isn't explained by computation" then it follows that no aspect of cognition can be explained by computation
But also:

Quote:
Originally Posted by wolfpup View Post
Good, and in fact computationalism is not just an explanation for cognition, it's pretty much the only one we have, and as I keep saying, constitutes a major foundation for cognitive science.
These seem to be in flagrant contradiction to me. You seem to make the following three claims:
  (a) Turing-machine equivalent computation does not operate on the semantic level.
  (b) The semantic level is crucial to any serious theory of cognition---without it, such a theory doesn't even get off the ground.
  (c) Computationalism is a perfectly fine explanation of cognition, and in fact, the only one currently worth taking seriously.

How do you reconcile these?

Quote:
Originally Posted by wolfpup View Post
Wait a sec -- reality check here! You've repeatedly told us that the widespread acceptance of CTM is irrelevant, that Fodor was wrong, that widely accepted theories have been wrong before, and that CTM in fact amounts to being just like the caloric theory of heat (I'm surprised you didn't compare CTM to phlogiston and alchemy!).
Again, I fail to see the problem. I stand by all I said, but, as I have been at pains to point out, I also don't think that this hampers the utility of computational modeling in any way. I'll try to be clear about this for one last time:
  1. Yes, the CTM is wrong---like Newtonian mechanics. In particular, the claim of computational sufficiency, which, as Chalmers puts it, "holds that the right kind of computational structure suffices for the possession of a mind" (i. e. the position you claim nobody has ever seriously held), is false, as intentionality is a faculty minds have that can't be performed computationally.
  2. Yes, the CTM is useful---like Newtonian mechanics. Within its domain of applicability, it provides genuine insight, and may even be indispensable (again, like Newtonian mechanics).

There is no contradiction whatsoever between the two.

Quote:
It also raises the serious question of just exactly what you now think CTM is. Is it just a useful model of something that can't really exist in reality? Or does it describe a literal reality? Because if the latter, then no amount of equivocation can avoid the conclusion that your argument that the processes operating on mental representations have to be non-computational -- because said representations possess intrinsic semantic properties -- has to be discarded as simply wrong, because it's incompatible with that view.
This doesn't follow. The processes that operate on mental representations can be fully computational, thus making the brain a computer in this sense, while whatever imbues these representation with content is not computational.

There are, evidently, computers whose symbols do not have any objective content. As Shagrir puts it:
Quote:
Originally Posted by Shagrir
digital electronic systems, e.g., desktops, the paradigm cases of computing systems, operate on symbols whose content is, indisputably, observer-dependent. That the states of Deep Junior represent possible states of chessboards is an interpretation we have ascribed to them; we could just as well have ascribed to them very different content.
The brain, however, then is a computer whose symbols do have an objective content:
Quote:
Our cognitive states are usually classified as representations of the former sort [whose content is observer-independent]: whether I believe that Bush is a good president is said to be independent of what others think or what they take me to believe.
Explaining the latter in terms of the former is hopeless; but leaving out the question of how representational content arises, we can consider the manipulation of these symbols, and, if those manipulations respect certain properties of the symbols, we can explain how computational manipulations yield transformations of mental content---i. e., cognition. This neither needs to nor aims to explain how that content arises.

Think about it as the distinction between the soundness and the validity of an argument. Take an argument of the following form:
  1. All As are B.
  2. X is A.
  3. --> X is B.

It is valid by virtue of its syntactical structure; that is, its validity is independent of what the symbols A, B, and X mean. It also means that in carrying out the argument, we learn something about X---we have concluded something. Moreover, we can study what makes arguments valid, without having any notion of what the symbols used mean, nor, how they acquire this meaning. This is a completely worthwhile field of study, a large part of the science of logic.

Yet, in order to decide whether an argument says something true---whether it is sound---we need to know what the symbols mean. Only if the premises are true, is its conclusion guaranteed to be likewise. So, if 'A' is 'swan', 'B' is 'white', and 'X' is 'Socrates', then, we have not made a sound argument---there are black swans. But, on the other hand, if 'A' is 'humans', 'B' is 'mortal', and 'X' is 'Socrates', then the argument is perfectly sound---and we learn something about Socrates.
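
To put the same point in code (a toy sketch of my own): the rule that licenses the conclusion inspects only the shape of the premises, and whether the result is sound depends entirely on the interpretation supplied from outside.

Code:
# Toy syllogism engine: validity is checked purely syntactically.
def conclude(premise1, premise2):
    kind1, a1, b = premise1     # ('all', A, B): "All As are B"
    kind2, x, a2 = premise2     # ('is', X, A):  "X is A"
    if kind1 == 'all' and kind2 == 'is' and a1 == a2:
        return ('is', x, b)     # "X is B" -> follows whatever A, B, X mean
    return None

# The same rule fires under both interpretations; only one argument is sound.
print(conclude(('all', 'human', 'mortal'), ('is', 'Socrates', 'human')))
print(conclude(('all', 'swan', 'white'), ('is', 'Socrates', 'swan')))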

Computation and cognition may then stand in the same relationship. Computationalism tells us how the content of our minds is manipulated, including e. g. what sort of conclusions we draw from prior knowledge, without, however, telling us how the symbols that are being manipulated acquire their content.

This is the view that's largely presupposed in the semantic view of computation (more or less explicitly). As Piccinini puts it:
Quote:
Originally Posted by Piccinini
The received view is that ‘‘[t]here is no computation without representation’’ (Fodor 1981, p. 180). The reason usually given is that computational states are individuated, or taxonomized, by their semantic properties. The same point is sometimes made by saying that computational states have their content essentially.
That is, that a computational state is representational is taken as an essential---irreducible and not further analyzable---property of that state.

Piccinini is explicit about the consequent circularity of trying to 'naturalize' mental content in computational terms on the semantic view:
Quote:
One problem with naturalistic theories of content that appeal to computational properties of mechanisms is that, when conjoined with the semantic view of computational individuation, they become circular. For such theories explain content (at least in part) in terms of computation, and according to the semantic view, computational states are individuated (at least in part) by contents.
Quote:
Originally Posted by wolfpup View Post
I don't regard it as the same problem. In the seminal paper I cited on CTM, later expanded into a book on the subject, Pylyshyn freely acknowledges that the same computational states can represent multiple different interpretations in just this way, while still promoting a strong version of CTM.
They're exactly the same problem---how symbolic vehicles become associated with their semantic content.

The view of the computational theory Pylyshyn takes in that article is compatible with the view as outlined above (which, at least these days, seems to explicitly be regarded as the 'received view')---that the way in which the symbolic vehicles acquire their content is left open, but that their manipulation is done via computations, in order for which the syntactic properties must mirror some structure of the semantic properties---as in the case of logical arguments.

Quote:
Most cognitive scientists who support CTM would reject your claim that your simplistic example in any way proves that such processes cannot possibly be computational.
I hope it's clear now that this isn't the case. If not, Shagrir uses a much simpler example to make the point, that of a 'brown-cow' neuron, which spikes if its input neurons (a 'brown'-neuron and a 'cow'-neuron) both spike:

Quote:
Originally Posted by Shagrir
To better understand how content constrains computational identity, consider again the brown–cow cell. Assume that receiving/emitting 0–50 mV can be further analyzed: the cell emits 50–100 mV when it receives over 50 mV from each input channel, but it turns out that it emits 0–25 mV when it receives under 25 mV from each input channel, and 25–50 mV otherwise. Now assign “1” to receiving/emitting 25–100 mV and “0” to receiving/emitting 0–25 mV. Under this assignment the cell is an OR-gate. This means that the brown–cow cell simultaneously implements, at the very same time, and by means of the very same electrical activity, two different formal structures. One structure is given by the AND-gate and another by the OR-gate.

Now, each of these abstract structures is, potentially, computational. But only the AND-gate is the computational structure of the system with respect to its being a brown-cow cell, namely, performing the semantic task of converting information about brown things and cow things into information about brown cow things. What determines this, that is, picks out this AND-structure as the system’s computational structure, given that OR-structure abstract from the very same discharge? I have suggested that it is content that makes the difference: discharges that are greater than 50mV correspond to certain types of content (of cows, browns, and brown cows) and the discharges that are less than 50 mV corresponds to another (their absence). Thus the identity conditions of the process, when conceived as computational, are determined, at least partly, by the content of the states over which it is defined.
Consequently, whether the cell is an AND- or OR-gate is decided by the representational content of its symbolic vehicles (voltages). Likewise, whether my box implements f or f' is determined by the semantic content of lamp-lights and switch-states. If these vehicles should somehow objectively carry semantic content ('essentially'), then there would be no question regarding which function is being computed.
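
If it helps, here is Shagrir's cell rendered as a minimal sketch (the voltage bands are taken from the quote above; the code itself is my own): one and the same input-output behavior reads as an AND-gate under one symbol assignment and as an OR-gate under another.

Code:
# The cell's physical behavior, per the quoted description (values in mV).
def cell(v1, v2):
    if v1 > 50 and v2 > 50:
        return 75               # emits in the 50-100 mV band
    if v1 < 25 and v2 < 25:
        return 10               # emits in the 0-25 mV band
    return 40                   # emits in the 25-50 mV band

bit_A = lambda v: 1 if v > 50 else 0    # "1" = over 50 mV  -> AND-gate
bit_B = lambda v: 1 if v >= 25 else 0   # "1" = 25-100 mV   -> OR-gate

for v1, v2 in [(10, 10), (10, 75), (75, 10), (75, 75)]:
    out = cell(v1, v2)
    print((bit_A(v1), bit_A(v2)), '->', bit_A(out),
          '|', (bit_B(v1), bit_B(v2)), '->', bit_B(out))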

Last edited by Half Man Half Wit; 07-06-2019 at 04:35 AM.
  #528  
Old 07-06-2019, 06:04 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
Nevermind.

Last edited by Half Man Half Wit; 07-06-2019 at 06:06 AM.
  #529  
Old 07-06-2019, 11:19 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
HMHW, as I read about this topic, I see things that seem like simple errors, but I know these are smart people so it's probably not as simple as it appears, can you shed some light?

1 - Computation vs Neural Networks
Given that ANN's can be computed on the computers we have today, why do people seem to make a distinction between computation and neural networks, as if ANN's are not considered to be computational?

2 - Fodor's problems with computationalism (abduction/global reasoning)
His position must be based on an alternate definition of computation (or something) because, although I can follow his argument as presented (no wind belief requires context), it seems pretty easy to engineer a system that works around the problem using today's computers.

Am I missing something?
  #530  
Old 07-06-2019, 11:56 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
HMHW, question related to the argument you've presented (not arguments related to qualia and experience, but related to the ability to assign meaning to symbols to perform computations):
Let's pretend we ignore all the counter arguments and set out to build our general AI using today's style of computers (but with more power).

If the argument you've presented is correct, in what way will our effort fail?
1 - Externally it may appear to work perfectly but internally it will be lacking something?
2 - Externally it will never be able to mimic the capabilities of a human?
3 - Externally it will be able to mimic the capabilities of a human, but the cost and energy requirements will be astronomical?
  #531  
Old 07-06-2019, 11:58 AM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by Half Man Half Wit View Post
These seem to be in flagrant contradiction to me. You seem to make the following three claims:
  (a) Turing-machine equivalent computation does not operate on the semantic level.
  (b) The semantic level is crucial to any serious theory of cognition---without it, such a theory doesn't even get off the ground.
  (c) Computationalism is a perfectly fine explanation of cognition, and in fact, the only one currently worth taking seriously.

How do you reconcile these?
First of all, the second part of (b) is wrong. You yourself made the point that one can develop a comprehensive computational theory while deferring a full account of the semantics of mental representations. And if you're going to claim you didn't, I'll explicitly make that point now.

Secondly, your attempt to imply that the first part of (b) constitutes an irresolvable dilemma is also wrong, as efforts to understand this problem have been part of the evolving history of cognitive science for decades. Fodor's important book, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, is a good example of that progress. To quote from a review of it, "... it not only defends our "commonsense" psychological practice of ascribing content or meaning to mental states (i.e., our assuming that they represent or are about objects other than themselves), but also provides the beginnings of a causal account of how such intentional states are even possible ...".

So there's nothing to reconcile once one corrects your mistaken statement in the second part of (b), and indeed the Britannica article I cited acknowledges all three points without seeing any apparent contradiction. I would also note that all three points are orthodox in most formulations of CTM including Fodor's Representational Theory of Mind, that point (c) is practically a verbatim quote from Fodor's most recent book, and that Fodor himself was widely regarded as the most important philosopher of mind of the late 20th and early 21st centuries. So if you think those three points together are some kind of "gotcha", you need to reexamine your premises.

Quote:
Originally Posted by Half Man Half Wit View Post
Again, I fail to see the problem. I stand by all I said, but, as I have been at pains to point out, I also don't think that this hampers the utility of computational modeling in any way. I'll try to be clear about this for one last time:
  1. Yes, the CTM is wrong---like Newtonian mechanics. In particular, the claim of computational sufficiency, which, as Chalmers puts it, "holds that the right kind of computational structure suffices for the possession of a mind" (i. e. the position you claim nobody has ever seriously held), is false, as intentionality is a faculty minds have that can't be performed computationally.
  2. Yes, the CTM is useful---like Newtonian mechanics. Within its domain of applicability, it provides genuine insight, and may even be indispensable (again, like Newtonian mechanics).

There is no contradiction whatsoever between the two.
Again, a significant point of correction is in order. There is certainly a contradiction between your view that CTM is "wrong" but can still be a useful model, and the explicit statement I cited earlier that it's not just a model but intended to be a literal description of cognition.

But again, to avoid misunderstanding, no one claims that CTM alone is a complete description of everything about the mind. Its central premise is that most cognitive processes are computational in every meaningful sense of the word, just as defined by Turing and classically in computer science.

I actually think your analogy is a good one, but not in the way you intended. In order for the analogy to accurately reflect the kind of claim you're making, there would have to have been a fundamental theoretical flaw in Newtonian theory observable right from the start, such that everyone knew the theory was wrong but used it anyway because they had nothing better. But in fact classical mechanics had wide and incontrovertible empirical support and as such can be regarded as not just a useful model but as empirically correct, and continues to be used to this day. This is so because the refinements introduced by theories of relativity and quantum mechanics are not relevant to common everyday experience, and because classical mechanics contains fundamental truths like Newton's three laws of motion. And so it is with CTM, and I believe always will be, even as it gets refined.

Since we're obviously never going to agree on any of this, and have each said about all that can usefully be said, I suggest we end this now. I do thank you for the large amounts of time you put into this, and I do understand your point; I just don't see it as an obstacle to CTM. I think the Britannica article's statement that "no remotely adequate proposal has yet been made" for bridging the gap between the syntax of computational symbols and the intentionality of mental representations might be a bit pessimistic; as the article itself notes, progress is being made on a number of different research fronts. A resolution to this problem would render moot your criticism, and that of Chalmers, Searle, Dreyfus, and other skeptics, who typically reject not only CTM but the whole notion of "real" computational intelligence, which I find to be a rather sadly pessimistic outlook. Fortunately we've already seen that Searle and Dreyfus and their ilk have been wrong about a lot of this.
  #532  
Old 07-06-2019, 12:48 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by RaftPeople View Post
My question to you:
Were you aware that Fodor held these views?
Do you think that Fodor was right that global inference can't be computational?
Or do you think that Fodor was wrong?
Any response to this wolfpup?

If Fodor is correct then global reasoning/abduction is non-computational, and that is arguably the most important aspect of human intelligence.
  #533  
Old 07-06-2019, 12:58 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by RaftPeople View Post
My question to you:
Were you aware that Fodor held these views?
Do you think that Fodor was right that global inference can't be computational?
Or do you think that Fodor was wrong?
Your point might have some merit against an argument that everything about the mind can be described computationally, but no one here has made that argument. As to Fodor's views, the quotes I cited here, and others that I cited in previous conversations years ago, made it abundantly clear that not only did he not believe computational theory could provide a complete account of the mind, he didn't believe it could provide (direct quote) "more than a fragment of a full and satisfactory cognitive psychology", either. So the things you cite are in no way inconsistent with the argument I'm making about the central role of CTM in explaining cognition as part of an overall theory of mind.
  #534  
Old 07-06-2019, 01:48 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
Your point might have some merit against an argument that everything about the mind can be described computationally, but no one here has made that argument. As to Fodor's views, the quotes I cited here, and others that I cited in previous conversations years ago, made it abundantly clear that not only did he not believe computational theory could provide a complete account of the mind, he didn't believe it could provide (direct quote) "more than a fragment of a full and satisfactory cognitive psychology", either. So the things you cite are in no way inconsistent with the argument I'm making about the central role of CTM in explaining cognition as part of an overall theory of mind.
So either Watson doesn't and won't ever perform global reasoning because Watson runs on a computer and global reasoning is non-computational, or Fodor was wrong about that point.

Which do you believe?
  #535  
Old 07-06-2019, 02:11 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
I believe that you should stop trying to create incoherent "gotcha"s that have no relevance to the discussion.
  #536  
Old 07-06-2019, 06:10 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
I believe that you should stop trying to create incoherent "gotcha"s that have no relevance to the discussion.
You think that Fodor's position regarding global reasoning not being computational is not relevant to the discussion?

It seems like a pretty reasonable point and hardly a "gotcha" based on some trickery. Fodor flat out says that global reasoning is not computational. You have been posting in a way to make it seem like you think things like global reasoning IS computational, but it's not really clear if you believe that or not.

None of these responses would be a big deal; I'm not sure why you are so reluctant to be pinned down:
1 - I think Fodor was right, global reasoning can't be computational for the reasons he states
2 - I think Fodor was right on other points but I disagree that global reasoning can't be computational; I think he was wrong on that point
3 - I'm not sure, I've never really read his arguments about why he thinks global reasoning can't be computational


If you choose position #1 then I would argue why I think that position is wrong (I believe it's a simple engineering problem to work around his global context issue).

If you choose position #2 then I would agree with you that Fodor was wrong about his global reasoning argument.
  #537  
Old 07-07-2019, 04:59 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
Quote:
Originally Posted by RaftPeople View Post
HMHW, as I read about this topic, I see things that seem like simple errors, but I know these are smart people so it's probably not as simple as it appears, can you shed some light?

1 - Computation vs Neural Networks
Given that ANN's can be computed on the computers we have today, why do people seem to make a distinction between computation and neural networks, as if ANN's are not considered to be computational?
The issue here isn't one of what, but one of how. ANNs are certainly computational, but they're considered a different form of computation---the term most often applied is 'sub-symbolic', as opposed to the symbol-manipulating Turing machine kind of computation. Both approaches are known to be equivalent in power, but that doesn't automatically imply that they're equally well suited for giving rise to minds.

To illustrate, AI got its start with so-called 'expert systems'---essentially, long lists of 'if-then-else' statements (this often comes under the header 'good, old-fashioned AI', or GOFAI, these days). In principle, you can rewrite every program in such terms, at least approximately. Yet nobody these days thinks it's a good approach to AI anymore.
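
To give a flavor of the style (a toy of my own, not any real expert system):

Code:
# GOFAI in miniature: all the 'knowledge' is explicit if-then rules.
def classify(animal):
    if animal.get('has_feathers'):
        return 'bird' if animal.get('flies') else 'flightless bird'
    if animal.get('has_whiskers') and animal.get('meows'):
        return 'cat'
    return 'unknown'

print(classify({'has_whiskers': True, 'meows': True}))  # -> cat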

Rather, it's largely been supplanted by machine learning techniques, such as deep neural nets and the like. A sizable contingent of philosophers have followed suit, and argue that their superior performance in this area is grounds for adopting them as a better explanatory model of the mind. This is a break with the computational theory as advocated by Fodor, since that is bundled up with the representational theory of mind, whereas in neural networks, you don't have any immediate notion of representation in that sense---i. e. no symbols being tokened under certain circumstances (hence 'sub-symbolic'). Furthermore, the rules according to which a neural network does its thing generally are implicit, and might be impractical to state explicitly---why an ANN categorized a certain image as that of a 'cat' is often largely opaque. Finally, the computation is distributed throughout the layers of the network, rather than, as in classical computationalism, modularised.

This issue, incidentally, has prompted the move towards 'explainable AI'---AI which has a model of its domain, and hence, can tell you that it's identified the thing on the picture as a cat by pointing to the presence of whiskers, a tail, four legs, and the like. DARPA calls this the 'third wave' of AI (the video is well worth watching).

There are those who believe that human-style thinking will require both approaches to integrate---and I think a good case can be made for that, by pointing to the dual process theory in psychology: in brief, there are two cognitive systems at work in the human brain, often called simply System 1 and System 2. System 1 is the sort of automatic, implicit, fast and generally non-conscious 'intuitive' assessment of situations and stimuli, whereas System 2 is deliberate, conscious, step-by-step reasoning towards a conclusion. So effectively, System 1 seems to work a lot like a neural network, whereas System 2 has an explicit modeling component.

Quote:
2 - Fodor's problems with computationalism (abduction/global reasoning)
His position must be based on an alternate definition of computation (or something) because, although I can follow his argument as presented (no wind belief requires context), it seems pretty easy to engineer a system that works around the problem using today's computers.
I'm less sure about this.

In brief, I think the issue is similar to the so-called frame problem: roughly, an AI may do well in an artificially limited environment, by simply having explicit rules about its elements (think, again, GOFAI). But the real world is not so limited: there is an infinite variety of things that an AI let loose might encounter. How to cope with this variety is the frame problem.

Now, a similar issue exists with the extent of background information a system capable of addressing even a relevant part of the world must have. In order for its decision-making process to remain tractable, it can only take into account a subset of that background knowledge at a given time; otherwise, the computation it needs to perform simply wouldn't be feasible. This then leads to the necessity of modularisation, with separated (encapsulated) cognitive systems processing separate parts of the problem (in parallel).
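
A back-of-the-envelope way to see the tractability worry (the numbers are purely illustrative, my own): if relevance is unrestricted, the candidate sets of background beliefs that might bear on a given inference grow exponentially with the size of the knowledge base.

Code:
# Unrestricted relevance: any subset of the agent's n background beliefs
# might, in principle, bear on a new inference.
for n in (10, 20, 40, 80):
    print(n, 'beliefs ->', 2 ** n, 'candidate relevance sets')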

But the mind doesn't really seem to work that way. Rather, it seems to be integrated, able to freely switch between content that ought to be associated with different modules. So how does this integration work? And even if we get that integration to work, where does creativity come from? How does the combination of domain-specific knowledge result in something new, which might not even cleanly map onto any specific domain (as in, for instance, coming up with entirely new concepts both in science and fiction)?

And then, finally, how are these new contents evaluated? If we have a certain set of domain-specific modules that each 'care' about their specific area, then, even if we somehow integrate their contents, and even if we are capable of producing something new from them, what module could rise to the task of evaluating whether we've come up with something appropriate? The new content doesn't necessarily map onto the area of specialization of any of the given modules, so which one has the required capacities?

So it seems that we need modularization of the mind to make its capacities computationally tractable; but certain of its capacities seem ill-suited to a modular architecture. Fodor then claims that this is an issue that can't be resolved (at any rate, within the sort of computationalism he defends). Interestingly, it's been proposed that just the kind of hybrid systems I outlined above may be what's needed to get around this problem.

I don't really have any settled opinion on whether I consider this to be a real problem, or if so, if it's fatal to (classical) computationalism.

Quote:
Originally Posted by RaftPeople View Post
HMHW, question related to the argument you've presented (not arguments related to qualia and experience, but related to the ability to assign meaning to symbols to perform computations):
Let's pretend we ignore all the counter arguments and set out to build our general AI using today's style of computers (but with more power).

If the argument you've presented is correct, in what way will our effort fail?
1 - Externally it may appear to work perfectly but internally it will be lacking something?
2 - Externally it will never be able to mimic the capabilities of a human?
3 - Externally it will be able to mimic the capabilities of a human, but the cost and energy requirements will be astronomical?
My best guess is that if it's able to effectively mimic the performance of a human, it won't be doing so by means of computation. That doesn't exclude that anything instantiating the right sort of computation (under some interpretation) also instantiates the right sort of mental properties, merely that those don't reduce to the computation. That is, a hypothetical conscious robot will not be conscious via instantiating a certain computation, but via being a physical system with the right sort of structure.

Quote:
Originally Posted by wolfpup View Post
First of all, the second part of (b) is wrong. You yourself made the point that one can develop a comprehensive computational theory while deferring a full account of the semantics of mental representations.
Indeed, it was me who proposed that you could have a meaningful computational theory of cognition without the aim to account for semantics:
Quote:
Originally Posted by Half Man Half Wit View Post
I'm saying that there is a specific capacity of minds, the interpretation of symbols---intentionality---which isn't explained by computation. This is a point acknowledged by the Encyclopedia Britannica article, and it's the issue Pylyshyn explicitly tables. Everything else may well be computational.

Thus, since there is an aspect of mind that isn't computational, computation can't explain the mind---completely. But that doesn't mean that computation has no explanatory utility.
However, against that, you claimed the following:
Quote:
Originally Posted by wolfpup View Post
But intentionality is absolutely at the core of what cognition fundamentally is!

[...]

So it doesn't matter if you acknowledge that "everything else" about the brain may be computational. It seems to me that if you claim "that there is a specific capacity of minds, the interpretation of symbols---intentionality---which isn't explained by computation" then it follows that no aspect of cognition can be explained by computation, which is precisely how I characterized your argument.
Thus explicitly disavowing the notion that there could be a satisfying computational theory of cognition that doesn't give an account of how semantics arises.

It's this that threw me. You seem to simultaneously be claiming that computation inherently can give no account of semantics, that an account of semantics is absolutely essential to a satisfying theory of cognition, and yet, that computationalism yields a satisfying theory of cognition---which I still don't see how to reconcile.

Quote:
Secondly, your attempt to imply that the first part of (b) constitutes an irresolvable dilemma is also wrong, as efforts to understand this problem have been part of the evolving history of cognitive science for decades. Fodor's important book, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, is a good example of that progress. To quote from a review of it, "... it not only defends our "commonsense" psychological practice of ascribing content or meaning to mental states (i.e., our assuming that they represent or are about objects other than themselves), but also provides the beginnings of a causal account of how such intentional states are even possible ...".
However, a causal account of intentionality isn't a computational account---causality being a physical notion, not a computational one. On such a theory, it's not, as you have variously claimed, the syntactical manipulation of symbols that provides them with meaning, but the additional notion of the tokening of these symbols being causally related to their semantic content. One might then appeal to such a theory---or any of a wide array of 'naturalizations' of mental content---in order to provide the meanings for representations that computation alone fails to issue.

Quote:
Again, a significant point of correction is in order. There is certainly a contradiction between your view that CTM is "wrong" but can still be a useful model, and the explicit statement I cited earlier that it's not just a model but intended to be a literal description of cognition.
It depends on what you mean by 'literal description'. For instance, on the sort of view that computation is operation on representational vehicles, and that, indeed, computations are only individuated with respect to the semantic content of their representational vehicles (see Shagrir's 'Brown Cow' example), without any sort of commitment to how they acquire their representational content, it might be apropos to call the brain 'literally a computer'; but then, such a claim doesn't entail something like the thesis of computational sufficiency above.

The brain would then be a computer, but it would be a different sort of computer than the one I'm now typing on. As Shagrir puts it, those computers 'operate on symbols whose content is, indisputably, observer-dependent'. So we'd have two species of computation: one whose content is fixed (our brain), and one whose content is observer-dependent (every other computer).

I think that this is a terminologically inconvenient move. One can validly assume the position that what makes something a computer is merely how it handles the symbols it manipulates, in which case, one could argue that the brain does this handling in the same way as a desktop computer does, albeit using symbols that possess original, rather than derived, intentionality. But I think the issue here is really just one of terminology, and I think that the meaningful nature of mental symbols is enough of a difference to the interpretation-dependent symbols of ordinary computers to consider them different kinds.

Quote:
But again, to avoid misunderstanding, no one claims that CTM alone is a complete description of everything about the mind.
You've claimed the exact opposite before:
Quote:
Originally Posted by wolfpup View Post
Fodor and Chalmers and of course nearly everyone in cognitive science supports the computational account of cognition, though they differ in their approaches. But where Chalmers agrees with all the others is on the following two foundational issues -- both of which I assume you consider to be complete nonsense that I should just stop posting about:
  • Computational sufficiency, stating that the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties.
  • Computational explanation, stating that computation provides a general framework for the explanation of cognitive processes and of behavior.
According to this post, computationalists (all of them) agree on the thesis of computational sufficiency, which is exactly the thesis that computation suffices for mind.

And of course, the notion that the mind is wholly computational still is the basic issue of this thread, which is what I started out arguing against, and have continued to do.

Quote:
I actually think your analogy is a good one, but not in the way you intended. In order for the analogy to accurately reflect the kind of claim you're making, there would have to have been a fundamental theoretical flaw in Newtonian theory observable right from the start, such that everyone knew the theory was wrong but used it anyway because they had nothing better.
Actually, that was a widespread view of Newton's theory. The law of gravitation, in particular, postulated an action at a distance, without giving any account of how that action could be transmitted, something that greatly troubled Newton's contemporaries (notably Leibniz and the Cartesians). Newton, in his General Scholium, essentially acknowledged this problem, but refused to 'feign any hypotheses'---'hypotheses non fingo'. Furthermore, he excluded such hypotheses on methodological grounds, claiming that they have no place in 'experimental philosophy'.

So there is a central part of the theory whose workings aren't explained by the theory itself, and which still didn't lead to any problems with the use of the theory.

Of course, modern theories have since stepped in---General Relativity could do away with the action at a distance, and explained this primitive notion of Newtonian mechanics from more fundamental principles of how matter influences spacetime.

Quote:
A resolution to this problem would render moot your criticism, and that of Chalmers, Searle, Dreyfus, and other skeptics, who typically reject not only CTM but the whole notion of "real" computational intelligence, which I find to be a rather sadly pessimistic outlook.
It depends. A computational solution would alleviate the issue I see, but a solution that essentially depends on non-computational concepts would merely affirm it.

Last edited by Half Man Half Wit; 07-07-2019 at 05:00 AM.
  #538  
Old 07-07-2019, 08:45 AM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by Half Man Half Wit View Post
Indeed, it was me who proposed that you could have a meaningful computational theory of cognition without the aim to account for semantics:


However, against that, you claimed the following:


Thus explicitly disavowing the notion that there could be a satisfying computational theory of cognition that doesn't give an account of how semantics arises.

It's this that threw me. You seem to simultaneously be claiming that computation inherently can give no account of semantics, that an account of semantics is absolutely essential to a satisfying theory of cognition, and yet, that computationalism yields a satisfying theory of cognition---which I still don't see how to reconcile.
Look, it's just a fact that computational theories of mind which hold that mental processes are syntactic operations on mental representations are well established and widely accepted in numerous pertinent fields, while the nature of these mental representations continues to be a work in progress, and that's the point I was making. The second part that you think is contradictory was just my interpretation of what I believed YOUR position to be, namely that any such computational theories are just models that use the computational paradigm as a metaphor, and that this couldn't possibly be how cognition really works -- and I subsequently cited numerous examples of your hostility to CTM.
Quote:
Originally Posted by Half Man Half Wit View Post
You've claimed the exact opposite before:

According to this post, computationalists (all of them) agree on the thesis of computational sufficiency, which is exactly the thesis that computation suffices for mind.
No, I've cited Fodor numerous times (in this thread, but also long prior to this thread) as clearly stating that CTM is very far from a complete description of the mind, and in fact far from a complete description of cognitive psychology. Chalmers' statement that "the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties" refers to a computational structure that we are far from adequately describing in any computational theory we have today; asserting that such a structure would suffice is in no way inconsistent with a statement about the limitations of present theories. That said, while there's no contradiction there, I think Chalmers probably overstated the case; a more conservative statement would leave out the mention of mind and say that "the right kind of computational structure suffices for the possession of a wide variety of mental properties".
  #539  
Old 07-07-2019, 09:51 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
Quote:
Originally Posted by wolfpup View Post
No, I've cited Fodor numerous times (in this thread, but also long prior to this thread) as clearly stating that CTM is very far from a complete description of the mind, and in fact far from a complete description of cognitive psychology.
Which is the source of my confusion. You've variously claimed that computers don't deal in semantics[1], that Watson deals in semantics[2], and that computations deal in semantics once they become 'complex' enough[3]. You've claimed that the brain literally is a computer[4], and that there are aspects of it that aren't computational[5]. You've claimed that everybody agrees on the thesis of computational sufficiency[6], and maintained that nobody ever held that view[7], despite it being explicitly the topic of this thread.

Now, it might be that throughout all of this, you actually had a consistent thesis in mind. But if so, I don't think I'm overreaching when I say that you didn't do a great job of expressing it clearly. Consequently, I'm somewhat left grasping at what you actually think the relationship between computation and the mind is, how, in detail, symbols acquire their semantics, and so on.

----------------------------------------------------

[1]
Quote:
Originally Posted by wolfpup View Post
Your demand for an example of a computation that uniquely computes f is not even meaningful, because it demands a computation that operates on semantics rather than syntax, which is counterfactual to what a Turing-equivalent computation fundamentally is.
[2]
Quote:
Originally Posted by wolfpup View Post
To claim that Watson doesn't do semantic analysis -- and moreover to try to justify that claim by equating it to a table lookup -- is, again, just philosophical sophistry.
Quote:
Originally Posted by wolfpup View Post
The Jeopardy question arrives as a string of symbols. The semantics derived from those symbols become obvious just as soon as Watson starts the process of query decomposition and hypothesis generation.
[3]
Quote:
Originally Posted by wolfpup View Post
as computations grow more complex, they themselves endow the symbols with semantics, and so evolves intelligence, both human and artificial, and none of it requires a little homunculus to observe it in order to make it real
[4]
Quote:
Originally Posted by wolfpup View Post
CCTM is precisely what Putnam initially proposed and was then further developed into a mainstream theory at the forefront of cognitive science by Fodor (bolding mine):
According to CCTM, the mind is a computational system similar in important respects to a Turing machine ... CCTM is not intended metaphorically. CCTM does not simply hold that the mind is like a computing system. CCTM holds that the mind literally is a computing system.
https://plato.stanford.edu/entries/c.../#ClaComTheMin
Quote:
Originally Posted by wolfpup View Post
as I showed in the quote in #196, CTM is absolutely not a metaphor and holds that mental processes are literally computations, and indeed Fodor laid out a detailed theory of exactly how those computations are carried out.
Quote:
Originally Posted by wolfpup View Post
It is directly contradicted by the quoted bit from the Stanford Encyclopedia of Philosophy, which actually goes out of its way to very explicitly define classical CTM as being precisely the theory that the brain is literally a computer (though I think most theorists today would prefer to say that mental processes are literally computational), and that CTM is not some mere modeling metaphor, say the way we model climate systems to better understand them.
[5]
Quote:
Originally Posted by wolfpup View Post
"Wholly computational" was manifestly never my claim, and I was clear on that from the beginning. And if it had been, I'd certainly never lean on Fodor for support, as he was one of the more outspoken skeptics about its incompleteness, despite his foundational role in bringing it to the forefront of the field.
[6]
Quote:
Originally Posted by wolfpup View Post
But where Chalmers agrees with all the others is on the following two foundational issues -- both of which I assume you consider to be complete nonsense that I should just stop posting about:
  • Computational sufficiency, stating that the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties.
  • Computational explanation, stating that computation provides a general framework for the explanation of cognitive processes and of behavior.

[7]
Quote:
Originally Posted by wolfpup View Post
But again, to avoid misunderstanding, no one claims that CTM alone is a complete description of everything about the mind.
  #540  
Old 07-07-2019, 12:40 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by Half Man Half Wit View Post
Which is the source of my confusion. You've variously claimed that computers don't deal in semantics[1], that Watson deals in semantics[2], and that computations deal in semantics once they become 'complex' enough[3]. You've claimed that the brain literally is a computer[4], and that there are aspects of it that aren't computational[5]. You've claimed that everybody agrees on the thesis of computational sufficiency[6], and maintained that nobody ever held that view[7], despite it being explicitly the topic of this thread.
You're not the only one that is confused. Yesterday I went through and scooped up those same quotes plus about 10 others and was going to post something similar today.
  #541  
Old 07-07-2019, 12:53 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by Half Man Half Wit View Post
The issue here isn't one of what, but one of how. ANNs are certainly computational, but they're considered a different form of computation---the term most often applied is 'sub-symbolic', as opposed to the symbol-manipulating Turing machine kind of computation. Both approaches are known to be equivalent in power, but that doesn't automatically imply that they're equally well suited for giving rise to minds.
Ok, but the symbol-manipulating Turing machine that implements an ANN (that does something) doesn't seem any different from the Turing machine that implements a car simulation using non-ANN techniques. In both cases the same symbols are being used (e.g. 1's and 0's) and they have no intrinsic meaning (as this thread has shown).

It seems like the phrase 'symbolic computation' is implying a higher level of abstraction, one in which the symbols are more directly mapped to real-world entities?
  #542  
Old 07-07-2019, 01:38 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by Half Man Half Wit View Post
Which is the source of my confusion. You've variously claimed that computers don't deal in semantics[1], that Watson deals in semantics[2], and that computations deal in semantics once they become 'complex' enough[3]. You've claimed that the brain literally is a computer[4], and that there are aspects of it that aren't computational[5]. You've claimed that everybody agrees on the thesis of computational sufficiency[6], and maintained that nobody ever held that view[7], despite it being explicitly the topic of this thread.
That's an outrageous attempt to create hugely inaccurate juxtapositions of the things I've been saying, and using intentionally sloppy language to try to create the impression of inconsistency (what exactly does "deal in semantics" mean?). It's astonishing that you put that much effort into this pointless exercise.

The issues are complex and plain language is sometimes subject to ambiguities, especially when writing quickly, but it's hard to believe that there could be genuine misinterpretation to quite this extent. OTOH, you appear to have a good deal of inconsistency and backtracking yourself, going fairly rapidly from characterizing CTM as being "wrong" to being "useful" while nonetheless characterizing it incorrectly as merely a useful model, which is exactly NOT how it's generally regarded.

But on your specific points:

[1] is a straightforward statement of what Turing-equivalent computation is.

[2] is a statement about linguistic semantics (I notice here that the distinction you made earlier about the word "semantics" having different formal meanings in computer science than in general speech has been conveniently forgotten), and is the kind of observation frequently and correctly made about AI, that notwithstanding the fact that it works with apparently meaningless symbols, it nonetheless sometimes appears to express meaningful understanding of its problem domain. How this happens, or if it truly happens at all, is an ongoing philosophical debate, precisely the kind that Searle's Chinese Room argument was supposed to answer in the negative (but fails to do). It's certainly not a matter that can be dismissed out of hand or philosophers like Searle and Dreyfus would not have been going on about it for most of their careers. Dismissal out of hand appears to be your game, not mine.

[3] is just a restatement of [2].

[4] states that CTM is not just a useful model or a metaphor as you wrongly implied, but intended to be a literal description of cognition as a computational paradigm, or as Fodor put it, syntactic operations on mental representations. But as Fodor repeatedly said, and as I've said throughout, this doesn't mean that everything about the mind is necessarily computational, or at least that everything about the mind can be described by CTM, but only that many important cognitive processes can be so described. And even there, Fodor doesn't believe that his version of CTM as presently formulated is anywhere near a complete description.

[5] is perfectly consistent with what I just said in [4].

[6] and [7] are your feigning "confusion" over a matter I just finished explaining, that Chalmers' statement of computational sufficiency (a general principle) is not the same as CTM (a family of specific theories, acknowledged as far from a complete description of the mind).

As for "the topic of the thread", much of the discussion had segued into an argument about the nature of cognition, as I already pointed out but you chose to ignore, and that's a more specific argument and at least one that can be had based on tangible research. Whereas no one knows anything about consciousness. So most of my arguments, like yours, have been about the subjects of computation and cognition, not consciousness as such.

The one thing I will acknowledge here, in all frankness, is that I'm a bit more doubtful in retrospect that there would be quite as much universal agreement with Chalmers' "computational sufficiency" principle as I had implied. I do believe he's overstating the case by including "mind" in that statement, and I think a stronger argument (and one that would be much more widely supported) would be one where he had merely said, as I noted before, that "the right kind of computational structure suffices for the possession of a wide variety of [cognitive] mental properties".
  #543  
Old 07-07-2019, 02:23 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
That's an outrageous attempt to create hugely inaccurate juxtapositions of the things I've been saying, and using intentionally sloppy language to try to create the impression of inconsistency (what exactly does "deal in semantics" mean?). It's astonishing that you put that much effort into this pointless exercise.
I put in the same effort independently but he posted first.

If two different people are independently seeing that much contradiction, is it possible that you may have a portion of responsibility in the miscommunication?

And you keep calling things "pointless" and "not relevant" when people try to clarify your position. Isn't that the preferred approach, to ask for a person to clarify their position vs continuing posting under incorrect assumptions?
  #544  
Old 07-07-2019, 02:42 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
wolfpup, here's another area where it's not clear what your position is due to posts that seem to imply different positions about "computation":

From a previous thread where you challenged me on my usage of the word "computation":
Quote:
Originally Posted by wolfpup View Post
What a calculator does is therefore qualitatively different from any claimed "calculation" that a single logic gate does, and what a computer does is another qualitative leap into something fundamentally and qualitatively different -- not just quantitatively different -- both from a calculator and from its individual logic gates. And what such a Turing-equivalent computer does is what we properly call "computation" in the context of computer and cognitive sciences, not the other stuff.
Quote:
Originally Posted by wolfpup View Post
...you haven't achieved computation in the sense of CTM until you have the basic functionality of procedures (stored-program algorithms) operating on abstract representations of the world stored in memory and producing output -- until you have, IOW, a limited Turing machine ("limited" in the sense of having finite memory). Even an analog "computer" such as existed in the 50s and early 60s doesn't qualify, because it solves problems by approximating phenomena on analog voltages and not by symbol-processing algorithms.

From this thread:
Quote:
Originally Posted by wolfpup View Post
Thus, a Turing machine, or an implementation of one using logic gates, which takes as input any two digits say in the range of 0 to 9 and whose output is their product is obviously performing a computation, but a program which knows nothing about arithmetic and which implements what back in my day in grade school was a "multiplication table" and generates the answer by table lookup is also doing computation. Not only is it doing computation, but according to my criterion, it is doing a computation exactly equivalent to the former, because it produces exactly the same mapping for all possible inputs.
and

Quote:
Originally Posted by wolfpup View Post
It would be a table of all possible switch positions, and the light pattern that is produced by each combination. Note that this table is objective and independent of interpretation, taking into account only the computational properties of the box.

Bolding and size in above quotes added by me.



In the previous thread, a calculator does not "compute" but in this thread, HMHW's box does compute.


If we can just get clarification about what is a computation and what isn't from your perspective, then we can proceed based on that clarification.
  #545  
Old 07-07-2019, 03:05 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by RaftPeople View Post
In the previous thread, a calculator does not "compute" but in this thread, HMHW's box does compute.


If we can just get clarification about what is a computation and what isn't from your perspective, then we can proceed based on that clarification.
I would think it would be obvious that the notion of "computation" has both formal and informal definitions, and any alleged contradiction just arises from this terminological ambiguity. A typical calculator doesn't compute in the formal Turing-equivalent sense (and neither does HMHW's box) because the formal sense of computation involves the concept of a stored program executing a series of stepwise syntactical operations on symbols, which Turing formalized as symbols on a tape.

Perhaps you can ask HMHW why he alleges that his box "computes" both the functions f and f' (and many others, according to the whim of the observer), since this is his example, not mine. My explanation is simply that it's a looser use of the term so that he can illuminate his argument with a simple instance of the alleged observer-dependency of symbolic representations, and I'm fine with calling it a "computation" for that purpose. But the box is clearly neither Turing-equivalent nor in any sense a stored-program computer.
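
For readers trying to follow the box example, here's a minimal reconstruction of the schema (the wiring and the bit-widths are mine, not necessarily HMHW's original table): one fixed physical switch-to-lamp mapping, read under two different symbol-to-number interpretations, yields two different arithmetic functions:

Code:
# Reconstruction of the interpretation-dependence schema (the wiring
# and bit-widths are mine, not necessarily the original table): one
# fixed physical switch-to-lamp mapping, two readings.

def device(s3, s2, s1, s0):
    # Fixed "wiring": treat the switch pairs as two 2-bit patterns
    # and light the lamps with their 3-bit sum.
    total = ((s3 << 1) | s2) + ((s1 << 1) | s0)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)

def read_f(x, y):
    # Interpretation 1: switch up = 1, lamp lit = 1.
    l2, l1, l0 = device((x >> 1) & 1, x & 1, (y >> 1) & 1, y & 1)
    return (l2 << 2) | (l1 << 1) | l0

def read_f_prime(x, y):
    # Interpretation 2: every bit is read inverted (up = 0, lit = 0).
    a, b = 3 - x, 3 - y
    l2, l1, l0 = device((a >> 1) & 1, a & 1, (b >> 1) & 1, b & 1)
    return 7 - ((l2 << 2) | (l1 << 1) | l0)

for x in range(4):
    for y in range(4):
        assert read_f(x, y) == x + y            # f: addition
        assert read_f_prime(x, y) == x + y + 1  # f': a different function

The physical mapping never changes; only the symbol-to-number reading does, and with it the function computed.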
  #546  
Old 07-07-2019, 05:14 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
I would think it would be obvious that the notion of "computation" has both formal and informal definitions, and any alleged contradiction just arises from this terminological ambiguity. A typical calculator doesn't compute in the formal Turing-equivalent sense (and neither does HMHW's box) because the formal sense of computation involves the concept of a stored program executing a series of stepwise syntactical operations on symbols, which Turing formalized as symbols on a tape.

Perhaps you can ask HMHW why he alleges that his box "computes" both the functions f and f' (and many others, according to the whim of the observer), since this is his example, not mine. My explanation is simply that it's a looser use of the term so that he can illuminate his argument with a simple instance of the alleged observer-dependency of symbolic representations, and I'm fine with calling it a "computation" for that purpose. But the box is clearly neither Turing-equivalent nor in any sense a stored-program computer.
But a "computation" does not need to be performed by a turing equivalent system, it's just that the turing equivalent system is capable of computing anything that is computable.

A purpose specific system that is not turing equivalent can still perform the computations it was designed for.

You seem to be using a different definition than the one I see used by academics that I read. You seem to be saying that only turing equivalent systems perform computations, is this correct?
  #547  
Old 07-07-2019, 06:25 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by RaftPeople View Post
But a "computation" does not need to be performed by a turing equivalent system, it's just that the turing equivalent system is capable of computing anything that is computable.

A purpose specific system that is not turing equivalent can still perform the computations it was designed for.

You seem to be using a different definition than the one I see used by academics that I read. You seem to be saying that only turing equivalent systems perform computations, is this correct?
I don't know what useful objective you're pursuing with this line of interrogation, which started off with some strange accusation that I contradicted myself about what computation means. The appropriate definition at least partly depends on the context of the conversation. When discussing the Computational Theory of Mind, the prominent theorists that I know specifically rely on Turing's definition via his eponymous machine to define precisely what they mean, thus avoiding philosophical detours like whether a rock performs computations. If the academics that you read define computation some other way, you should try reading the ones who are concerned with CTM. Thus:
At its core, though, RTM is an attempt to combine Alan Turing’s work on computation with intentional realism (as outlined above). Broadly speaking, RTM claims that mental processes are computational processes, and that intentional states are relations to mental representations that serve as the domain of such processes. On Fodor’s version of RTM, these mental representations have both syntactic structure and a compositional semantics. Thinking thus takes place in an internal language of thought.

Turing demonstrated how to construct a purely mechanical device that could transform syntactically-individuated symbols in a way that respects the semantic relations that exist between the meanings, or contents, of the symbols. Formally valid inferences are the paradigm. For instance, modus ponens can be realized on a machine that’s sensitive only to syntactic properties of symbols. The device thus doesn’t have “access” to the symbols’ semantic properties, but can nevertheless transform the symbols in a truth-preserving way. What’s interesting about this, from Fodor’s perspective, is that mental processes also involve chains of thoughts that are truth-preserving.
https://www.iep.utm.edu/fodor/
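
To make the quoted point concrete, here is a minimal sketch of a rule that is sensitive only to the shape of the symbols yet transforms them in a truth-preserving way (the string matching is deliberately crude, and purely illustrative):

Code:
# Minimal sketch of the quoted point: a rule sensitive only to the
# *shape* of the symbols, yet truth-preserving. The string matching
# is deliberately crude.

def modus_ponens(premises):
    derived = set()
    for p in premises:
        if "->" in p:
            antecedent, consequent = (s.strip() for s in p.split("->", 1))
            if antecedent in premises:
                derived.add(consequent)
    return derived

print(modus_ponens({"P", "P -> Q"}))  # -> {'Q'}, with no 'access' to meanings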
  #548  
Old 07-07-2019, 07:31 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,729
Quote:
Originally Posted by wolfpup View Post
I don't know what useful objective you're pursuing with this line of interrogation,...
Having a common understanding of terms and people's positions seems like a pretty useful objective to support a good conversation.


Quote:
The appropriate definition at least partly depends on the context of the conversation. When discussing the Computational Theory of Mind, the prominent theorists that I know specifically rely on Turing's definition via his eponymous machine to define precisely what they mean, thus avoiding philosophical detours like whether a rock performs computations.
But Turing never stated that only a Turing-equivalent machine performs computations. He did establish that a Turing-complete machine can compute any computable function, but that says nothing about whether lesser machines perform computations or not.

So where are you seeing anyone claim that more limited machines like calculators can't be said to compute the functions they were designed to compute?

You seem to be claiming:
1 - The function of addition performed on a calculator is not considered a computation
2 - The function of addition performed on one of today's personal computers is considered a computation
  #549  
Old 07-07-2019, 08:34 PM
wolfpup's Avatar
wolfpup is online now
Guest
 
Join Date: Jan 2014
Posts: 11,068
Quote:
Originally Posted by RaftPeople View Post
You seem to be claiming:
1 - The function of addition performed on a calculator is not considered a computation
2 - The function of addition performed on one of today's personal computers is considered a computation
No. Both can be considered "computations" in the trivial sense in which "computation" is just synonymous with "calculation". They can also be regarded as computations in the equally trivial sense that both can be interpreted as operations on symbols. Turing's insights defined a much more formal notion of computation in terms of a Logical Computing Machine (LCM -- which became known as the Turing Machine) and its practical incarnation, the PCM, aka the Automatic Digital Computing Machine. The difference from a calculator is not in any one particular calculation, but in the fact that Turing's model describes a stored-program digital computer which executes a series of stored instructions and undergoes state transitions in response to the syntax of stored symbols. It became the iconic definition of what a stored-program digital computer is as opposed to a calculator.

Though the machine was originally proposed to advance his theory of computable numbers, Turing later concluded that such a machine could make (non-numerical) logical inferences and ultimately exhibit intelligent behavior far beyond merely doing calculations. The explicit reference to the Turing machine in descriptions of CTM is to make clear that this is what is meant by the "computational" part of the Computational Theory of Mind. The description I quoted above in #547, giving the basic outline of Fodor's Representational Theory, would not be possible without the explicit understanding that this is what "computational" means in this context. I don't know how I can possibly be any more clear than that.
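
For concreteness, here is a minimal sketch of that formal notion---a stored instruction table, a tape of symbols, and stepwise transitions keyed solely to (state, symbol). The toy machine below just flips every bit on its tape; it's illustrative only:

Code:
# A stored instruction table, a tape of symbols, and stepwise
# transitions keyed solely to (state, symbol). This toy machine just
# flips every bit on its tape; illustrative only.

TABLE = {
    # (state, read) -> (write, head move, next state)
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # '_' is the blank symbol
}

def run(tape):
    cells, head, state = list(tape), 0, "scan"
    while state != "halt":
        write, move, state = TABLE[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells)

print(run("1011_"))  # -> '0100_'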

Last edited by wolfpup; 07-07-2019 at 08:37 PM.
  #550  
Old 07-07-2019, 11:52 PM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,854
Quote:
Originally Posted by RaftPeople View Post
Ok, but the symbol-manipulating Turing machine that implements an ANN (that does something) doesn't seem any different from the Turing machine that implements a car simulation using non-ANN techniques. In both cases the same symbols are being used (e.g. 1's and 0's) and they have no intrinsic meaning (as this thread has shown).
But if instantiating the relevant mental properties on a Turing machine always required the simulation of an ANN, then one might rightly hold that these properties are instantiated by virtue of the ANN's structure, rather than the TM's symbol-manipulation, it seems to me.

Quote:
Originally Posted by wolfpup View Post
That's an outrageous attempt to create hugely inaccurate juxtapositions of the things I've been saying, and using intentionally sloppy language to try to create the impression of inconsistency (what exactly does "deal in semantics" mean?). It's astonishing that you put that much effort into this pointless exercise.
I made that effort to impress upon you that, while I have no doubt it seemed to you like you were proposing a consistent story, your actual posts have made it hard to discern what that story is, and thus, answer appropriately. For instance, there may be a story where it's reasonable to both claim that 'the brain literally is a computer' and that 'the brain isn't wholly computational', but on the face of it, these are contradictory statements. Hence, my hope to get you to actually provide the story by highlighting what seemed contradictory to me.

Quote:
The issues are complex and plain language is sometimes subject to ambiguities, especially when writing quickly, but it's hard to believe that there could be genuine misinterpretation to quite this extent.
Which seems OK for you, but you immediately balk at my usage of plain language ('deal in semantics').

Quote:
OTOH, you appear to have a good deal of inconsistency and backtracking yourself, going fairly rapidly from characterizing CTM as being "wrong" to being "useful" while nonetheless characterizing it incorrectly as merely a useful model, which is exactly NOT how it's generally regarded.
My argument, from the start, has only been that there are some aspects of the brain---notably, its interpretational capacity---that can't be computational. From my very first post in this thread (relevant parts highlighted):
Quote:
Originally Posted by Half Man Half Wit View Post
if there's no fact of the matter regarding what mind a given system computes unless it is interpreted as implementing the right computation, then whatever does that interpreting can't itself be computational, as otherwise, we would have a vicious regress---needing ever higher-level interpretational agencies to fix the computation at the lower level. But if minds then have the capacity to interpret things (as they seem to), they have a capacity that can't be realized via computation, and thus are, on the whole, not computational entities.
This simply and rather explicitly argues that minds can't be completely computational, because they possess a capacity that can't be realized computationally. As soon as I noticed that you believed I was arguing for a rejection of computation-based cognitive science tout court, I tried to clarify---but to no avail, it seems.

Quote:
Originally Posted by wolfpup View Post
[2] is a statement about linguistic semantics (I notice here that the distinction you made earlier about the word "semantics" having different formal meanings in computer science than in general speech has been conveniently forgotten)
You told me yourself it's not relevant. Besides, earlier on, you agreed with me defining semantics in terms of the meanings of symbols:
Quote:
Originally Posted by wolfpup View Post
Quote:
Originally Posted by Half Man Half Wit View Post
Let's try and get at this another way. Take the two words 'dog' and 'Hund'. They're different symbols; yet, there is something that's the same about them, namely, what they denote---one is the general name for various canines in English, the other in German. So, there is some level beyond the merely symbolic (or syntactic), and that level is what we're talking about here.
You've just given a simple example of what semantics is.
But that's the same sort of semantics my box needs to have in order to compute any distinct functions---symbols (lamp or switch-states) mapped to their meaning (numbers).

Quote:
Originally Posted by wolfpup View Post
and is the kind of observation frequently and correctly made about AI, that notwithstanding the fact that it works with apparently meaningless symbols, it nonetheless sometimes appears to express meaningful understanding of its problem domain. How this happens, or if it truly happens at all, is an ongoing philosophical debate
But then, what's the relevance of appealing to Watson at all?

Quote:
Dismissal out of hand appears to be your game, not mine.
Right, you dismiss by calling people 'nitwits', instead.

Quote:
[4] states that CTM is not just a useful model or a metaphor as you wrongly implied, but intended to be a literal description of cognition as a computational paradigm, or as Fodor put it, syntactic operations on mental representations. But as Fodor repeatedly said, and as I've said throughout, this doesn't mean that everything about the mind is necessarily computational, or at least that everything about the mind can be described by CTM, but only that many important cognitive processes can be so described.
So, explain it! How does 'the brain is literally a computer' not mean that everything about it is computational? Because if by 'the brain is literally a computer' you just mean 'a part of the brain is literally a computer', then that's perfectly consistent with my argument, with the part that's not a computer supplying the interpretation of symbols. Otherwise, what is it that makes it a computer if it's not wholly computational?

Quote:
[6] and [7] is your feigning "confusion" over a matter I just finished explaining, that Chalmers' statement of computational sufficiency (a general principle) is not the same as CTM (a family of specific theories, acknowledged as far from a complete description of the mind).
And yet, you explicitly include Fodor in the list of people that agree with Chalmers regarding computational sufficiency:
Quote:
Originally Posted by wolfpup View Post
Fodor and Chalmers and of course nearly everyone in cognitive science supports the computational account of cognition, though they differ in their approaches. But where Chalmers agrees with all the others is on the following two foundational issues
And I hope you'll at least admit, in light of this, that yes, some people have claimed that computation is sufficient for mind, and it's that claim that my arguments are directed against.

Quote:
Originally Posted by wolfpup View Post
So most of my arguments, like yours, have been about the subjects of computation and cognition, not consciousness as such.
My argument, from the beginning, has been that there's at least one aspect of the mind that's not realized by computation. Nothing else. If you actually agree with that, I wonder why you ever decided to challenge it, and continued to do so even after my repeated attempts to point out that no, this doesn't overturn all of cognitive science.

Quote:
Originally Posted by wolfpup View Post
I would think it would be obvious that the notion of "computation" has both formal and informal definitions, and any alleged contradiction just arises from this terminological ambiguity. A typical calculator doesn't compute in the formal Turing-equivalent sense (and neither does HMHW's box) because the formal sense of computation involves the concept of a stored program executing a series of stepwise syntactical operations on symbols, which Turing formalized as symbols on a tape.
A computer need not be a stored-program device to compute. Turing's formulation picks out a range of functions that can be realized by mechanical computation; anything that implements any of these functions (which can be characterized without reference to Turing machines---for instance, via the Lambda calculus, or Gödel's recursive functions, and so on) is a 'computer' properly so called. Thus, the calculator is just as much a computer as any Turing machine (just not a universal one).
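
For instance, here is addition via Church numerals, straight out of the lambda calculus (the encodings are standard; the Python rendering is mine)---no tape, no stored program, no program counter:

Code:
# Addition via Church numerals, straight out of the lambda calculus
# (standard encodings; the Python rendering is mine): no tape, no
# stored program, no program counter.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def church(k):    # ordinary int -> Church numeral
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def unchurch(n):  # Church numeral -> ordinary int
    return n(lambda i: i + 1)(0)

assert unchurch(add(church(2))(church(3))) == 5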

In fact, it's usual to disavow the necessity of programmability in the CTM:
Quote:
First, CCTM is better formulated by describing the mind as a “computing system” or a “computational system” rather than a “computer”. As David Chalmers (2011) notes, describing a system as a “computer” strongly suggests that the system is programmable. As Chalmers also notes, one need not claim that the mind is programmable simply because one regards it as a Turing-style computational system. (Most Turing machines are not programmable.)

Last edited by Half Man Half Wit; 07-07-2019 at 11:55 PM.