Downloading Your Consciousness Just Before Death.

That’s compatible with computationalism not giving an account of how mental representation comes about, though—you take it that there are mental representations (whatever, exactly, those are), and that they’re manipulated via computation. That’s the perspective I take Shagrir to take, and he applies it to a case study of the computational explanation of vision—which is something that seems to genuinely produce novel insight, and which is just the kind of application of computation to cognition I have absolutely no problem with.

Think about the notion of mass: it’s absolutely fundamental to Newtonian physics, but that theory itself gives no account of it; it’s taken as a primitive property of matter. That doesn’t mean that the theory is useless—even without giving an account of what mass is, it is greatly illuminating on the subject of how mass behaves. The same can be true of cognitive science: without giving an account of what mental representations are, it can greatly illuminate how they are manipulated to produce the workings of our minds.

I have argued, and still am arguing, one thing only: there’s at least one capacity of minds that isn’t explained by computation, and that’s intentionality (I think the same is true of phenomenal experience—and I think the two are somewhat interwoven—but that’s another matter). I don’t believe that this marks the downfall of computational modeling; but it does mean that eventually, we’ll have to go beyond the notion of computation to explain the mind.

Then whatever do you mean when you say things like this:

The question of whether the box instantiates f or f’ is exactly the problem of the semantics of the symbols it uses—which, on computationalism, is the semantics of mental symbols.

There you go again eating your [del]cake[/del] box and still having it too. :slight_smile:

Wait a sec – reality check here! You’ve repeatedly told us that the widespread acceptance of CTM is irrelevant, that Fodor was wrong, that widely accepted theories have been wrong before, and that CTM in fact amounts to being just like the caloric theory of heat (I’m surprised you didn’t compare CTM to phlogiston and alchemy!). Just a few snippets that I had the patience to look up – it’s particularly instructive to go back to your claims in some of the earliest posts:

Putnam long since dismantled CTM, and the rest of the world is just slow to catch up

Computational theories of mind imply infinite regress

that the utility of CTM is merely as a kind of model

Equating CTM with the archaic and discredited theory of caloric, and saying that I got myself “all in a huff about [your] disagreement with Fodor”, with whom you now apparently agree after all.

And even more recently, “I’ve been presenting a widespread doubt about the computational theory of mind”.

Now all of a sudden you’re telling us that it’s a wonderfully useful theory with a great deal of explanatory power! Of course I understand the point that a theory can be useful and provide great insights even if it’s in some respects incomplete (fails to account for certain primitive properties), but surely you can see that it’s hard to avoid the conclusion that there’s a great deal of backtracking going on here relative to what you were saying before.

It also raises the serious question of just exactly what you now think CTM is. Is it just a useful model of something that can’t really exist in reality? Or does it describe a literal reality? Because if the latter, then no amount of equivocation can avoid the conclusion that your argument that the processes operating on mental representations have to be non-computational – because said representations possess intrinsic semantic properties – has to be discarded as simply wrong, because it’s incompatible with that view. And we find, in fact, that many if not most proponents of CTM endorse that latter view: In Computation and Cognition, Pylyshyn argues that computation must not be viewed as just a convenient metaphor for mental activity, but as a literal empirical hypothesis.

I was just explaining how the argument has shifted from the currently unanswerable question of uploading the entirety of the human mind to an argument about the CTM, and that the latter is what I was defending. I make no claim that the latter in any way implies the former, mainly on the grounds that CTM is manifestly incomplete.

I don’t regard it as the same problem. In the seminal paper I cited on CTM, later expanded into a book on the subject, Pylyshyn freely acknowledges that the same computational states can represent multiple different interpretations in just this way, while still promoting a strong version of CTM. The difficulties lie in explaining the intrinsic semantics of mental representations in the mind and linking them to physical processes. Most cognitive scientists who support CTM would reject your claim that your simplistic example in any way proves that such processes cannot possibly be computational.

Did you know that Fodor himself thought that the cognitive mind was not computational? He held that only the modules performing specific functions are computational, not the higher-level mind that integrates the results of the various modules and makes decisions and guides behavior.

From this paper (written by a computationalist) https://schneiderwebsite.com/uploads/8/3/7/5/83756330/the_language_of_thought_ch.1.pdf:
“Jerry Fodor, the main philosophical advocate of LOT and the related computational theory of mind (CTM), claims that while LOT is correct, the cognitive mind is likely noncomputational (2000, 2008)”
From this page: Fodor, Jerry | Internet Encyclopedia of Philosophy

Fodor’s argument for modularity precisely parallels Pylyshyn’s argument of “cognitive impenetrability”, wherein things like the Müller-Lyer illusion persist even when it’s intellectually known that the lines are of identical length. The converse is also true: the illusion does not exist in mental images. Both phenomena are actually supportive of CTM.

That didn’t answer the question that I asked.

Fodor thinks that human reasoning is NOT computational due to things like our ability to perform abductive reasoning.

More quotes about his views:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.528.6805&rep=rep1&type=pdf
“Slightly more precisely, he maintains that there is a fundamental tension between the local, syntactically determined character of classical computation and the global character of much human reasoning, especially abductive inference and planning.”
https://pdfs.semanticscholar.org/e941/6cb0b6581ebed07411e149d45fd288f3268e.pdf
“In The mind doesn’t work that way, Jerry Fodor argues that CTM has problems explaining abductive or global inference, but that the New Synthesis offers no solution, since massive modularity is in fact incompatible with global cognitive processes.”
My question to you:
Were you aware that Fodor held these views?
Do you think that Fodor was right that global inference can’t be computational?
Or do you think that Fodor was wrong?

I have to say that these last few exchanges have left me a little confused on what, exactly, your position is. For instance, you claim:

And further:

But also:

These seem to be in flagrant contradiction to me. You seem to make the following three claims:

[LIST=a]
[li] Turing-machine equivalent computation does not operate on the semantic level.[/li]
[li] The semantic level is crucial to any serious theory of cognition—without it, such a theory doesn’t even get off the ground.[/li]
[li] Computationalism is a perfectly fine explanation of cognition, and in fact, the only one currently worth taking seriously.[/li]
[/LIST]

How do you reconcile these?

Again, I fail to see the problem. I stand by all I said, but, as I have been at pains to point out, I also don’t think that this hampers the utility of computational modeling in any way. I’ll try to be clear about this for one last time:

[ol]
[li] Yes, the CTM is wrong—like Newtonian mechanics. In particular, the claim of computational sufficiency, which, as Chalmers puts it, “holds that the right kind of computational structure suffices for the possession of a mind” (i. e. the position you claim nobody has ever seriously held), is false, as intentionality is a faculty minds have that can’t be realized computationally.[/li]
[li] Yes, the CTM is useful—like Newtonian mechanics. Within its domain of applicability, it provides genuine insight, and may even be indispensable (again, like Newtonian mechanics).[/li]
[/ol]

There is no contradiction whatsoever between the two.

This doesn’t follow. The processes that operate on mental representations can be fully computational, thus making the brain a computer in this sense, while whatever imbues these representations with content is not computational.

There are, evidently, computers whose symbols do not have any objective content. As Shagrir puts it:

The brain, however, then is a computer whose symbols do have an objective content:

Explaining the latter in terms of the former is hopeless; but leaving out the question of how representational content arises, we can consider the manipulation of these symbols, and, if those manipulations respect certain properties of the symbols, we can explain how computational manipulations yield transformations of mental content—i. e., cognition. This neither needs to nor aims to explain how that content arises.

Think about it as the distinction between the soundness and the validity of an argument. Take an argument of the following form:
[ol]
[li]All As are B.[/li]
[li]X is A.[/li]
[li]→ X is B.[/li]
[/ol]

It is valid by virtue of its syntactic structure; that is, its validity is independent of what the symbols A, B, and X mean. This also means that in carrying out the argument, we learn something about X—we have concluded something. Moreover, we can study what makes arguments valid without having any notion of what the symbols used mean, nor of how they acquire this meaning. This is a completely worthwhile field of study, and a large part of the science of logic.

Yet, in order to decide whether an argument says something true—whether it is sound—we need to know what the symbols mean. Only if the premises are true is the conclusion guaranteed to be true as well. So, if ‘A’ is ‘swan’, ‘B’ is ‘white’, and ‘X’ is ‘Socrates’, then we have not made a sound argument—there are black swans. But if, on the other hand, ‘A’ is ‘humans’, ‘B’ is ‘mortal’, and ‘X’ is ‘Socrates’, then the argument is perfectly sound—and we learn something about Socrates.
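To make that division of labour concrete, here is a minimal sketch in Python (the tuple encoding and the little ‘world’ of facts are invented purely for illustration): the derivation rule only ever inspects the shape of the premises, while soundness only comes into play once an interpretation assigns truth values to them.

[code]
# Purely syntactic rule: from ("All", A, "are", B) and (X, "is", A),
# derive (X, "is", B). No meaning of A, B, or X is ever consulted.
def syllogism(premise1, premise2):
    _, A, _, B = premise1        # ("All", A, "are", B)
    X, _, A2 = premise2          # (X, "is", A)
    assert A == A2, "premises don't share a middle term"
    return (X, "is", B)          # valid regardless of interpretation

# The very same syntactic derivation, under two instantiations:
print(syllogism(("All", "swans", "are", "white"), ("Socrates", "is", "swans")))
print(syllogism(("All", "humans", "are", "mortal"), ("Socrates", "is", "humans")))

# Soundness, by contrast, depends on what the symbols mean -- i.e. on whether
# the premises come out true under an interpretation (a toy 'world' of facts):
world = {
    ("All", "swans", "are", "white"): False,   # there are black swans
    ("Socrates", "is", "swans"): False,
    ("All", "humans", "are", "mortal"): True,
    ("Socrates", "is", "humans"): True,
}

def sound(premises, world):
    return all(world[p] for p in premises)

print(sound([("All", "swans", "are", "white"), ("Socrates", "is", "swans")], world))    # False
print(sound([("All", "humans", "are", "mortal"), ("Socrates", "is", "humans")], world)) # True
[/code]

Nothing in the syllogism rule tells you where the truth values in the ‘world’ come from; that is the part the syntax leaves untouched.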

Computation and cognition may then stand in the same relationship. Computationalism tells us how the content of our minds is manipulated, including e. g. what sort of conclusions we draw from prior knowledge, without, however, telling us how the symbols that are being manipulated acquire their content.

This is the view that’s largely presupposed in the semantic view of computation (more or less explicitly). As Piccinini puts it:

That is, that a computational state is representational is taken as an essential—irreducible and not further analyzable—property of that state.

Piccinini is explicit about the consequent circularity of trying to ‘naturalize’ mental content in computational terms on the semantic view:

They’re exactly the same problem—how symbolic vehicles become associated with their semantic content.

The view of the computational theory Pylyshyn takes in that article is compatible with the view as outlined above (which, at least these days, seems to be explicitly regarded as the ‘received view’)—that the way in which the symbolic vehicles acquire their content is left open, but that their manipulation is done via computations, which requires that the syntactic properties mirror some structure of the semantic properties—as in the case of logical arguments.

I hope it’s clear now that this isn’t the case. If not, Shagrir uses a much simpler example to make the point, that of a ‘brown-cow’ neuron, which spikes if its input neurons (a ‘brown’-neuron and a ‘cow’-neuron) both spike:

Consequently, whether the cell is an AND- or OR-gate is decided by the representational content of its symbolic vehicles (voltages). Likewise, whether my box implements f or f’ is determined by the semantic content of lamp-lights and switch-states. If these vehicles should somehow objectively carry semantic content (‘essentially’), then there would be no question regarding which function is being computed.
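A minimal sketch of the same point in Python (the voltage levels and the two truth-value assignments are made up for illustration): one and the same physical input-output behaviour comes out as an AND-gate under one reading of the voltages, and as an OR-gate under the opposite reading.

[code]
HIGH, LOW = 5.0, 0.0  # hypothetical voltage levels

def cell(v1, v2):
    # The physical behaviour: output HIGH exactly when both inputs are HIGH.
    return HIGH if (v1 == HIGH and v2 == HIGH) else LOW

# Reading 1: HIGH means True, LOW means False -> the cell computes AND.
high_is_true = {HIGH: True, LOW: False}
# Reading 2: LOW means True, HIGH means False -> the same cell computes OR.
low_is_true = {HIGH: False, LOW: True}

for v1 in (HIGH, LOW):
    for v2 in (HIGH, LOW):
        out = cell(v1, v2)
        print(high_is_true[v1], high_is_true[v2], "->", high_is_true[out],   # AND truth table
              "|", low_is_true[v1], low_is_true[v2], "->", low_is_true[out]) # OR truth table
[/code]

Nothing in the physics privileges one of the two readings; which truth function the device computes is settled only by what the voltages are taken to represent.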

Nevermind.

HMHW, as I read about this topic, I see things that seem like simple errors, but I know these are smart people, so it’s probably not as simple as it appears. Can you shed some light?

1 - Computation vs Neural Networks
Given that ANNs can be computed on the computers we have today, why do people seem to make a distinction between computation and neural networks, as if ANNs are not considered to be computational?

2 - Fodor’s problems with computationalism (abduction/global reasoning)
His position must be based on an alternate definition of computation (or something) because, although I can follow his argument as presented (no wind belief requires context), it seems pretty easy to engineer a system that works around the problem using today’s computers.

Am I missing something?

HMHW, question related to the argument you’ve presented (not arguments related to qualia and experience, but related to the ability to assign meaning to symbols to perform computations):
Let’s pretend we ignore all the counter arguments and set out to build our general AI using today’s style of computers (but with more power).

If the argument you’ve presented is correct, in what way will our effort fail?
1 - Externally it may appear to work perfectly but internally it will be lacking something?
2 - Externally it will never be able to mimic the capabilities of a human?
3 - Externally it will be able to mimic the capabilities of a human, but the cost and energy requirements will be astronomical?

First of all, the second part of (b) is wrong. You yourself made the point that one can develop a comprehensive computational theory while deferring a full account of the semantics of mental representations. And if you’re going to claim you didn’t, I’ll explicitly make that point now.

Secondly, your attempt to imply that the first part of (b) constitutes an irresolvable dilemma is also wrong, as efforts to understand this problem have been part of the evolving history of cognitive science for decades. Fodor’s important book, Psychosemantics: The Problem of Meaning in the Philosophy of Mind, is a good example of that progress. To quote from a review of it, “… it not only defends our “commonsense” psychological practice of ascribing content or meaning to mental states (i.e., our assuming that they represent or are about objects other than themselves), but also provides the beginnings of a causal account of how such intentional states are even possible …”.

So there’s nothing to reconcile once one corrects your mistaken statement in the second part of (b), and indeed the Britannica article I cited acknowledges all three points without seeing any apparent contradiction. I would also note that all three points are orthodox in most formulations of CTM including Fodor’s Representational Theory of Mind, that point (c) is practically a verbatim quote from Fodor’s most recent book, and that Fodor himself was widely regarded as the most important philosopher of mind of the late 20th and early 21st centuries. So if you think those three points together are some kind of “gotcha”, you need to reexamine your premises.

Again, a significant point of correction is in order. There is certainly a contradiction between your view that CTM is “wrong” but can still be a useful model, and the explicit statement I cited earlier that it’s not just a model but intended to be a literal description of cognition.

But again, to avoid misunderstanding, no one claims that CTM alone is a complete description of everything about the mind. Its central premise is that most cognitive processes are computational in every meaningful sense of the word, just as defined by Turing and classically in computer science.

I actually think your analogy is a good one, but not in the way you intended. In order for the analogy to accurately reflect the kind of claim you’re making, there would have to have been a fundamental theoretical flaw in Newtonian theory observable right from the start, such that everyone knew the theory was wrong but used it anyway because they had nothing better. But in fact classical mechanics had wide and incontrovertible empirical support and as such can be regarded as not just a useful model but as empirically correct, and continues to be used to this day. This is so because the refinements introduced by theories of relativity and quantum mechanics are not relevant to common everyday experience, and because classical mechanics contains fundamental truths like Newton’s three laws of motion. And so it is with CTM, and I believe always will be, even as it gets refined.

Since we’re obviously never going to agree on any of this, and have each said about all that can usefully be said, I suggest we end this now. I do thank you for the large amounts of time you put into this, and I do understand your point; I just don’t see it as an obstacle to CTM. I think the Britannica article’s statement that “no remotely adequate proposal has yet been made” for bridging the gap between the syntax of computational symbols and the intentionality of mental representations might be a bit pessimistic; as the article itself notes, progress is being made on a number of different research fronts. A resolution to this problem would render moot your criticism, and that of Chalmers, Searle, Dreyfus, and other skeptics, who typically reject not only CTM but the whole notion of “real” computational intelligence, which I find to be a rather sadly pessimistic outlook. Fortunately we’ve already seen that Searle and Dreyfus and their ilk have been wrong about a lot of this.

Any response to this wolfpup?

If Fodor is correct then global reasoning/abduction is non-computational, and that is arguably the most important aspect of human intelligence.

Your point might have some merit against an argument that everything about the mind can be described computationally, but no one here has made that argument. As to Fodor’s views, the quotes I cited here, and others that I cited in previous conversations years ago, made it abundantly clear that not only did he not believe computational theory could provide a complete account of the mind, he didn’t believe it could provide (direct quote) “more than a fragment of a full and satisfactory cognitive psychology”, either. So the things you cite are in no way inconsistent with the argument I’m making about the central role of CTM in explaining cognition as part of an overall theory of mind.

So either Watson doesn’t and won’t ever perform global reasoning because Watson runs on a computer and global reasoning is non-computational, or Fodor was wrong about that point.

Which do you believe?

I believe that you should stop trying to create incoherent "gotcha"s that have no relevance to the discussion.

You think that Fodor’s position regarding global reasoning not being computational is not relevant to the discussion?

It seems like a pretty reasonable point and hardly a “gotcha” based on some trickery. Fodor flat out says that global reasoning is not computational. You have been posting in a way that makes it seem like you think global reasoning IS computational, but it’s not really clear whether you believe that or not.

None of these responses would be a big deal; I’m not sure why you are so reluctant to be pinned down:
1 - I think Fodor was right: global reasoning can’t be computational, for the reasons he states
2 - I think Fodor was right on other points, but I disagree that global reasoning can’t be computational; I think he was wrong on that point
3 - I’m not sure; I’ve never really read his arguments about why he thinks global reasoning can’t be computational
If you choose position #1, then I would explain why I think that position is wrong (I believe working around his global-context issue is a simple engineering problem).

If you choose position #2 then I would agree with you that Fodor was wrong about his global reasoning argument.

The issue here isn’t one of what, but one of how. ANNs are certainly computational, but they’re considered a different form of computation—the term most often applied is ‘sub-symbolic’, as opposed to the symbol-manipulating Turing machine kind of computation. Both approaches are known to be equivalent in power, but that doesn’t automatically imply that they’re equally well suited for giving rise to minds.

To illustrate, AI got its start with so-called ‘expert systems’—essentially, long lists of ‘if-then-else’ statements (this often comes under the heading ‘good, old-fashioned AI’, or GOFAI, these days). In principle, you can rewrite every program in such terms, at least approximately. Yet nobody thinks anymore that this is a good approach to AI.

Rather, it’s largely been supplanted by machine learning techniques, such as deep neural nets and the like. A sizable contingent of philosophers have followed suit, and argue that their superior performance in this area is grounds for adopting them as a better explanatory model of the mind. This is a break with the computational theory as advocated by Fodor, since that is bundled up with the representational theory of mind, whereas in neural networks, you don’t have any immediate notion of representation in that sense—i. e. no symbols being tokened under certain circumstances (hence ‘sub-symbolic’). Furthermore, the rules according to which a neural network does its thing generally are implicit, and might be impractical to state explicitly—why an ANN categorized a certain image as that of a ‘cat’ is often largely opaque. Finally, the computation is distributed throughout the layers of the network, rather than, as in classical computationalism, modularised.
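To give a rough feel for the contrast, here is a deliberately toy sketch in Python (the ‘cat’ features, the weights, and the bias are all invented): an explicit, GOFAI-style rule wears its reasons on its sleeve, while a single artificial neuron encodes its ‘rule’ implicitly in numbers that don’t individually stand for anything.

[code]
import math

# Symbolic / GOFAI-style: an explicit, human-readable rule over tokened symbols.
def is_cat_symbolic(features):
    return "whiskers" in features and "tail" in features and "four_legs" in features

# Sub-symbolic style: one 'neuron' with made-up weights; the rule is implicit
# in the numbers, and no individual weight is a symbol for 'whiskers'.
def is_cat_subsymbolic(x, weights=(1.7, 2.1, 0.9, -1.3), bias=-2.0):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-activation)) > 0.5     # sigmoid, thresholded at 0.5

print(is_cat_symbolic({"whiskers", "tail", "four_legs"}))  # True, and the reason can be read off the rule
print(is_cat_subsymbolic((0.9, 0.8, 1.0, 0.1)))            # True, but the 'why' is buried in the weights
[/code]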

This issue, incidentally, has prompted the move towards ‘explainable AI’—AI which has a model of its domain, and hence, can tell you that it’s identified the thing on the picture as a cat by pointing to the presence of whiskers, a tail, four legs, and the like. DARPA calls this the ‘third wave’ of AI (the video is well worth watching).

There are those who believe that human-style thinking will require both approaches to integrate—and I think a good case can be made for that, by pointing to the dual process theory in psychology: in brief, there are two cognitive systems at work in the human brain, often called simply System 1 and System 2. System 1 is the sort of automatic, implicit, fast and generally non-conscious ‘intuitive’ assessment of situations and stimuli, whereas System 2 is deliberate, conscious, step-by-step reasoning towards a conclusion. So effectively, System 1 seems to work a lot like a neural network, whereas System 2 has an explicit modeling component.

I’m less sure about this.

In brief, I think the issue is similar to the so-called frame problem: roughly, an AI may do well in an artificially limited environment, by simply having explicit rules about its elements (think, again, GOFAI). But the real world is not so limited: there is an infinite variety of things that an AI let loose might encounter. How to cope with this variety is the frame problem.

Now, a similar issue exists with the extent of background information a system capable of addressing even a relevant part of the world must have. In order for its decision-making process to remain tractable, it can only take into account a subset of that background knowledge at a given time; otherwise, the computation it needs to perform simply wouldn’t be feasible. This then leads to the necessity of modularisation, with separated (encapsulated) cognitive systems processing separate parts of the problem (in parallel).
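As a crude illustration of the tractability point (a sketch only; the ‘modules’ and their facts are invented), an encapsulated module only ever searches its own small partition of background knowledge, whereas an unrestricted, ‘global’ query has to sweep everything the system knows:

[code]
# A made-up knowledge base, partitioned into encapsulated 'modules'.
knowledge = {
    "vision":       {"edges signal object boundaries", "shadows indicate depth"},
    "language":     {"the subject precedes the verb in English", "'cat' names a feline"},
    "folk_physics": {"unsupported objects fall", "liquids take the shape of their container"},
    # ... imagine many thousands of facts per domain
}

def consult_module(module, query):
    # Encapsulation: only this module's facts are searched, so the cost scales
    # with one partition rather than with everything the system knows.
    return [fact for fact in knowledge[module] if query in fact]

def consult_globally(query):
    # The unrestricted alternative: every fact in every module is potentially relevant.
    return [fact for facts in knowledge.values() for fact in facts if query in fact]

print(consult_module("folk_physics", "fall"))
print(consult_globally("fall"))
[/code]

The integration worry below is then exactly the question of what happens when the relevant facts don’t all live in one partition.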

But the mind doesn’t really seem to work that way. Rather, it seems to be integrated, able to freely switch between content that ought to be associated with different modules. So how does this integration work? And even if we get that integration to work, where does creativity come from? How does the combination of domain-specific knowledge result in something new, which might not even cleanly map onto any specific domain (as in, for instance, coming up with entirely new concepts both in science and fiction)?

And then, finally, how are these new contents evaluated? If we have a certain set of domain-specific modules that each ‘care’ about their specific area, then, even if we somehow integrate their contents, and even if we are capable of producing something new from them, what module could rise to the task of evaluating whether we’ve come up with something appropriate? The new content doesn’t necessarily map onto the area of specialization of any of the given modules, so which one has the required capacities?

So it seems that we need modularization of the mind to make its capacities computationally tractable; but certain of its capacities seem ill-suited to a modular architecture. Fodor then claims that this is an issue that can’t be resolved (at any rate, within the sort of computationalism he defends). Interestingly, it’s been proposed that just the kind of hybrid systems I outlined above may be what’s needed to get around this problem.

I don’t really have any settled opinion on whether I consider this to be a real problem, or if so, if it’s fatal to (classical) computationalism.

My best guess is that if it’s able to effectively mimic the performance of a human, it won’t be doing so by means of computation. That doesn’t exclude that anything instantiating the right sort of computation (under some interpretation) also instantiates the right sort of mental properties, merely that those don’t reduce to the computation. That is, a hypothetical conscious robot will not be conscious via instantiating a certain computation, but via being a physical system with the right sort of structure.

Indeed, it was me who proposed that you could have a meaningful computational theory of cognition without the aim to account for semantics:

However, against that, you claimed the following:

Thus explicitly disavowing the notion that there could be a satisfying computational theory of cognition that doesn’t give an account of how semantics arises.

It’s this that threw me. You seem to simultaneously be claiming that computation inherently can give no account of semantics, that an account of semantics is absolutely essential to a satisfying theory of cognition, and yet, that computationalism yields a satisfying theory of cognition—which I still don’t see how to reconcile.

However, a causal account of intentionality isn’t a computational account—causality being a physical notion, not a computational one. On such a theory, it’s not, as you have variously claimed, the syntactical manipulation of symbols that provides them with meaning, but the additional notion of the tokening of these symbols being causally related to their semantic content. One might then appeal to such a theory—or any of a wide array of ‘naturalizations’ of mental content—in order to provide the meanings for representations that computation alone fails to issue.

It depends on what you mean by ‘literal description’. For instance, on the sort of view that computation is operation on representational vehicles, and that, indeed, computations are only individuated with respect to the semantic content of their representational vehicles (see Shagrir’s ‘Brown Cow’-example), without any sort of commitment to how they acquire their representational content, it might be apropos to call the brain ‘literally a computer’; but then, such a claim doesn’t entail something like the thesis of computational sufficiency above.

The brain would then be a computer, but it would be a different sort of computer than the one I’m now typing on. As Shagrir puts it, those computers ‘operate on symbols whose content is, indisputably, observer-dependent’. So we’d have two species of computation: one whose content is fixed (our brain), and one whose content is observer-dependent (every other computer).

I think that this is a terminologically inconvenient move. One can validly assume the position that what makes something a computer is merely how it handles the symbols it manipulates, in which case, one could argue that the brain does this handling in the same way as a desktop computer does, albeit using symbols that possess original, rather than derived, intentionality. But I think the issue here is really just one of terminology, and I think that the meaningful nature of mental symbols is enough of a difference to the interpretation-dependent symbols of ordinary computers to consider them different kinds.

You’ve claimed the exact opposite before:

According to this post, computationalists (all of them) agree on the thesis of computational sufficiency, which is exactly the thesis that computation suffices for mind.

And of course, the notion that the mind is wholly computational still is the basic issue of this thread, which is what I started out arguing against, and have continued to do.

Actually, that was a widespread view on Newton’s theory. The law of gravitation, in particular, postulated an action at a distance, without giving any account of how that action could be transmitted, something that greatly troubled Newton’s contemporaries (notably Leibniz and the Cartesians). Newton, in his General Scholium, essentially acknowledged this problem, but refused to ‘feign any hypotheses’—‘hypotheses non fingo’. Furthermore, he excluded such hypotheses on methodological grounds, claiming that they have no place in ‘experimental philosophy’.

So there is a central part of the theory, whose working isn’t explained by the theory itself, and which still didn’t lead to any problems with the use of the theory.

Of course, modern theories have since stepped in—General Relativity did away with the action at a distance, explaining this primitive notion of Newtonian mechanics in terms of more fundamental principles of how matter influences spacetime.

It depends. A computational solution would alleviate the issue I see, but a solution that essentially depends on non-computational concepts would merely affirm it.

Look, it’s just a fact that computational theories of mind which hold that mental processes are syntactic operations on mental representations are well established and widely accepted in numerous pertinent fields, while the nature of these mental representations continues to be a work in progress, and that’s the point I was making. The second part that you think is contradictory was just my interpretation of what I believed YOUR position to be, namely that any such computational theories are just models that use the computational paradigm as a metaphor, and that this couldn’t possibly be how cognition really works – and I subsequently cited numerous examples of your hostility to CTM.

No, I’ve cited Fodor numerous times (in this thread, but also long prior to this thread) as clearly stating that CTM is very far from a complete description of the mind, and in fact far from a complete description of cognitive psychology. Chalmers’ statement that “the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties” refers to a computational structure that we are far from adequately describing in any computational theory we have today, and it is in no way inconsistent with a statement about the limitations of present theories. That said, while there’s no contradiction there, I think Chalmers probably overstated the case; a more conservative statement would leave out the mention of mind and say that “the right kind of computational structure suffices for the possession of a wide variety of mental properties”.

Which is the source of my confusion. You’ve variously claimed that computers don’t deal in semantics[sup][1][/sup], that Watson deals in semantics[sup][2][/sup], and that computations deal in semantics once they become ‘complex’ enough[sup][3][/sup]. You’ve claimed that the brain literally is a computer[sup][4][/sup], and that there are aspects of it that aren’t computational[sup][5][/sup]. You’ve claimed that everybody agrees on the thesis of computational sufficiency[sup][6][/sup], and maintained that nobody ever held that view[sup][7][/sup], despite it being explicitly the topic of this thread.

Now, it might be that throughout all of this, you actually had a consistent thesis in mind. But if so, I don’t think I’m overreaching when I say that you didn’t do a great job of expressing it clearly. Consequently, I’m somewhat left grasping about at what, actually, it is that you think the relationship between computation and the mind is, in detail, how symbols acquire their semantics, and so on.


[1]

[2]

[3]

[4]

[5]

[6]

[7]

You’re not the only one that is confused. Yesterday I went through and scooped up those same quotes plus about 10 others and was going to post something similar today.