If that’s the case, then you should be able to point out its flaws. But instead, what you’re doing is basically assuming it’s wrong, somehow, because something might occur to make it wrong, somehow.
Your position essentially amounts to an unfalsifiable hope. No matter what I say, how many systems I show where it’s patently obvious that their computational interpretation isn’t fixed, you can always claim that just beyond that, stuff will start happening.
But without any positive argument to that end, you simply haven’t made any contentful claim at all. For any example of emergence, you can always point to the microscopic properties grounding the emergent properties. The hydrogen bonds leading to water’s fluidity. The rules any member of a flock follows to generate large-scale flocking behavior.
You offer nothing of the sort; you point to emergence and complexity as essentially magically bringing about what you need. But there’s no reason to believe it will, beyond faith.
On the other hand, I have given reason to believe that computation is not an objective notion—by example. It’s also straightforward to show how making the system more complex increases the underdetermination: add a switch, and the number of possible interpretations will grow as the number of possible ways to associate inputs with outputs, with no mitigation in sight.
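To make the combinatorics concrete, here’s a toy sketch (a made-up two-switch, two-lamp box, nothing more): every way of deciding which switch position and which lamp state counts as a ‘1’ yields another function the very same device can be taken to compute, and each added switch or lamp doubles the number of such readings.

```python
from itertools import product

# A made-up box: a fixed physical mapping from two switch positions to two
# lamp states (encoded as 0/1 for convenience; physically it's just up/down
# and lit/dark). Under one particular reading it looks like a half adder.
def box(s1, s2):
    return (s1 ^ s2, s1 & s2)

# A "reading" decides, for each switch and each lamp, which physical state
# counts as 1. Different readings turn the same box into different functions.
def function_under(flip_in, flip_out):
    table = []
    for s1, s2 in product((0, 1), repeat=2):
        a, b = s1 ^ flip_in[0], s2 ^ flip_in[1]        # what the inputs "mean"
        o1, o2 = box(s1, s2)                            # what physically happens
        table.append(((a, b), (o1 ^ flip_out[0], o2 ^ flip_out[1])))
    return tuple(sorted(table))

readings = list(product((0, 1), repeat=4))              # 2^(switches + lamps)
functions = {function_under(r[:2], r[2:]) for r in readings}
print(len(readings), "readings,", len(functions), "distinct functions")
# For this box, every one of the 16 readings yields a different truth table;
# add a third switch and the number of readings doubles again.
```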
I acknowledge that emergence is a powerful notion. But it’s not a one-size-fits-all magical problem solver. The properties at the bottom level determine those at the higher levels; anything else is just magical thinking. Any claim towards the emergence of mind must thus be bolstered with at least a candidate property that might give rise to the mental. Absent this, there simply is no challenge for me to meet, because you’ve just failed to make any contentful claim whatever.
First, a preface. I’ve had a number of discussions with you and enjoyed all of them, and I’ve learned a lot, particularly about quantum mechanics. And for that, thank you.
Now on this, I have pointed out its flaws, several times, and the second part of that quote is just flat-out wrong. You’re describing the phenomenon generally referred to as weak emergence – properties that may or may not be inferable from those of the constituent components. Now, while I may not agree with David Chalmers on various issues, at least he has his definitions right on strong emergence [PDF, emphasis mine]: “We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.”
I would then point out the example of an electronic calculator that is built from logic gates. It performs calculations, but it’s not hard to show that there is nothing about this device that is even remotely “intelligent” in any meaningful sense of the word. It does arithmetic. One could even posit multiple creative interpretations of its results other than arithmetic. It’s a lot like your box with lights and switches.
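To spell out what “it does arithmetic” amounts to at the gate level, here’s a rough sketch (my own toy code, not any particular calculator’s design): addition reduced to nothing but AND, OR, and XOR operations.

```python
# Binary addition built from nothing but gate-level operations -- the sense
# in which a calculator "does arithmetic" with nothing remotely intelligent
# going on inside. (A toy sketch, not any real calculator's circuitry.)
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                         # two XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))   # AND and OR gates
    return s, carry_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length little-endian bit lists using only gate operations."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

print(ripple_add([1, 1, 0], [1, 0, 1]))   # 3 + 5 -> [0, 0, 0, 1], i.e. 8
```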
But in a very true and fundamental sense, systems like Deep Blue and subsequent chess champions were built out of the same kinds of logical devices. So was Watson, the Jeopardy champion. And they are fundamentally and qualitatively different systems from any calculator. Would you like to posit an alternative interpretation of Watson’s computational results? You can’t, not because it’s such a complex system that it’s hard to do, but because it’s qualitatively a different kind of system entirely.
Strong emergence is a very contentious notion, and to be honest, having to appeal to it rather weakens your position. While it’s true that a strongly emergent property can’t even in principle be inferred from lower-level properties, that also means that knowledge of the lower-level properties can never yield sufficient reason for belief in strongly emergent features—so we’re back with faith.
On the whole, the main idea behind computationalism and other physicalist ideas is essentially a rejection of such notions. So until you can point to any example of strong emergence (and no computer ever will yield one, since the breaking down of their large-scale ‘intelligent’ behavior into elementary logical operations is kind of their point, and their very computational nature entails the deducibility of this behavior from the lower level), the default position ought to be a strong skepticism. I tend to agree with this:
From the paper of Chalmers you linked to: “Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete.”
I tend to want to keep the option of physicalism alive. If you call yourself a computationalist, I would have expected that so do you. So do you believe that Deep Blue falsifies physicalism?
Of course not. And Chalmers is not saying that strong emergence is a rejection of physicalism, he’s saying it undermines it as a complete description, meaning AIUI that there arises the possibility of some system that is identical to another system in all physical respects, yet differs from it in some observable functional/behavioral aspect. Not being a believer in magic or mysticism, I think this is nonsense. Each and every behavioral aspect, whether in a human or a machine, has a corresponding physical state, whether mental or computational.
That state, however, might not be found in any of its discrete components. It might only be found in some vague network of interconnections between distant neurons, or the data paths between software modules, or new data structures that the system itself developed, any of which might have been dynamically established (the latter perhaps in a manner unknown and unpredicted by the designers). Actually, a very simple example with Watson was the result of its extensive training exercises. In a real sense, no one fully understood what the hell was going on in there in terms of the detailed evolution of its database as it was being trained, but the system was gradually getting smarter.
I note BTW that Chalmers also cites consciousness as the only known example (in his view) of strong emergence. I’ll leave you guys to fight that out, since you objected to that idea so strongly!
Well, you can’t have your cake and eat it. Either the physical facts suffice to determine all the facts about a system: then there’s no strong emergence. Or they don’t: then physicalism is wrong.
Computers are essentially paradigm examples of weak emergence (so much so that it’s often defined in terms of what a computer simulation of a system includes). Witness Bedau’s definition of weak emergence in a system S (original emphasis):
All computers ever do is deduce higher-level facts (their behavior) from lower-level facts (their programming). You could print out Watson’s machine code, and everything it does follows from those instructions; and, while no human being is likely smart enough to perform the derivation, a sufficiently advanced intellect (think Laplace’s demon) would have no trouble at all predicting how Watson reacts in every situation. The very fact that Watson is a computer ensures that this is so, as it entails that there’s another computer capable of simulating Watson.
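To make the ‘printout’ point concrete, a toy sketch (an invented four-instruction machine, obviously nothing like Watson’s actual code): every single thing the program does follows by rote from the listing, and the few lines of the ‘simulator’ below are all the demon needs.

```python
# An invented miniature machine code: each instruction is a (name, argument)
# pair. The point is only that the behavior follows mechanically from the text.
PROGRAM = [
    ("LOAD", 3),            # put 3 in the accumulator
    ("ADD", 4),             # add 4 to the accumulator
    ("JUMP_IF_LT_10", 1),   # loop back to the ADD while the accumulator < 10
    ("PRINT", None),
]

def run(program):
    """Step through the listing; every output is derivable from it by rote."""
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP_IF_LT_10" and acc < 10:
            pc = arg
            continue
        elif op == "PRINT":
            print(acc)
        pc += 1

run(PROGRAM)   # prints 11: 3, then 7, then 11, which is no longer < 10
```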
So computationalism can never include strong emergence. That would mean believing both that a computer could simulate a brain, leading to conscious experience, and that a computer simulation of a brain would lack certain aspects of a real mind (the strongly emergent ones).
I have no qualms with Chalmers; he puts forward a consistent position by acknowledging the rejection of physicalism his belief in strong emergence entails. I still think he’s wrong, but there’s a substantial question as to whether he is.
I was thinking about functions other than consciousness performed by the brain and I’m wondering how they fit in with this argument against computation being the basis of consciousness.
For example, consider circadian rhythm computations:
Relative to its environment (sensory input, other functional components of the brain), it serves a specific purpose. Despite the fact that your box example applies when this function is viewed in isolation, when viewed relative to its surrounding environment, the specific purpose becomes realized.
Why is consciousness different than the circadian rhythm function?
Can’t we say that consciousness is just a particular transformation relative to its surrounding environment (the inputs into the consciousness function and the outputs from the function)?
The conversation has ranged pretty far, but suffice to say that my biggest problem is one of terms. Specifically you are using lots of terms that don’t mean the same thing to me that they appear to mean to you. It seems entirely possible that these are terms of art that are well understood within the discipline but that I, the layman, have never encountered. Of course the heavy use of such terms without defining or explaining them ensures that I’m staying a layman.
Take “external agency”. As far as I know I never said anything about an external agency. I’m saying that the cognitive calculation itself understands the symbols it uses, and that the way it processes those symbols is largely or wholly deterministic based on the state of the cognitive process itself at that moment in time.
And that fact means that you can copy a cognition even if you don’t have any idea how it works, simply by ensuring that your copy has the same cognitive calculation operating on comparable hardware with an identical copy of the ‘symbols’. You may not have any idea what those symbols you just copied over mean, but the copy of the cognitive process knows what to do with them, because it’s an identical copy of the original cognitive process that knows what to do with them.
It’s worth noting that it doesn’t matter how the cognition works - so long as you accurately copy the cognition process and memory state, the copy will ‘work’. You have to copy the whole cognition process and memory state, of course - if the person in question has a soul and you forget to copy the soul then your copy will fail to operate properly to the degree the soul was necessary for operation. But as long as you copy all the parts and memory correctly you’re all good.
You do seem to be very interested in talking about reverse-engineering the cognition and the difficulties in doing so, and while that’s certainly an interesting topic, this thread is specifically about copying cognition. And you don’t have to understand how something works to copy it, as long as you know enough to not fail to copy over something important.
If I understand you correctly, then this reply doesn’t work: what matters with respect to the environment is what I’ve earlier on called the stimulus-response behavior; but that’s just given by a line of causality connecting both. The computation, such as it is, doesn’t change it, but—again—is merely an interpretive gloss over and above the physical behavior.
So in my box example, stimuli come in the form of switch flips, and responses are given in the form of lights either lighting up or not. No matter what computation I take the system to perform, this stimulus-response behavior remains the same; so the interaction with the environment is blind towards anything else.
The same goes for things like the circadian rhythm. Nothing is, ultimately, being computed; it’s merely the physical evolution in response to causal factors that triggers, say, time-sensitive hormone release. Picture one of these old wind-up kitchen timers: they don’t compute the time to ring, the spring merely winds down after a certain time (response), provided it’s been wound up (stimulus).
Understood.
But, how do we know consciousness isn’t just a set of transformations that just happen to help the machine successfully respond to its environment in the same way that the circadian rhythm function does? It may feel to us like it’s something more, but maybe it’s not.
In that case, I believe I misunderstood you. I thought that the ‘brain’ you were referring to was that of an external observer, who interprets the lights (as brains typically don’t have lights). But I understand now that you intended to use ‘lights’ metaphorically, for whatever symbolic vehicles the brain itself carries (right?).
However, this reply simply won’t work, either. If it’s the computation that’s supposed to fix what the brain computes, then we have a chicken-and-egg problem: for a brain to give rise to a mind, it must, on computationalism, implement some computation M. I have now argued that whether a brain (or any physical system) implements a computation is not an objective property of that brain, and thus, subject to interpretation.
To that, you reply (if I understand you correctly) that so what, to the mind produced by the brain, what computation is being performed is perfectly definite, it’s just that an outside observer can’t tell which. But that’s circular: in order for the mind to fix the computation like that, it would first have to be the case that the brain, indeed, gives rise to that mind; but for that, it must implement computation M. So before it can be the case that the ‘cognitive calculation’ itself understands the symbols it uses, it must be the case that the brain performs that ‘cognitive calculation’ (i. e., implements M). So your reply appeals to the brain implementing M in order to argue that it implements M.
The thread, as I understand it, is about a specific kind of copying, namely, via download; this implies the instantiation of the mind within a computer. This, however, is not possible.
I have no qualms with an exact physical replica of my brain being conscious in the same way as I am, if that’s what you’re saying.
That’s a different claim from the one computationalism makes, though. I’m not entirely sure what kind of claim it is—what, exactly, do you mean by consciousness being ‘a set of transformations’? To me, consciousness is far more a certain way of being, namely, one where it is like something to be me, where I have qualitative subjective experience. What kind of transformations are you thinking of?
I disagree, but then I probably have a different notion of “emergence” than philosophers like Chalmers. A system can certainly have properties that are not present in any of its components yet are still embodied in its physicality. One simply posits that such properties arise from the arrangement of those components, meaning the interconnections between them, and indeed that’s the only place that real emergent properties can reside. This arrangement may be by design, or it may be a product of the system’s own self-configuration.
This is roughly what happens when logic gates are assembled into a digital computer. The business of being able to “infer” from component properties what the properties of the aggregate system will be is really rather nebulous and arbitrary, and consequently so is the distinction between weak and strong emergence, IMO. One might readily infer that since logic gates switch signals according to logical rules, it’s reasonable to expect that the resulting system would be an ace at doing binary arithmetic. But is it reasonable to infer on that same basis that those same logic gates would be the foundation for a system capable of playing grandmaster-level chess, long believed to be the exclusive domain of a high caliber of human intelligence? Or one that would beat Ken Jennings at Jeopardy? If so, Hubert Dreyfus and a whole following of like-minded skeptics sure as hell didn’t infer it!
The conclusion here, taking into account all that it implies, is so wrong in my view that it doesn’t seem sufficient to say that I disagree with it; I respectfully have to say that I’m just astounded that you’re saying it. Wrapped up in that statement – some of which I extrapolate from your earlier claims – appear to be the beliefs that (a) nothing (or at least nothing of cognitive significance) in the brain is computational, (b) a computer can never simulate a brain, and (c) a computer can never exhibit self-awareness (consciousness). All of which are wrong, in my view, though they are increasingly arguable. But the first of those, if taken seriously, is a flippant dismissal of the entirety of the computational theory of cognition, one that Fodor has described as “far the best theory of cognition that we’ve got; indeed, the only one we’ve got that’s worth the bother of a serious discussion”.
And the basis for your bizarre conclusion appears to be the belief that any computation requires an external agent to fix an interpretation – a belief that I maintain has already been disproved by the simple fact that all interpretations that are consistent with the computational results are exactly equivalent. The claim that the mind cannot be computational because of the “external agent” requirement is a futile attempt to parallel the homunculus argument as it’s sometimes applied to the theory of vision. Clearly, however, vision is actually a thing, so somewhere along the line the attempt to prove a fallacy has gone off the rails. Likewise with your claim about the computational aspects of cognition. It’s exactly the homunculus fallacy, and it’s a fallacy because computational results are intrinsically objective – that is to say, they are no more and no less intrinsically objective than the semantics attached to the symbols.
Your first paragraph here is also frustrating to read. It is, at best, just one step removed from the old saw that “computers can only do what they’re programmed to do”, which is often used to argue that computers can never be “really” intelligent like we are. That’s right, in a way: the reality is that computers can be a lot more intelligent than we are! The fact that in theory a sufficiently advanced intellect or another computer, given all the code and the data structures (the state information) in Watson, could in fact predict exactly what Watson would do in any situation is true, but it’s also irrelevant as a counterargument to emergence because it’s trivially true: all it says is that Watson is deterministic, and we already knew that.
But here’s the kicker: I would posit that exactly the same statement could be made in principle about the human brain. In any given situation and instant in time one could in theory predict exactly how someone will respond to a given stimulus. There’s merely a practical difficulty in extracting and interpreting all the pertinent state information. Unless you don’t believe that the brain is deterministic – but that would be an appeal to magic. This is aside from issues of random factors affecting synaptic potential, and changes therein due to changes in biochemistry, and all the other baggage of meat-based logic. But those are just issues of our brains being randomly imperfect. That a defective computer may be unpredictable is neither a benefit nor an argument against computational determinism.
A final point here, for the record, is that in retrospect the digression about strong emergence was a red herring. The kinds of things I was talking about are better described as synergy, which is synonymous with weak emergence, if one wants to bother with the distinction at all. The impressive thing about Watson is not particularly the integration of its various components – speech recognition, query decomposition, hypothesis generation, etc. – as these are all purpose-built components of a well-defined architecture. The impressive thing is how far removed the system is from the underlying technology: the machine instructions, and below that, the logic gates inside the processors. The massively parallel platform on which Watson runs is very far removed from a calculator, yet in principle it’s built from exactly the same kinds of components.
The principle here is, as I said earlier, that a sufficiently great increase in the quantitative nature of a system’s complexity leads to fundamental qualitative changes in the nature of the system. Among other things, dumb, predictable systems can become impressive intelligent problem-solvers. This, in my view, is the kind of emergence that accounts for most of the wondrous properties of the brain, and not some fundamentally different, mysterious processes.
Consciousness is very likely just another point on this continuum, but we’ve thrashed that one to death. Marvin Minsky used to say that consciousness is overrated – that what we think of as our awesome power of self-awareness is mostly illusory. Indeed, we obviously have almost no idea at all of what actually goes on inside our own heads. IMHO Chalmers’ physicalism arguments about it are just philosophical silliness. Where does consciousness reside? Nowhere. It’s just a rather trivial consequence of our ability to reason about the world.
Just want to correct rather a whopper of an omission there. That should say “One simply posits that such properties arise from the arrangement of those components, meaning the interconnections between them, or in the states in and among them, and indeed that’s the only place that real emergent properties can reside”.
Pretty much. My argument is that the brain can interpret its own symbols, and an exact copy of that brain can interpret those same symbols with the same effects.
Why does the symbol ‘3’ refer to the number three? It just does. It’s the symbol we’ve collectively chosen. I have to appeal to the system of Arabic numbering in order to argue what the system of Arabic numbering is, because I have no other cite for the definition of ‘3’ besides the definition of ‘3’. Yes, I know that there are historical reasons that the drawing of a butt means ‘three’, but I don’t know those historical reasons and that doesn’t prevent me from using 3 as a symbol.
A given mind, a given cognition, knows what it means by a given symbol/encoding. Let’s call that cognition M. M happens to function in such a way that red tennis shoes are encoded with a specific code which I’ll refer to as RTS. Other cognitions might interpret the RTS code differently, perhaps interpreting it to mean ‘real-time strategy game’, but that’s not a problem for cognition M - M knows that RTS means ‘red tennis shoes’. Why does RTS mean red tennis shoes? Because it does. That’s the symbol M uses for that.
There’s absolutely no part of this that I perceive to be a problem. If you are seeing a problem here, then I posit either you are talking about something different than this, or there’s a problem with your logic.
Of course it’s possible, theoretically speaking. To say otherwise is absurd, because theoretically speaking you can emulate a model of reality itself (or at least a local chunk of it) within the computer. Theoretically speaking you can theorize that you have all the memory and processing power you need to emulate, say, a 10’x10’x10’ room at the level of the physical behavior of the elementary physical particles. And that 10’x10’x10’ room could include within it a copy of you.
Which means that, theoretically speaking, you absolutely can create an exact physical replica of your brain within the simulation within the computer. So you bet your bunions that your cognition is digitizable.
No. It’s not the same thing. There is a continuity of brain wave activity. That is satisfactory for me. If you’re saying that the transfer could happen during “sleep” I’d balk.
So, does it bother you at all that you have to flat out contradict yourself in the span of three posts to try and save your position?
Well, it’s really not, though. I gave the definition (or at least, a very widely accepted definition) above—if you can discover it via simulation, it’s (at best) weakly emergent. The reason for this definition is that in such a case, you only need to apply rote operations of logical deduction to get at the ‘emergent’ properties; so there’s nothing new in that sense. The higher-level properties stand to the lower-level properties in a relation of strict logical implication, or, in other words, there’s nothing qualitatively new whatsoever.
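The usual illustration here is Conway’s Game of Life (a quick sketch of my own below, not anything lifted from Bedau): that the little ‘glider’ pattern crawls across the grid is a higher-level fact you get at simply by cranking the low-level update rule, i.e., by simulation, and by nothing stronger than that.

```python
from collections import Counter

# Conway's Game of Life: the "glider" is a higher-level pattern whose steady
# drift across the grid you obtain just by rote application of the low-level
# update rule -- i.e., by simulation. (A quick sketch of my own.)
def step(live):
    """One update of the Life rule; 'live' is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After four updates the glider reappears shifted one cell diagonally:
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
```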
This isn’t something I’ve made up, you know. But I don’t think heaping on more cites would help any, seeing how you’ve already not bothered to address the one I provided.
Dreyfus might have been wrong on some things, but even most proponents of the possibility of strong artificial intelligence today acknowledge that his criticisms against ‘good old fashioned AI’ (GOFAI) were largely on point. Hence, the move towards subsymbolic and connectionist approaches to replace expert systems and the like.
But that’s rather something of a tangent in this discussion. The basic point is that, of course it’s reasonable to think of chess playing as being just the same kind of thing as binary arithmetic. After all, that’s what a computer program for playing chess is: a reduction of chess playing to performing binary logical operations. Really complicated ones, but again, that’s the definition of a difference merely in quantity, not quality.
In contrast to you, however, I’m not merely saying it, stating my ideas as if they were just obvious even in the face of widespread disagreement in the published literature; rather, I provide arguments supporting them. Which are then summarily ignored as my interlocutors just flat out state their positions as if they were just obviously true.
I’m sure some luminary once described caloric as the best theory of work and heat we’ve got. But that doesn’t mean there’s anything to that notion.
I have addressed that issue, conclusively dispelling it: if you consider the computations I proposed to be equivalent, then computationalism just collapses to naive identity physicalism. Besides of course the sheer chutzpah of considering the manifestly different functions I’ve proposed, which are different on any formalization of computation ever proposed, and which are just quite obviously distinct sorts of operations, to be in any way, shape, or form, ‘the same’. The function f’ is not binary addition, but it, just as well as addition, is obviously an example of a computation. That I should have to point this out is profoundly disconcerting.
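For anyone who doesn’t want to dig up post #93, the shape of the example is roughly this (a toy reconstruction, not the exact device I described there): one fixed physical wiring, two equally good readings of the lamps, two manifestly different functions.

```python
# One fixed physical wiring from four switches to three lamps, given as a
# plain lookup: 'u'/'d' for switch positions, '*'/'o' for lit/dark lamps.
# (A toy reconstruction of the kind of device I mean, not post #93 verbatim.)
def lamps(switches):
    bits = [1 if s == 'u' else 0 for s in switches]
    total = (2 * bits[0] + bits[1]) + (2 * bits[2] + bits[3])
    return tuple('*' if (total >> k) & 1 else 'o' for k in (2, 1, 0))

# Reading 1: up = 1 and lit = 1 -- the device computes binary addition, f.
# Reading 2: up = 1 but lit = 0 -- the very same device computes f', which
# sends (a, b) to 7 - (a + b): not addition, yet just as much a computation.
def read(out, lit_means_one=True):
    return sum((1 if (lamp == '*') == lit_means_one else 0) << k
               for k, lamp in zip((2, 1, 0), out))

for a, b in [(1, 2), (3, 3)]:
    switches = ['u' if x else 'd' for x in (a >> 1, a & 1, b >> 1, b & 1)]
    out = lamps(switches)
    print(a, b, '->', read(out, True), 'under f,', read(out, False), "under f'")
# 1 2 -> 3 under f, 4 under f'
# 3 3 -> 6 under f, 1 under f'
```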
The homunculus argument succeeds in pointing out a flaw with certain simple representational theories of vision, which have consequently largely been discarded. Pointing out the occurrence of vicious infinite regresses is a common tool in philosophy, and all I’m doing is pointing out that it trivializes the computational theory of mind.
This is a bizarre statement. Quite clearly, the semantics of symbols is explicitly subjective. There is nothing about the word ‘dog’ that makes it in any sense objectively connect to four-legged furry animals. Likewise, there is nothing about a light that makes it intrinsically mean ‘1’ or ‘0’.
Sure. That’s not much of a kicker, though. After all, it’s just a restatement of the notion that there’s no strong emergence in the world.
It doesn’t really matter, but indeterminism doesn’t really entail ‘magic’. On many of its interpretations, quantum mechanics is intrinsically indeterministic; that doesn’t make it any less of a perfectly sensible physical theory.
And that’s not surprising in any way, because it’s doing qualitatively exactly the same kind of thing—just more of it.
Look, I get that the successes of modern computers look impressive. But for anything a computer does, there’s a precise story of how this behavior derives from the lower-level properties. I might not be able to tell the story of how Watson does what it does, but I know exactly what this story looks like—it looks exactly the same as for a pocket calculator, or my device above. Describing the functional components of a computer enables us to see exactly what computers are able to do. Turing did just that with his eponymous machines; ever since, we have known exactly how any computer does what it does. There’s no mystery there.
That’s the sort of story you’d have to tell to make your claim regarding the emergence of consciousness have any sort of plausibility. But instead, you’re doing the exact opposite: you try to use complexity to hide, not to elucidate, how consciousness works. You basically say, we can’t tell the full story, so we can’t tell any story at all, so who knows, anything might happen really, even consciousness. It’s anyone’s guess!
You keep claiming this, but any example you give establishes the exact opposite: that there is no fundamental qualitative difference between the components and the full system. The components exactly logically entail the properties of the whole; so they are fundamentally the same kind of thing.
Never mind that even if there were some sort of qualitative difference, this would still not make any headway at all against my argument (which, I’ll just point out once again for the fun of it, you still haven’t actually engaged with)—at best, the resulting argument would be something like: the simple example system doesn’t possess any fixed computational interpretation; however, qualitatively novel phenomena emerge once we just smoosh more of that together. So maybe some of these qualitatively novel phenomena are just gonna solve that problem in some way we don’t know.
That is, even if you were successful in arguing for qualitative novelty in large-scale computational systems, the resulting argument would at best be a fanciful hope.
And that’s already where things collapse. If the interpretation of these symbols is based on computation, then the symbols must already be interpreted beforehand, or otherwise there just won’t be any computation to interpret them.
Sure. But the question is, how does this choosing work? How does one physical vehicle come to be about, or refer to, something beyond itself? In philosophical terms, this is the question of intentionality—the mind’s other problem.
The problem is, rather, that M is tasked with interpreting the symbols that make the brain compute M. Consequently, the brain must be computing M before M can interpret the brain as computing M. Do you see that this is slightly problematic?
Well, that’s just massively question-begging. In point of fact, nothing ever just computes anything; systems are interpreted as computing something. I mean, I’ve by now gotten used to people just ignoring the actual arguments I make in favor of posturing and making unsubstantiated claims, but go back to the example I provided. There, two different computations are attributed, on equivalent justification, to one and the same physical system. Thus, what computation is performed isn’t an objective property of the system any more than the symbol ‘3’ denoting a certain number is a property of that symbol.
No. You can interpret certain systems as implementing a simulation of a brain. That doesn’t mean the system actually is one. You can interpret an orrery as a model of the solar system. That doesn’t mean it actually is a solar system.
All of this is just a massive case of confusing the map for the territory. What you’re saying is exactly equivalent, for example, to saying that there are certain symbols such that they have only one objectively correct meaning.
Are you asserting that it’s impossible for any computational system to assign symbols to new concepts as it encounters them? Because computer programs do that all the time. And when I say “all the time”, I mean that literally - there are databases and hashsets creating new records with associated programmatically assigned keys constantly.
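A trivial sketch of the sort of thing I mean (made-up code, but the pattern is everywhere): the program mints its own internal symbol for each new concept it runs into, and that symbol is perfectly stable for the program, whatever an outsider might guess it means.

```python
# A program that assigns its own keys to new concepts as it encounters them --
# the "programmatically assigned keys" pattern. (A made-up sketch, obviously.)
class SymbolTable:
    def __init__(self):
        self.by_concept = {}
        self.next_id = 0

    def symbol_for(self, concept):
        """Return the existing symbol for a concept, or mint a fresh one."""
        if concept not in self.by_concept:
            self.by_concept[concept] = f"SYM_{self.next_id}"
            self.next_id += 1
        return self.by_concept[concept]

table = SymbolTable()
print(table.symbol_for("red tennis shoes"))    # SYM_0 -- the program's own 'RTS'
print(table.symbol_for("real-time strategy"))  # SYM_1 -- no confusion internally
print(table.symbol_for("red tennis shoes"))    # SYM_0 again -- stable for the program
```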
Let me respond to all this in a very simple way - one of two things is happening here. You are either:
positing that minds can’t possibly work and are entirely fictional, which I believe can be dismissed as absurd based on observations,
or
positing that we can’t reverse-engineer brains’ operation through external observation, which is irrelevant and off topic because we don’t have to know how they work to copy them if we copy them at the physical level to the smallest excruciating detail.
Because of your persistent use of undefined technical terms I’m not quite sure about which of these you are doing, but either way I don’t care - you can’t prove brains aren’t copyable either way.
We’re not talking about orreries and maps, and you know it - we’re talking about functionally exact copies. At a functional level the digital copy would operate exactly the same way the original physical person did. So forget all the crappy analogies, please.
From the perspective of the copy, the duplication is exact, down to the smallest detail. Every neuron and chemical and electron is in place, acting exactly like their equivalents in the physical world. It’s essentially the ‘prosthetic neuron replacement’ scenario from earlier in the thread - the prosthetic neurons (and everything else) are simulated entities, but they replicate the functionality of the things they replace perfectly.
Simulations seek to replicate the behavior and outcomes of what they simulate. The more accurate the simulation, the more accurate the outcomes are to the real thing. Here we theoretically posit a perfect simulation of the physical reality that the brain (and the body that houses it) exist in. Basically the Matrix, except the brain cells are inside the simulation too. And presuming a materialist real universe, there is no coherent and informed argument that the simulation couldn’t be accurate to the finest detail - including the behavior of the simulated people, driven by their simulated brains and the minds contained within them.
No. And, frankly, your continued misconstrual of my argument is somewhat disconcerting to me. I’ve given an explicit example of what I’m arguing in post #93: there, I have shown how one and the same physical system can be considered, on equal grounds, to perform distinct computations (binary addition—f—and the function f’).
Hence, what computation a physical system performs—whether it performs any computation at all—isn’t an objective matter of fact about the physical system. You can’t say the device I proposed performs binary addition; you can only say that you can interpret it as such.
But then, a dismissal of the computational theory of mind follows immediately. Computationalism holds that brains give rise to minds via performing the right sort of computation. But if it’s the case that brains don’t perform any computation at all, unless they are interpreted in the right way, then that whole thing collapses.
So whether or not a computational system assigns symbols to concepts, or what have you, is entirely beside the point. The point is that there’s no such thing as a computational system absent an interpretation of a physical system as a computational system.
The analogy is exact in the one respect that matters: neither an orrery nor a map is intrinsically about what we use it to model, but needs to be interpreted as such. The same is true with computation.
The rest of your post unfortunately doesn’t really have any connection to anything I’ve written so far, so I won’t reply to it for now, for fear of introducing yet more confusion in the attempt to explain myself. Really, I implore you, if any of this is still unclear, go back to my example. If you don’t understand something, ask. But don’t just go on attacking points I never made.
Well, as I put it earlier:
Computationalism then is the idea that the way the brain gives rise to a mind is by implementing the right sort of computation.