Downloading Your Consciousness Just Before Death.

Have you read that Stanford Encyc of Philosophy page on connecting computations to physical systems?

It might help you to understand the argument that HMHW has presented.

I wrote an extensive explanation in #378, giving an example of how this happens in Watson, the objection that someone like John Searle might have to this example, and my refutation of that objection. Did you not understand any of it? I kind of feel like I wasted my time.

You chose an example that was, in fact, trivial. One would make the equally trivial inference that the semantics here are that the bytes represent numbers. That’s all. There is nothing more to infer. If there is actually a large program doing something complex of which this is a snippet, one would have to see the program or know what it was doing because, as I implied in that definition, the semantics are not properties of the symbols themselves, but arise from the operations performed on them, such as producing a digital image or creating sound. Again, if this is not clear, the Watson example should clarify it.
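To put that concretely (a toy sketch of my own, not meant to be anyone's actual program): the very same two bytes take on entirely different semantics depending on nothing but the operations subsequently applied to them.

[code]
# The same two raw bytes...
raw = bytes([72, 105])

# ...treated as an unsigned 16-bit integer (numerical semantics):
as_number = int.from_bytes(raw, byteorder="big")   # 18537

# ...treated as ASCII text (alphanumeric semantics):
as_text = raw.decode("ascii")                      # "Hi"

# ...treated as two grayscale pixel values (visual semantics):
as_pixels = list(raw)                              # [72, 105]

print(as_number, as_text, as_pixels)
[/code]

Nothing about the bytes themselves picks out one of these readings; the downstream operations do.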

Do you mean this 16000-word page, which only has variants of the word “interpret” in it 16 times and which ends with “In conclusion, the candidate hypercomputers proposed so far have not been shown to be physically constructible and reliable. For the time being, Modest Physical CTT remains plausible. It may well be that for all practical purposes, any function that is physically computable is Turing-computable.”?

Nope, I haven’t. The thing is, though, that I don’t need to read it to understand HMHW’s argument. His argument is quite self-contained, and to be entirely fair he’s actually managed to explain it pretty clearly, once I got over my inherent disbelief that anybody could argue anything so obviously wrong. For example, he recently said:

He’s stating quite clearly that under his personal definition of the term “computationalism” (which I seriously doubt matches anything on that page), the term is somehow presumed to require computational systems to produce outputs that cannot be variably interpreted. Nothing produces outputs that cannot be variably interpreted, so by his dumbass definition of the term, computational systems literally do not exist. It’s not a matter of them being unable to sustain minds; it’s a matter of them not existing at all. (Which, admittedly, means they can’t sustain minds.)

I have no doubt that he cobbled together his argument based on reading philosophers. But his argument, as presented, is crazy, and most certainly fallacious. And it also, as presented, includes presumptions about the requirements needed to produce minds that are so restrictive that human brains are most certainly excluded too - after all, nothing can produce output that can’t be variably interpreted, so if that’s the criterion that excludes so-called computational systems, it excludes brains too. Thus all the special pleading and confused attempts to differentiate between the physical matter in colons and the physical matter in computing boxes and such.

His argument is only confusing in that it’s confused. It doesn’t have subtleties that I’m missing; it’s just broken.

I chose that very simple example for a very specific reason, because it is easy to be concrete.

You chose not to answer that question and instead provided a description of a different system whose internals we don’t have access to, which makes it pretty tough to explicitly show how syntactic operations on symbols give rise to meaning.

Can we use my simple concrete example and walk through it?

If the answer is “no because it requires more than 2 bytes to have meaning” then great, at least we have a concrete answer and can then move up to the minimal system in which syntactic operations do give rise to meaning.

Yes, on purpose.

Questions:
Is there a lower limit to the complexity of a program below which you can’t assign meaning based on syntactic operations?

Does the intended purpose of the program play any part in the meaning? In other words, if my program was intended to be tracking the location of two virtual vehicles in an extremely simple simulation, is that meaning at all possible to infer from the syntactic operations on the symbols?

Are you able to provide an example of a simple program in which you are able to assign meaning based on the syntactic operations performed on the symbols?

It’s actually pretty interesting and informative. A lot of pretty smart people have been working on these problems for a long time with no good answer to many of the questions.

Here’s part of one page that gets at these very same issues:
“One of the most difficult questions for proponents of CTM is how to determine whether a given physical system is an implementation of a formal computation. Note that computer science does not offer any theory of implementation, and the intuitive view that one can decide whether a system implements a computation by finding a one-to-one correspondence between physical states and the states of a computation may lead to serious problems.”

So you’re saying that begbert2, internet poster, has it all figured out and all of the smart philosophers and academics over the last thousand years just weren’t up to the challenge?

Is it even the slightest bit possible that these smart people have raised some questions that are really pretty tough to answer?

Well, the thing is that if we are talking about being humble, then trying to employ a philosophical dictionary to deal with scientific issues is underwhelming.

I chose to answer the question as precisely and completely as I could with the example I gave. It’s hard to help you further if you didn’t understand it.

I don’t know what kind of gotcha you’re trying to create here, but it’s not working.

The answers are no, irrelevant, and yes, respectively.

On the first point, I could equally ask, is there a lower limit to the complexity of a program that can be considered intelligent? The answer is the same: no, but extremely simple programs are going to exhibit trivial semantics and trivial levels of computational skill that most people would (arbitrarily) not regard as intelligence at all.

On the second point, maybe, provided that said operations are sufficiently well described, but the question has never been about being able to backwards-infer the semantics, but rather the foundational premise that the semantics do actually exist (again, the Watson example).

On the third point, sure. About the simplest program I can imagine is one that adds two numbers together. That program is applying numerical semantics to the two representations, as opposed, say, to alphanumeric, visual, or musical semantics. Which is a pretty trivial answer, but then, it’s a pretty trivial program.
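If it helps to see it spelled out, here’s roughly what such a program looks like when reduced to pure symbol shuffling (just a sketch of my own; the bit-twiddling version makes the point more visibly than a bare “+” would):

[code]
# A minimal 'adder' over two bit patterns. The xor/and/shift steps are
# pure syntactic operations on symbols; calling the result a 'sum' is the
# numerical semantics those operations jointly impose on the patterns.
def add_bits(a: int, b: int) -> int:
    while b:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return a

print(add_bits(0b0011, 0b0101))   # 0b1000, i.e. 3 + 5 = 8
[/code]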

I haven’t been closely following the begbert2 vs HMHW altercations due to the length of the posts. But what I can say is that I personally know a number of “these smart people”, people who have been named and cited in the Stanford Encyclopedia of Philosophy pieces and throughout the published literature on computation and cognition, and they would be utterly contemptuous of HMHW’s view that CTM is impossible which he justifies by a frivolous interpretation of computationalism that he believes leads to a silly homunculus fallacy. I respect his intelligence and general knowledge, not to mention his patience in debate, but in this one area he is off in fringe-lunatic la-la land, apparently attributing human cognition to some kind of spiritual magic.

Because I use the box to uniquely compute f. So, once more, seeing how you once again overlooked the salient part of the argument: I can uniquely use the box to compute f. If what the system composed of box and me is doing to do so is entirely computational, then there must be a ‘box’ capable of computing only f. Else, if there is something non-computational that singles out the computation of f, computationalism is wrong. Hence, if computationalism is right, f (or any given computation) must be capable of being implemented uniquely on some physical system.

Your table is just as much a mathematical function as the ones I’ve presented. I can’t fathom why you would think otherwise. Associating your table to the system requires exactly the same act of interpretation as associating either of my functions to it does. And as I have shown, this association isn’t unique.

Of course you can, just as you can wake up one morning and find that all your books now say something different, given sufficient alteration of your language processing facilities. It’s the same thing.

And again, there simply are no 1s and 0s in a physical system.

Sounds interesting, could you provide a link?

You haven’t really addressed the issue about the different tables associated to the system (my M2 - M4). These are associated to the system in exactly the same way as yours is, but represent different computable functions; hence, under these different associations, the box performs different computations. Since you steadfastly refuse to give your reasoning, I can only assume it’s one of the following positions:

[ol]
[li]Your table is the ‘right’ one, these others are just erroneous.[/li]
[li]All of these tables apply, but they are—despite being different computable functions—the ‘same’ computation.[/li]
[li]The tables are just different ways of representing the behavior of the box, which is the ‘computation’.[/li]
[/ol]

Now, 1. is simply question-begging, and I don’t believe you hold to it. 2. flies in the face of the theory of computation. 3. trivializes computationalism, and has it collapse to logical behaviorism, as it identifies computation with the mere behavior of the box.

So I can’t imagine any position on which the continued reference to your table actually does something to save computationalism; yet, you continue to act as if it did. I would very much like to see some clarification on this subject.

The SEP article is also explicit in noting that one of the main challenges the semantic account faces is precisely how the symbols do get their content—in other words, the question we’re discussing here at the moment.

The notion of symbol in the semantic view of computation is antithetical to your notion that the symbols themselves bear ‘no relationship’ to the thing they represent; on the contrary, on the semantic view of computation, the fact that these symbols are representational is what makes them symbols, and what makes manipulating them a computation.

You’re once again trying to have your cake and eat it. You recognize that symbols simply having meaning is problematic (at least, to somebody with naturalistic commitments), and so, you propose that the symbols don’t have any meaning; but, you also recognize that a mere manipulation of syntactical elements won’t actually get you the notion of computation you require, so that puts you in a double bind. And what do you do? Once more, appeal to the magic of emergence: somehow, if you pile up enough syntactic complexity, semantics are just going to spark up.

I’ve given a general proof that no such emergence of semantics can take place. If semantics comes around, it should do so already in my box example. For convenience, here’s the argument again:

So, more computation doesn’t help. If there is any semantic competence present, it needs to enter on the lowest rung of the ladder, as any further step fails to add it (should it not be already present). Hence, in order to make an argument worth considering, you need to be able to make it at the level of the transparently simple systems I have been discussing, and not merely shroud it behind layers of obscuring complexity.

And of course, your Watson example doesn’t establish anything: it would be possible, in theory (although obviously not in practice), to realize Watson as nothing but a gigantic lookup table that just fetches each reply upon being given a prompt. That is, a system equaling Watson’s performance, but having no semantic competence at all, is conceivable; hence, Watson’s performance does not suffice to conclude that Watson has any semantic competence.
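To make the thought vivid, here’s a deliberately crude sketch (the prompt/reply pairs are made up for illustration; nothing here is meant to resemble Watson’s actual implementation):

[code]
# A 'Watson' that is nothing but a lookup table: for every prompt it could
# ever receive, the canned reply is simply fetched. Over any finite set of
# exchanges its behavior could match the real thing, yet nothing in it
# plausibly understands anything.
LOOKUP = {
    "This 'Father of Our Country' didn't really chop down a cherry tree.":
        "Who is George Washington?",
    "It's the largest planet in the Solar System.":
        "What is Jupiter?",
    # ...one entry per possible prompt...
}

def watson_as_table(prompt: str) -> str:
    return LOOKUP.get(prompt, "What is: I don't know?")

print(watson_as_table("It's the largest planet in the Solar System."))
[/code]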

As always, I am left genuinely mystified by what sort of thought process could lead you to believing that. It’s perfectly simple for two observers to have different interpretations of the same system, since, after all, such an interpretation isn’t an objective property of the system, and how a system is interpreted has no consequences for the system itself. (I have even pointed out that two observers can use the same system to perform two different computations at the same time.)

Sentience and self-awareness are two very different things, of course. But that’s a minor point.

How do you still not get that? The idea that a system could interpret itself into interpreting itself is flatly inconsistent. Let’s go through the argument once more.

[ol]
[li]In order for a physical system P to implement a computation C, it needs to be interpreted by means of an interpretation I.[/li]
[li]The interpretation I is thus prior to the computation C: without interpretation, no computation.[/li]
[li]Let now C be a computation which, at least as a part of its execution, realizes the interpretation I.[/li]
[li]Thus, P must implement C in order to realize I.[/li]
[li]C must thus be prior to I.[/li]
[li]Consequently, in order for P to interpret itself, C must be prior to I, and I must be prior to C—which is a contradiction.[/li]
[/ol]

The only way out of this is to deny premise 1. You can’t both hold that computation requires interpretation, and that a system could interpret itself to give rise to that computation. Premise 1 is, of course, what my argument establishes. So, the only way out of this is to attack my argument directly—say, by showing (rather than claiming) how a computer could interpret another, or how a box can be constructed that computes f exclusively.

Genetic fallacy: attacking the source of the argument to avoid having to deal with the argument itself.

OK. This is exactly the example I gave earlier (my function f), where I showed that its semantics aren’t fixed by the symbol manipulation, but can be freely interpreted differently. Combined with the fact that more computation can’t help, this then shows that computation is insufficient for creating minds.

You have a seriously skewed view of the field. Even the SEP article acknowledges that how the representations in semantic computation acquire their meaning is a major open problem for the approach; and, as the article on the CTM notes:

So no, it’s not just little old me that thinks this is an important open question, and to claim so is just to seriously misrepresent the state of the field.

It’s funny how people only ever think you’re saying something intelligent if what you’re saying is something they already believe anyway (or at least, doesn’t go against what they believe).

I use ‘computationalism’ in the same sense as it’s used on the SEP page—the mind is the software of the brain. That is, a certain physical system (a brain) carries out a computation (implements a computational—partial recursive—function), as a result of which, it generates a mind.

As that page notes, Hinckfuss’s-pail-type arguments are proposed as counterexamples to this notion. My argument is of exactly that type.

It’s too simplistic to state that my argument simply interprets outputs. Inputs, outputs, and anything in between needs to be properly interpreted—that is, every symbolic vehicle used by the system needs to be associated with its proper content. Fixing the interpretation of inputs and outputs fixes the interpretation of intermediary symbols: fixing the meaning of ‘light on’ to be ‘1’ fixes the particular voltage signal transmitted by closing the switch that lights it up to likewise mean ‘1’. That this then changes the computation is simply due to the fact that a computation—again, a partial recursive function—is nothing but a mapping from inputs to outputs.

Again, you’re missing the very simple fact that, since I can, after all, uniquely compute f using the box, then, if computationalism were right, there must be a ‘box’ that uniquely computes f. Since if computationalism is right, then the combined system of myself and the box must perform a computation; that computation computes f uniquely; hence, it must be possible to uniquely implement f. Otherwise, computationalism is straightforwardly false.

And of course, your general contention that everything would be interpretation-relative on such a notion is simply wrong: whether a system digests isn’t, for example. Again, adding some interpretation to the output of that digestion is of course possible, but doesn’t change anything about the digestion itself, as it’s not the business of digestion how it is interpreted, while it’s exactly the business of computation how the symbols it employs are interpreted.

We seem to both be treading the same ground repeatedly, so I want to focus here just on what I believe are the key issues of disagreement. If you feel that I’m ignoring something salient, feel free to point it out.

No, this is entirely wrong on several levels. First, our tables are fundamentally different. Let me try to more clearly explain why. My table specifies the necessary mapping to transform an abstract machine into a physical one, and nothing more. It specifies assignments of what in computational theory are sometimes called internal semantics, which is a foundational distinction because internal semantics are primitives of a computing machine’s physical architecture. Switch positions are thus primitives of a physical architecture in the same way that voltage levels relate to 1s and 0s. Internal semantics exist entirely within the boundaries of the machine, in distinct contrast to external semantics which relate to representations of things in the outside world, and are commonly just called “semantics”. It’s only external semantics that may require an observer and are therefore subject to arbitrary interpretations, and your function tables are clearly in that category.
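If it helps, here’s a rough sketch of the distinction in code (my own schematic illustration, with an arbitrary threshold and arbitrary readings, not anyone’s formal definition):

[code]
# Internal semantics: the fixed mapping from physical states to the
# machine's primitive symbols. It's part of the machine's specification
# and lives entirely inside the machine's boundary.
def voltage_to_bit(volts: float) -> int:
    return 1 if volts > 2.5 else 0        # threshold picked for illustration

# External semantics: what a bit pattern is taken to represent out in the
# world. Nothing inside the machine fixes this; it's where observer-relative
# interpretation comes in.
def bits_as_temperature(bits: int) -> float:
    return bits * 0.5 - 40.0              # one arbitrary reading

def bits_as_character(bits: int) -> str:
    return chr(bits & 0x7F)               # an entirely different reading

print(voltage_to_bit(3.3))                # 1, fixed by the machine's own spec
[/code]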

Second, your view, as previously noted, results not just in arbitrary interpretations but in fact in an infinite number of equally valid descriptions of the computation. This leads to absurdities like the pancomputationalism of John Searle’s wall, where he claims that even a wall implements computations because in this observer-relative perspective some pattern of molecular movements can be observed to be isomorphic with some imagined computation. But actual scientists – whether computer scientists or cognitive scientists, or engineers or programmers – for whom computation is everyday empirical work, would regard this as incoherent nonsense. Searle, of course, is also the author of the Chinese Room argument, which purports to show that symbol manipulation cannot lead to “real” intelligence, which is a Dreyfus-like level of misconception.

I’ve already addressed this and it’s hard to know how I can make it any clearer. You’re muddling two completely different concepts here. Concept #1 is that my state table is a unique and complete specification of the computation in question. Concept #2 is that we require a specification of the physical architecture primitives – the internal semantics as noted above – in order to build a physical machine. If your alternate table is different from mine, then it’s either wrong or it isomorphically maps to correspondingly different internal semantics – that is, it’s exactly the same table applied to a differently specified machine, which when built will perform exactly the same computation. It’s analogous to the same abstract machine being specified in different languages.

And as noted above, your “proof” is wrong. Moreover, one can rephrase this into a disclaimer of artificial intelligence: if computation leads to intelligent behavior like playing grandmaster level chess or winning at Jeopardy, and the box is doing computation, then intelligence should be visible in the box’s switches and lights. Where is it? Clearly not there; ergo, artificial intelligence is impossible!

I don’t care whether one takes the view that the semantics (or intelligence) are there in some elemental form that’s too primitive to recognize, or that they emerge (or at least become recognizable to us) in complex systems. It amounts to the same thing.

Someone has a seriously skewed view of the field, but it’s not me, nor the people working in it that I know. I don’t deny that there are contentious issues and that CTM has its detractors, but it remains the main working hypothesis of cognitive science today, and as Fodor has said, it’s hard to imagine cognitive science without it. And as I cited earlier:
The paradigm … of machine computation, and its influence upon the study of mind has already been both deep and far-reaching. A significant number of philosophers, psychologists, linguists, neuroscientists, and other professionals engaged in the study of cognition now proceed upon the assumption that cognitive processes are in some sense computational processes; and those philosophers, psychologists, and other researchers who do not proceed upon this assumption nonetheless acknowledge that computational theories are now in the mainstream of their disciplines.
https://publishing.cdlib.org/ucpressebooks/view?docId=ft509nb368&chunk.id=d0e360&toc.depth=1&toc.id=d0e360&brand=ucpress
So the “seriously skewed” view here is that you alone have figured out that an entire scientific discipline is based on a complete fallacy, and you’ve deduced this from a bit of philosophical sophistry while sitting in your armchair. Isn’t it a bit more reasonable to simply say that there are competing paradigms (although most of them seem to come from philosophers like Dreyfus and Searle rather than empirical practitioners), but CTM and its variants nevertheless remain mainstream and foundational to the field?

Nope, that was actually pointing out to another poster that most researchers are humble about how hard this is, and that in reality the opposite is going on here, with philosophical ponderings being treated as definitive. As if that would make them as good as what is going on in more recent research.

I have been busy, but I’m still looking at the papers (the non-philosophical ones you claimed support what you say in the thread), and so far they do not look as definitive as you want them to be, or contradictory to what I said before. (I did notice that researchers have cited both your linked papers, or those researchers, and Hawkins as support for what they are doing, pointing to less reliance on philosophical ponderings.) I almost gave up when noticing that your first cite was actually an opinion piece (not the philosophy dictionary).

OK, then let me start with doing just that—yet again—without having any real hope that you will now stoop to address it. The salient bit you missed—yet again—despite my repeated attempts at getting you, very explicitly, to address it, is the following:

I can use my box to implement computation f uniquely. In doing so, I either do something computable, or not. If I do something computable, then a system should be capable of performing just that computation—that is, implement f uniquely. If I’m doing something else, computationalism is dead. So either, it’s possible to implement f uniquely, or computationalism is dead. Since it’s not possible to implement f uniquely, computationalism is dead.

If you reply to just one thing in this post, then please make it the above.

As does my function f, of course, the machine being an adder of two-bit binary numbers.

Any computation whatsoever that a system could implement is in that category. And of course, what you’re saying here doesn’t matter one whit: if you intend for this distinction to hold any water, you’d need to show how my function f can be implemented in this way, so as to be the unique computation associated to a system.

Just because you don’t like a conclusion, doesn’t make it absurd.

I’m confused, are actual scientists also required to be true Scotsmen, or is that optional?

And of course, because Searle held one (as you claim) wrong opinion, all of his opinions can be discarded. And as noted, and repeatedly highlighted by me, many of Dreyfus’ ‘misconceptions’ are, by now, established dogma in the field of AI.

You keep sidestepping the point, though. If your table is the unique and complete specification, then what are the other ones (M2 - M4) I proposed? They’re connected to the system in exactly the same way as your table is; yet, they’re clearly distinct. You could hold that I’ve merely used different names for what should actually just be ‘switch up’ and ‘lamp on’; but in that case, your ‘computation’ is just the behavior of the system, leading again to the collapse of computationalism.

The simple fact is, none of this matters: if I present you with that system, you could evaluate switch flips and lamp lights using your table, which would realize a function from binary values to binary values; all the while I can evaluate it using one of mine, which will yield a different function from binary values to binary values. In no way does the system just intrinsically relate to these binary values; the association will always come down to an arbitrary choice.

And again, if this sort of thing were possible, then it would be possible to implement f in the same way; thus proving that nothing is gained by talking about your tables.

This is simply false. Intelligence can be described in terms of behavior; the notion that consciousness can be likewise described—behaviorism—is basically an abandoned idea (for good reasons). The behavior of intelligence—lighting up certain lamps if certain switches are flipped—is already present in my box, if only, of course, in a very limited sense. This is different only in scale from making a certain chess move if a given chess configuration is presented.

Look, this is explicitly contradicted in the Stanford Encyclopedia article. As it says there:

Note the use of the past tense.

So no, this isn’t just me having an ax to grind against computationalism. It’s simply the widespread perception of the field from within.

I’ve nowhere claimed to be the only one to have figured this out. I’ve been presenting a widespread doubt about the computational theory of mind, which continues to be a very active area of research. Your own cite from the IEP calls it ‘one of the most difficult questions for proponents of CTM’. Pretending I’m just some fringe lunatic ranting away on the internet about some inconsequential issue won’t make that go away.

Interesting that you notice the use of the past tense when in reality it was just pressure to get the orthodox to admit that this is harder than they thought it was. It does not follow that doubts about CTM mean that one can dismiss the whole thing.

And before the usual complaint, I will mention that that was cited only to point out that CTM has not been dismissed where it counts, in academia and actual research, as a machine learning graduate student going for a doctorate points out there.

Interesting company, some good papers there to browse as well. Carry on gents/gals

I’m not trying to create any kind of gotcha.

I’m trying to understand how the syntactical operations that are performed on symbols establish the relationship to meaningful things.

When I write programs, I know that I am modeling them after something in my head I am trying to accomplish, and I create data structures and transformations that support my goals, but it seems that the relationship to “meaningful things” is inside my head not inside the program.

So I am trying to walk through simple examples where we have complete access to the symbols and the transformations (unlike Watson) and I figured you could show how the syntactic operations performed on the symbols establish the relationship to meaningful things.

I just see numbers so it’s not clear to me how to get from A to B.

Ok, so let’s continue walking through my car simulation example. Right now it’s about as trivial as you can get, but we can expand it as we need to better illustrate how this works.

Questions:
You say “meaningful things” but you also say that the intent of the program isn’t really a factor, so for my car simulation program the “meaningful things” are not “virtual cars” or “simulated cars”; they must be something else.

Can you help me understand what are the “meaningful things” that my simple car simulation symbols+syntactic operations establish?

If we added more complexity to my program, would it be easier to show what the “meaningful things” are and how they are established?

As my car simulation grows in complexity, will the meaningful things ever become “virtual cars”?

While I cited their research before, I would say that I almost reported that as spam. You have to be careful when not making relevant quotes; at least a link with a specific cite would be helpful, please.

I asked and never really received a clear answer why I have to build a machine that computes only f uniquely. To me this whole argument is incoherent. You can uniquely compute f or any of an infinite number of other functions because you choose to make those interpretations. The box doesn’t care, and the question of whether it’s computing f uniquely or not is not a question for computer science, but a rather empty philosophical one. As Piccinini notes in the paper I cited below, computation can be defined in increasingly restrictive terms, from an open-ended mapping account that is essentially trivial pancomputationalism to putting restrictions on what mappings are allowed, to the semantic account such that only processes that manipulate representations are considered computations, and so on. So getting the box to compute f uniquely is not a matter of how to build the box but a matter of how to define computation in the right way to make it so, which isn’t a question I even find interesting.

I’m not sidestepping anything, this is all addressed in the part that you failed to quote. If we keep talking past each other like this I just won’t participate. Again, your “other types” of table either specify a different machine with different behavior (i.e., they are wrong) or else they are associated with a machine having an exactly correspondingly different architectural specification (internal semantics). In the latter case, using your “different” tables results in exactly the same machine. So this is just another philosophical sleight-of-hand, the kind you seem to enjoy.

To be clear, as abstractions the different tables are indeed different computations, but that’s because they’re implicitly being compared on equal terms. If you want to quote that back at me don’t forget the bolded part! But as templates for a physical machine, they can only be validly compared when applied to identical physical architectures, because different physical architectures change the meaning of the symbols. Your “different” tables are manifestly the same because in order to be correct they must all lead to the same behavior. This is not a particularly interesting point and I don’t know why you keep harping on it. Once we have a consistent reference point, there is always only one table that defines the computation.

I know it’s false, since obviously AI is possible. Whether one wants to take the view that intelligence is present in some primordial form in that box or whether it emerges as a new quality in a suitably configured system is not something I’m particularly interested in debating right now – and I think both views are valid – but suffice it to say that if you take the former view, then I can equally say that semantics are also present, and they derive from the specification (the internal semantics) of the machine.

First of all, “com[ing] under pressure from various rival paradigms” doesn’t mean it’s no longer central and foundational, and secondly, there’s an obvious bias in that view which isn’t reflected in the IEP version, which begins with the opposite view that I quoted. There’s also the interesting paper by Piccinini, Computationalism in the Philosophy of Mind (2009), which states in the abstract that “Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated.” Finally, there’s the fact that everyone I know working in cognitive science is a strong proponent of CTM. I don’t associate with nitwit philosophers like Searle.

I don’t recall ever saying that the intent of the program isn’t a factor, and in fact the intent is rather central. What I said was that inferring semantics just from observing a series of bytes and their state changes isn’t in general realistic or even possible, but that doesn’t mean the semantics don’t exist.

I think you’re making this out to be more complex than it really is. If you have something like a traffic simulation then the “meaningful things” will be things like roadways, stop lights, and of course cars and maybe weather. In the computer they’re all just bytes like any others. The operation of the simulator is what endows them with semantics. A more complex simulation doesn’t change this fundamental paradigm; what it might do is just increase the richness of the environment by adding more objects or adding more detail to the objects. But maybe I’m not understanding what your question is.
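For instance, something as bare-bones as this already fits the description (a sketch of my own along the lines of your example, with made-up names and numbers):

[code]
# A bare-bones 'car simulation'. In memory each car is just a few numbers;
# it's the update rule the simulator applies to those numbers each tick
# that makes them the positions and velocities of virtual cars.
cars = [
    {"position": 0.0,  "velocity": 12.0},   # car 1
    {"position": 50.0, "velocity": 9.0},    # car 2
]

def step(dt: float) -> None:
    for car in cars:
        car["position"] += car["velocity"] * dt

for _ in range(10):        # run the simulation for ten ticks
    step(0.1)

print(cars)
[/code]

Adding roadways, stop lights, or weather would just mean more objects and richer update rules; the way the bytes get their semantics stays the same.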

Exactly. And if I do so computationally, then there exists a computation that makes that choice (in just the same way I do), and hence, a way to uniquely single out f by means of a computation, and consequently, a system that uniquely computes f. If you say that I can make a choice, and you say that all I do boils down to computation, then you’re saying that a computer can make that choice; and hence, you’re saying that f can be computed uniquely, because I can compute f uniquely.

It’s the only question that matters, though, because f is arbitrary, and hence, the same question needs to be answered if you want to claim that a computer instantiates a mind.

I genuinely don’t understand this. My tables M2 - M4 are manifestly different from your M1, and yet, associated to the system in exactly the same way, and thus, have equal claim to being ‘the’ computation performed by the system. Either you’re saying that they’re all the same, in which case you’re saying 1 = 0, and I’ll be charitable enough to believe you’re not actually saying that. Or, you’re saying that where I write ‘1’, you write ‘0’, but we’re both meaning the same thing—say, ‘switch up’. But in that case, all that survives from the computation is merely the behavior, because all of those tables are just different ways of writing this one:


      S11 S12 | S21 S22 || L1 L2 L3
      -----------------------------
       d   d  |  d   d  || x  x  x
       d   d  |  d   u  || x  x  o        
       d   d  |  u   d  || x  o  x        
       d   d  |  u   u  || x  o  o        
       d   u  |  d   d  || x  x  o        
       d   u  |  d   u  || x  o  x        
       d   u  |  u   d  || x  o  o        
       d   u  |  u   u  || o  x  x        
       u   d  |  d   d  || x  o  x        
       u   d  |  d   u  || x  o  o        
       u   d  |  u   d  || o  x  x        
       u   d  |  u   u  || o  x  o        
       u   u  |  d   d  || x  o  o        
       u   u  |  d   u  || o  x  x        
       u   u  |  u   d  || o  x  o        
       u   u  |  u   u  || o  o  x        


With the only difference being that ‘u’, ‘d’, ‘x’ and ‘o’ are ‘called’ variously ‘1’ and ‘0’. But then, once more, you’ve trivialized the notion of computation, and your computationalism just collapses to logical behaviorism.

I don’t see any other option. Either, the tables M1 - M4 are different computations—different mathematical functions implemented by the same system, due to variously interpreting switch and lamp states as logical states. Then, there is no unique computation associated to a given system, and computationalism collapses. Or, they’re the same thing, just called differently. Then, they’re all equivalent to merely the behavior of the system, and computationalism collapses. They can’t be both the same, and not the same, but, as best I can tell, that’s what your position requires.
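If it helps to see the first horn spelled out, here’s a sketch (using a few rows from the table above, and an alternative reading I’m picking arbitrarily for illustration; it isn’t necessarily one of my M2 - M4):

[code]
# The physical behavior of the box: switch patterns in, lamp patterns out.
# (Just three rows of the full u/d -> x/o table above, for brevity.)
BEHAVIOR = {
    ("d", "u", "u", "u"): ("o", "x", "x"),
    ("u", "d", "u", "d"): ("o", "x", "x"),
    ("u", "u", "u", "u"): ("o", "o", "x"),
}

def interpret(lamps, one):
    # Read a lamp pattern as a binary number, given which symbol counts as 1.
    return sum((1 if lamp == one else 0) << i
               for i, lamp in enumerate(reversed(lamps)))

lamps = BEHAVIOR[("u", "u", "u", "u")]

# Reading 'o' as 1 (and 'u' as 1 on the input side): 11 + 11 = 110, i.e. f.
print(interpret(lamps, one="o"))   # 6

# Reading 'x' as 1 instead: the very same lamp pattern is now 001, so under
# this equally consistent assignment the box implements a different function
# from binary values to binary values.
print(interpret(lamps, one="x"))   # 1
[/code]

The physical flips and lights never change; only the assignment of logical values to them does, and with it, the function computed.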

Perhaps you could help me by telling me what, in your opinion, goes on if I interpret the box as computing f; that is, if I use it to compute the sum of two numbers. Does what I do in that case reduce to a computation? If so, then why isn’t there a computation that simply computes f? What happens to connect your table to f, and how does that work in everyday situations—say, using your pocket calculator to make a calculation? How does the pocket calculator relate to the computation of sums, and why does my box not relate to that computation in precisely the same way? Because to me, the situations seem equivalent: both produce symbols, that I then interpret as numbers.

Of course, as they must be, since they’re proposed computations associated to the same physical system, and thus, stand on entirely equivalent footing.

OK, that seems to be a salient point. Could you give an example/an illustration of how the physical architecture influences the meaning of a symbol? And what do you mean by identical physical architectures—I explicitly only consider the same physical system at all times?

OK, so then you’re explicitly saying that computation is individuated only by behavior. This entails that computationalism is trivial, since it collapses to behaviorism (which is typically considered untenable). In order to have any non-trivial content, computations must be something above and beyond behavior—which, I should say, is rather obvious, since no physical system has, for instance, addition or logical functions among its behaviors. Thus, to the extent that computations involve abstract particulars, computation must go beyond behavior.

No, you can’t, because then you’d have to show how semantics boils down to behavior; otherwise, you’ve merely made (another) unsubstantiated claim. Intelligence, on the other hand, straightforwardly reduces to behavior: what acts intelligently, is intelligent. But, as the example with the lookup table shows, the same thing is simply false for semantic competence: even though a lookup table machine acts as if it has semantic competence, it doesn’t.

So no, that just doesn’t work at all.

That’s why I said, ‘note the past tense’: “CTM played a central role”. But anyway, my sole point is that the problem I’m pointing out is a serious one that has generated much debate over the years, and isn’t something anybody with even a rudimentary awareness of the state of the field is going to dismiss as silly.

Yet, as Piccinini himself acknowledges (since he’s the one who wrote the SEP article on computation in physical systems):

That’s just low. Again, I’m struck by the fact that both you and begbert2 seem to feel the need to continually resort to personal attacks of this sort; I guess if one doesn’t have any arguments, invective and insult must do.

Uh? Are you Searle? You keep using that line about personal attacks, I do not think it means what you think it means.