(personal attack - Wiktionary, the free dictionary)
If it implies specifically that only the person against whom the attack is directed is entitled to make that claim, then I withdraw it. But the general gist of it—that merely insulting one of the most well-regarded thinkers of our time isn’t a good replacement for bringing actual arguments—still stands.
There seemed to me to be some redundancy where the “unique computation of f” stuff occurred multiple times, and the table argument multiple times, so I’ve omitted those.
OK, with that, and having looked back at your clarification on this point earlier, I’ll come at it from a different angle.
There is a great deal of vagueness around the concept of “you uniquely compute f”. What exactly are you saying, in actual mechanistic terms? Let’s suppose that what it means is that you have a list of two-bit number pairs, or their decimal equivalents, and you operate the machine to obtain their sums, reading the lights and writing down the answer in another column. Having in mind that specific interpretation of f, you are untroubled by the infinite number of other interpretations that might be applied.
Please explain why I could not build a computer to do exactly the same thing, or just program a computer with a simulation of the box and your desired f interpretation. How is this not a system that uniquely computes f? Furthermore, how is this not now a system that manifestly implements the semantics of f?
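To be concrete about what I mean by that simulation, here is a minimal sketch (my own illustration, with arbitrary names, and obviously not anything you have committed to): the box modelled as four switches and three lamps, with your intended f interpretation bolted on as the decimal-to-binary and binary-to-decimal steps.

```python
# A minimal sketch (my own illustration) of "a simulation of the box plus the
# intended f interpretation": two numbers in the range 0-3 in, their sum out.

def box(switches):
    """The box itself: four switch states in, three lamp states out."""
    a = (switches[0] << 1) | switches[1]        # first pair of switches
    b = (switches[2] << 1) | switches[3]        # second pair of switches
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)  # the three lamps

def compute_f(x, y):
    """The intended interpretation: decimal in, switches, lamps, decimal out."""
    switches = ((x >> 1) & 1, x & 1, (y >> 1) & 1, y & 1)   # encode the inputs
    lamps = box(switches)                                   # operate the box
    return (lamps[0] << 2) | (lamps[1] << 1) | lamps[2]     # decode the output

print(compute_f(2, 1))  # -> 3
```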
On the last point, by “identical physical architecture” I mean identical internal semantics; that is, exactly the same specification of what switch position means 1 and 0, and what bit value light on or off represents.
The physical architecture influences the meaning of the symbols in the I/O table I proposed, in the sense that reversing the interpretation flips the values.
Isn’t that just what a Turing machine does? Consider for a moment a universal Turing machine (UTM). Quoting the venerable SEP,
… a UTM is a Turing machine that can mimic any other Turing machine. One provides the UTM with a symbolic input that codes the machine table for Turing machine M. The UTM replicates M’s behavior, executing instructions enshrined by M’s machine table. In that sense, the UTM is a programmable general purpose computer. https://plato.stanford.edu/entries/computational-mind/
So the UTM can be made equivalent to any given Turing machine by copying its state table and replicating its behavior. Has this behaviorist description trivialized computationalism?
My answer would be no, because the “abstract particulars” arise, as I keep saying, from the semantics imputed to symbols by the computational processes.
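Just to make the "copies its state table and replicates its behavior" idea concrete, here is a toy sketch (entirely my own illustration, with made-up names): a single interpreter that, handed the machine table of some TM M together with M's input, reproduces M's behavior step by step.

```python
# A toy universal-machine sketch (my own illustration): one interpreter,
# many machine tables. A table maps (state, symbol) -> (write, move, next_state).

def run_tm(table, tape, state="start", steps=1000):
    tape = dict(enumerate(tape))   # sparse tape, blank = '_'
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = table[(state, tape.get(head, "_"))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Machine table for a TM that flips every bit, then halts at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flipper, "0110"))  # -> '1001_'
```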
Piccinini here, as I read it, is not being inconsistent with his other statement about the mainstream position of CTM in cognitive science, but acknowledging that it has challenges and issues. And most cognitive scientists claim neither that the brain is exactly like a digital computer nor that CTM is a full explanation for all of cognition. It’s just a very important framework for explaining a lot of it. Piccinini himself seems to be a significant supporter of CTM, though he believes that at the physical level neural computing is quite different from digital computing, while the same symbolic-representational paradigm that is central to CTM can still manifest in it.
Meh, I have little patience for philosophers who antagonize real empirical researchers and usually turn out to be wrong.
Recall our discussion about Watson, and how the system exhibits apparently real understanding of the sentence that constitutes the Jeopardy clue, thus clearly extracting high-level semantics from it the way a person would. This is both a demonstration of semantic content in a high-level linguistic sense and, most would certainly say, a demonstration of apparent intelligence.
But you said:
So what happened to “what acts intelligently, is intelligent”, which you just said above?
There’s nothing about lookup tables that precludes their role in driving intelligent behavior. We tend not to think of it that way because in practice (not in theory) they tend not to be scalable to complex tasks, but ultimately how a computation is implemented has no bearing on how we judge its intelligence or competence; it needs to be judged purely on results (behavior). This is exactly the same as the “it’s only symbol processing” fallacy about AI that Searle and other skeptics have advanced.
Your short answer was “irrelevant”, which I assumed applied to the primary question in that 2nd paragraph, namely “Does the intended purpose of the program play any part in the meaning?”.
Again, we are talking about making concrete the notion of how syntactic operations on symbols establish the relationship to meaningful things.
So, is it just the syntactic operations on symbols that establish the relationship to meaningful things, or is the original intent of the creator required to establish that relationship?
If you ignore my intent when I wrote the two-byte car simulation, is it still possible to establish the relationship to meaningful things by the syntactic operations on the symbols in that program?
If so, WHICH meaningful things? Are they cars? Something else?
Do I need to add more bytes to the tracking of the position of the car and also to the shape of the car, and to the transformations that simulate movement to be able to establish a relationship to meaningful things like a car?
It seems like I could keep adding bytes and transformations, but I don’t see where it crosses the line to establishing a relationship to meaningful things.
I’m not sure that there’s a correlation between how hard it is to determine whether a machine is implementing a given computation, and how hard it is for a machine to implement a given computation. I mean, consider that organic machine called “the human brain”. We’re pretty confident that it generates consciousness, but heck if we know how it does it. However, the fact that we’re unable to determine whether a given brain is implementing consciousness is completely irrelevant to whether brains do it.
I’m not interacting with those smart people, I’m interacting with HMHW. And I’m not dealing with the complete breadth of HMHW beliefs and assertions; I’m only dealing with the ones that compose this argument and argumentation style.
These facts really limit the scope of what I have to understand here. I’m pretty confident that it’s within my power.
Hey, something that resembles a real formal argument! Excellent! Let’s talk about this.
I dispute line 2. Specifically, the curious insertion of the word “prior”. It injects a temporal condition that isn’t present in the premise from which it is supposedly derived, and then (erroneously) tries to equate that time-dependency with a simple conditional that has no temporal condition.
The first premise can be equivalently rewritten as a conditional statement:
P1: IsComputingAnything(x,f) ⇒ ∃y IsInterpretingAs(y,x,f)
Your second premise can’t be rewritten as anything that can follow from that, because conditionals don’t imply a time element, and neither does the prose verbiage of your first premise. The time element, the “prior”, is just sort of stuck in there with no valid source. And of course the “prior” is critical to your conclusion.
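Spelling that out in symbols (my notation, just restating the premise and its contrapositive, neither of which encodes any temporal order):

```latex
% P1 as a bare material conditional, and its contrapositive:
\forall x\,\forall f\;\bigl(\mathrm{IsComputingAnything}(x,f)\rightarrow\exists y\,\mathrm{IsInterpretingAs}(y,x,f)\bigr)
\;\equiv\;
\forall x\,\forall f\;\bigl(\neg\exists y\,\mathrm{IsInterpretingAs}(y,x,f)\rightarrow\neg\mathrm{IsComputingAnything}(x,f)\bigr)
```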
I actually agree with P1, by the way. When a program writes a 0 into memory, that 0 has no meaning or effect unless something not only references it, but also assumes it has a specific meaning. What specific meaning, which specific interpretation is used matters - a 0 could mean zero, false, black, success, end of sentence - or a trillion other possible options. A zero on its own is utterly meaningless - but a 0 is completely meaningful to anything that perceives it as having a specific meaning.
How so? Do you think I’m being vague when I say, I compute the sum of two numbers? How could I make that more clear? I punch in one number, then the other, and out pops the sum.
Well, but that’s what I’m saying! If I can uniquely compute f, then there ought to be a box that uniquely computes f, and thus, for which f has the status of the table you claim to be the ‘computation’ performed by my box. I just want to see what that box looks like, and, in particular, why I can’t then use it to compute something else, by using just the same technique as in my original version of the argument. That’s the challenge: present me with such a box, and I’ll shut up.
Programming a computer with a simulation of the box, of course, is just begging the question: because it only uniquely performs that computation, if it uniquely performs the simulation. But that’s what my argument calls into question.
Think about how that simulation would do its job. Do you imagine it showing something like a movie of switches being flipped, with added text that denotes ‘switch up’ means ‘1’, or even with a direct translation of switch and lamp states into numbers? Maybe sequentially moving through the table I gave all those many posts ago for f?
Because that still requires interpretation in order to represent the right computation. When the simulation shows the box as having a particular set of switches flipped—say, without loss of generality, having the switch pattern (up, down, down, up)—and displays a certain set of numbers associated with that—(2, 1)—and as a result, a certain set of lights light up—(off, on, on)—and a further number is displayed—3—then, the simulation will only be of the box implementing f if the numerals are interpreted as relating to decimal numbers in the familiar way. But there’s nothing about the glyph—the sign, the symbol, the pattern of pixels—‘2’ that inherently ties it to the number 2. There is nothing that says I can’t take the numeral ‘2’ to denote the number 1, the numeral ‘1’ to denote the number 2, and the numeral ‘3’ to denote the number 6—numerals are arbitrary conventions, like all symbols—but if I do so, then, all of a sudden, the simulation intended to implement f will implement f’.
So you see, the system intended to manifestly implement the semantics of f ends up just as manifestly—or rather, interpretationally—implementing the semantics of f’.
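To make that concrete with a sketch of my own (the names are arbitrary): take the very same displayed run of the simulation and read its glyphs under two different conventions.

```python
# One and the same displayed run of the simulation, read under two conventions.
run = {"inputs": ("2", "1"), "output": "3"}    # the glyphs the simulation shows

standard = {"1": 1, "2": 2, "3": 3}            # the familiar reading
deviant  = {"1": 2, "2": 1, "3": 6}            # '2' denotes 1, '1' denotes 2, '3' denotes 6

def read(run, convention):
    a, b = (convention[g] for g in run["inputs"])
    return a, b, convention[run["output"]]

print(read(run, standard))   # (2, 1, 3): under this reading the run realizes f (addition)
print(read(run, deviant))    # (1, 2, 6): under this reading the very same run realizes f'
```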
The problem is that you never get concrete enough in your examples. You say, a simulation will just do that, without stopping to think about how it possibly could; if you do that, you will quickly realize that there is actually no way that doesn’t rely on some ultimate interpretation you’ve been silently presupposing.
That’s again something I don’t understand. The physical architecture doesn’t have any connection I can see to internal semantics; in particular, as my examples amply show, the physical architecture doesn’t in any way fix what’s a 0 and what’s a 1, for example.
To me, the physical architecture is just the hardware—the wires, switches, lamps, and whatnot. No interpretation is going to change anything about that; interpretation only comes in once you connect these physical elements to abstract objects, such as logical values.
No, of course not, because the UTM implements a different function from the TM it copies. If a TM implements a function f[sub]e[/sub], then a UTM is a device that implements a function u(e,x) such that u(e,x) = f[sub]e[/sub](x), that is, a function on two inputs where one essentially codes (say, via Gödel numbering) for a given TM, and the other gives it the input on which to evaluate the TM’s action. (And again, note that I’m talking about TMs as abstract machines, not their putative physical realizations.)
That’s the problem, though. You just keep saying that; but the idea that computational processes imbue symbols with semantics is, at best, hugely controversial, and would need substantial argumentative support to take seriously.
Sure. But my point is that these are substantial, and indeed, possibly fatal to computationalism, which is in contradiction to your contention that they’re just silly old me farting about in, what was it, ‘fringe-lunatic la-la land’.
Why do you think something happened to it? Lookup-table Watson certainly acts intelligently (although one might want to quibble that its intelligence is basically canned, offloaded to whatever agency drew up the lookup table). The point is that this intelligent action does not imply anything about its semantically understanding the questions, as this action can be replicated without such understanding (since the lookup table certainly does not possess it). So pointing to Watson and saying, see, there’s how semantics emerges if you pile up enough computation, is simply begging the question, as Watson’s performance doesn’t suffice to determine that it actually possesses any semantic understanding.
What’s meant there is logical, not temporal, priority. That is, the interpretation is a necessary prerequisite for the existence of the computation; thus, if the computation is a necessary prerequisite for the interpretation, the whole thing just doesn’t get off the ground: you need the computation before you (read: in order to) have the interpretation, and you need the interpretation in order to have the computation.
Again, no time element is meant, but exactly the logical dependence given by the conditional.
Rather than trying to reply to your lengthy response line by line, let’s just focus on this because I think it takes us to the crux of the issue.
You “punch in” one number … how? Where’s the thing that you punch? The sum “pops out”. Pops out of what? There’s a reason I’m asking these questions: I want to focus on the specific actions that you consider to be so mystical as to be distinctly non-computational.
Let’s be even more specific than I was last time. What you do is you examine the first row of a list of problem statements on a piece of paper, consisting of two digits in the range of 0-3. You convert the digits to binary in your head and operate the switches to enter them, read the result in the lights, make the binary-to-decimal conversion, and write the result on the first row of a blank piece of paper. You then repeat the process for the second row. And so on.
Now, I could build a computer with the appropriate interfaces and actuators and OCR reader and printer to do exactly the same thing. There you go – a machine that uniquely implements your function f.
And before you start churning up philosophical obfuscations about interpretations of interpretations and infinite regress, understand this: the overall system – your box and my computer – is doing exactly the same thing as I just described you doing – it’s taking a sheet of paper with addition problems on it, using the box to perform binary addition according to the unique function f, and producing the answers on a blank sheet of paper.
So the inputs and outputs are precisely the same as the inputs and outputs that you would produce. They are, in theory, indistinguishable. I find it hard to fathom a rational credible argument that this system is not precisely the unique implementation of your function that you were asking for.
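Sketched in code (my own stand-in names; imagine the OCR feeding the problem list in and the printer writing the answers out), the whole loop is just this:

```python
# A sketch of the end-to-end system: decimal problems read off the sheet,
# the box driven through its switch-positions -> lamp-states table,
# decimal answers written to the blank sheet.

# The box, given purely as its I/O table: (s1, s2, s3, s4) -> (l1, l2, l3).
BOX_TABLE = {}
for s1 in (0, 1):
    for s2 in (0, 1):
        for s3 in (0, 1):
            for s4 in (0, 1):
                total = (2 * s1 + s2) + (2 * s3 + s4)
                BOX_TABLE[(s1, s2, s3, s4)] = ((total >> 2) & 1, (total >> 1) & 1, total & 1)

def solve_sheet(problems):
    """Take the rows read off the problem sheet, return the rows to print."""
    answers = []
    for x, y in problems:                                               # decimal digits 0-3
        lamps = BOX_TABLE[((x >> 1) & 1, x & 1, (y >> 1) & 1, y & 1)]   # decimal -> switches
        answers.append(4 * lamps[0] + 2 * lamps[1] + lamps[2])          # lamps -> decimal
    return answers

print(solve_sheet([(2, 1), (3, 3)]))   # -> [3, 6]
```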
No, you misunderstand. The physical architecture is not the hardware at all. The physical architecture is meant to be the specification of the machine – what I’ve been calling the internal semantics. If you change the specification of what bit value switch up vs. switch down represents, you have to change the table accordingly in order to achieve the same result.
This is just self-contradictory. You were correct in your original statement that “what acts intelligently, is intelligent”. Now you appear to be trying to backtrack all over the place. The above is self-contradictory because a lookup table can in theory be a solution to any given problem domain, fully equivalent to the actual Watson computational model or any other. It’s just impractically huge, and in reality could never be built. But in theory, and in general, given a problem domain, a simple lookup table can emulate the results of any computational algorithm, and therefore engender exactly the same behaviors.
“What acts intelligently, is intelligent”. Peeking under the covers and deciding that this is just a table lookup is exactly like deciding that it’s just “symbol processing”, as I said before. It doesn’t matter. This is the bullshit that philosophers like Dreyfus and Searle and all their ilk got dragged into believing. As Marvin Minsky used to say, “when you explain, you explain away”. Intelligence can only be judged on its objective behavioral merits, not by passing judgment on its underlying implementation. I stress this because the argument about semantics follows the same logic, and indeed as we see here the two are closely related.
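To spell out what I mean by that (a toy example of my own, with nothing to do with Watson’s actual architecture): any algorithm over a finite input domain can be traded for a table built from it, with behaviorally identical results.

```python
# A toy illustration (emphatically not Watson): replacing an algorithm with a
# lookup table over a finite input domain, with identical input/output behavior.

def algorithmic_answer(question):
    """Stand-in for 'real' computation: count the vowels in the question."""
    return sum(question.lower().count(v) for v in "aeiou")

QUESTIONS = ["What is a fjord?", "Who wrote Hamlet?"]       # the finite domain

# Build the table once by running the algorithm over the whole domain...
LOOKUP = {q: algorithmic_answer(q) for q in QUESTIONS}

# ...after which answering is pure table lookup, behaviorally indistinguishable.
assert all(LOOKUP[q] == algorithmic_answer(q) for q in QUESTIONS)
print(LOOKUP["What is a fjord?"])   # -> 4
```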
There aren’t enough atoms in the universe to implement Watson as a lookup table. There are simply too many variants of language. Theoretically it could be done but not in this reality. Lookup tables tend to explode quickly and the universe isn’t that large.
And once again, you’ve only succeeded in a machine that implements f if its inputs and outputs are interpreted in one particular, arbitrary way. You give it, say, scraps of paper with numerals ‘0’,‘1’,‘2’, or ‘3’ printed on them; the machine then translates these numerals into switch flips—say, if it first received a scrap with the numeral ‘2’, it will flip S[sub]11[/sub] up; then, if it receives a scrap with the numeral ‘1’, it will flip S[sub]22[/sub] up.
As a result, the lamps L[sub]2[/sub] and L[sub]3[/sub] light up. Then, the computer will print out, in response, the numeral ‘3’. So now, that’s a system computing f uniquely, you say.
But you haven’t actually made one step forward here. The user of that total system still needs to interpret the numerals in a specific way, for it to implement f. If I have been taught a different set of digits, whose ‘2’ embodies the number 1—the multiplicative identity—, whereas the numeral ‘1’ denotes the number 2—the successor of the multiplicative identity—, and the numeral ‘3’ denotes the number 6—the successor of the successor of… you get the idea—then, to me, with an equal claim to being right, the machine you’ve constructed will instead have computed f’(1, 2) = 6.
Any proposed implementation will run into exactly this issue. But I have to thank you for finally at least trying to address this point, it helps me understand where we’re talking past each other.
The claim that it’s doing exactly the same thing is, of course, question-begging. Your apparatus merely manipulates symbols; I map symbols to meanings, I interpret them. When I flip switches S[sub]11[/sub] and S[sub]22[/sub], the resulting configuration means, to me, the pair of numbers—not numerals; not signs—(2, 1). This happens the same way as when I read ‘dog’ and think of a canine.
Now, this faculty of interpretation isn’t some magical capability (indeed, I’ve published a naturalistic theory on just how this sort of thing works, which would however take us a bit too far afield right now). But it is something that at least needs to be shown to be implementable by computer in order for computationalism to make a serious claim to explaining the mind, and the way one shows this is by exhibiting an apparatus that uniquely implements f, since that’s what my faculty of interpretation enables me to do. If that should turn out to be impossible—as the above arguments suggest—, then computationalism simply fails.
So is this ‘physical architecture’ just the table of how switch flips connect to lamp lights I presented in my last post, or is it not?
Nonsense. (And by the way, your ‘cite’ doesn’t bear you out, concluding that the two seem close but not identical; an example of the sort where there’s no temporal component is logical supervenience.)
In a relation of the form ‘iff A, then B’, A is called the antecedent, or literally, ‘that which comes prior’ (B is called the consequent—‘that which follows’).
The point, however, is that the interpretation is a necessary prerequisite for the existence of the computation (you could somewhat sloppily consider it its cause, or that which generates the computation). If you now want to use that computation to produce the interpretation, then the computation is a necessary prerequisite for the existence of the interpretation. So, you have the same sort of circularity inherent in the omnipotent being creating itself—the interpretation creates the computation, which creates the interpretation.
In the snippet you’re quoting, I’m explicitly affirming my earlier stance, so how you get from that to the idea that I’m ‘backtracking’ is rather mysterious to me.
Let’s step back a bit. You claimed that Watson’s semantic competence is proven by its Jeopardy-playing competence. This entails an argument that for this Jeopardy-playing competence, semantic competence is necessary. So I introduced a system—lookup-table Watson—that equals Watson (ex hypothesi) in its Jeopardy-playing competence, but entirely and transparently lacks semantic competence. Hence, semantic competence is not necessary for Jeopardy-playing competence; consequently, Watson’s performance in playing Jeopardy does not imply any semantic competence.
This is one of the main reasons why the collapse of computationalism into behaviorism that your approach threatens would be fatal.
The opposite is true. As we see here, simple behavior does not tell us anything about semantic competence, while it tells us everything about Jeopardy-playing competence.
Sure. But that’s entirely irrelevant to the argument. I only need for the lookup table to be logically possible to show that it’s consistent for a system to have no semantic capabilities, while still showing the desired competence in playing Jeopardy (or chess, or filling out intelligence tests, and so on).
So, IOW, your position is that computation is not computation until a human observes the results! Putting aside the question of whether it’s still a computation if my dog observes it, one notes that your requirement to build a computing device that computes only f has been rather trivially beset with a condition that makes such a device obviously impossible by the intrinsic terms of the requirement itself.
You know, I used to enjoy philosophy back in college but I have little patience for what essentially amounts to a kind of philosophical dissembling, like asking someone how they would create a force so powerful that it can move an immovable object, or whether reality exists if it’s not being observed. My approach to philosophy, such as it is, leans heavily to pragmatism. In those terms I considered the problem domain here to be one where someone could make a logically coherent but unintended interpretation of the box’s output, and how one could constrain the interpretation to the intended one using only computational methods. I gave you that solution.
Again in the context of pragmatism, I see the “interpretation” or homunculus objection as having absolutely no value in the CTM discussion. Rightly or wrongly, CTM holds that mental representations have a syntactic structure analogous to sentences, and that cognition essentially consists of syntactic operations on these structures which endow them with meaning; for example, there is ample evidence (albeit controversial) that mental images are representational in this way rather than depictive.
Indeed the homunculus argument creates a paradox in the context of vision because it posits that the image that vision creates on the retina, in order for us to perceive it, must somehow be “seen” by a homunculus inside the brain, which of course leads to infinite regress. Yet vision is actually a thing, so there’s some basic problem with this argument, and the problem is that it tries to explain a phenomenon in terms of itself, and hence lacks any explanatory value at all. One can now draw an analogy with computer vision. Robots can see things and act on them, and not just in the lab – it’s becoming commonplace. So where is the homunculus?
To reiterate the point, as I said above, one can only conclude from your position that computation is not computation until a human observes the results. So where is the human (and all the purportedly necessary “interpretation”) when a computer observes a scene and takes specific autonomous actions based on what it sees? And regardless of your answer to that, the follow-on question is, in what way is this fundamentally different from what a human might do?
If it’s the table of explicit switch positions and light status, then yes.
Though of course we have to agree on what “d”, “u”, “x”, and “o” mean.
This I just have to forcefully reject as an obvious contradiction, and if you can’t bring yourself to concede the point, then we can just drop the discussion. Intelligence can only be determined as a set of behaviors; the underlying mechanism is of no consequence, and as you said, if it acts intelligently, it’s intelligent. You even seem to agree with all this. But a crucial part of the intelligent behavior in Watson is extracting the linguistic semantics from the natural language input. As a matter of fact considerable effort went into this part of the project. If you continue to deny that Watson has in fact computationally inferred semantics in the sentence and appropriately decomposed it – and bolster your denial with the trivial claim that the whole thing could just be a table lookup – a claim that obviously can be applied to any computation whatsoever – then let’s just drop it, because at this point you’re starting to sound rather disturbingly like John Searle.
It is not necessary for an entity, or a system, to be ‘conscious’ or ‘self-aware’ in order to be able to ‘choose’. Humans can select an interpretation, and thereby implement a computation; but natural selection is perfectly capable of making a choice in the same circumstances. If a dragonfly sees a prey species flying past on a complex trajectory, that dragonfly can calculate where the prey will be in the future - this is a calculation it does using a system of processors which it has inherited, and which implement a single computation (call it f[sup]1[/sup] if you must). If by any chance another dragonfly implements f[sup]2[/sup] in the same set of circumstances, then that dragonfly will not intercept its prey, and may starve to death. Unless, of course, f[sup]2[/sup] is better at calculating the same problem - in which case, this interpretation may be selected for, and the trait may be inherited. This is the great innovation of Darwin’s theory of Natural Selection - it allows selection to occur without recourse to a conscious mind. Humans, and gods, are not the only things which can make choices in an evolving world. It seems obvious to us now, but it was earth-shattering at the time.
For some reason **HalfManHalfWit** appears to have dismissed the idea of ‘computation plus evolution’ as ‘trivialising’ - far from it, this combination is probably adequate to explain most, if not all, mental processes. It doesn’t help us much with the problem of replication of consciousness, however, since the process of evolution has taken four billion years to get to this stage, and we ain’t going to unravel the details in a few decades.
No, not in the least. As I have been tirelessly reiterating, I consider a system to compute if it implements a given computable function; that is, once the symbolic vehicles it manipulates have acquired the appropriate semantics as given by the function as an abstract mathematical object.
Your system doesn’t do that; it merely manipulates symbols, syntactic tokens, which can be interpreted in different ways. The numeral ‘1’ is not identical to the number it refers to, which is demonstrated by the fact that it can refer to different numbers. Thus, a system producing that numeral does not in itself suffice to establish that it has any connection to the number. But this connection is needed to implement the function.
Consider a written word. It is not intrinsically connected to its meaning; only upon being read, upon being interpreted, does it acquire this meaning. This very same process is what I use to implement a given function using my box. It’s something a human mind evidently can do, as demonstrated by the fact that you’re understanding what I’m writing (to some extent). So, since it’s a capacity the human mind possesses, and since computationalism is the claim that everything a mind does is computational, computation must be able to reproduce this capacity in order to be viable. Hence, my challenge of implementing the function computationally, without any necessity for an interpreting mind.
This has nothing to do with me claiming that computation is only properly computation if a mind is involved; I merely point out that when a mind is involved, a certain thing happens (namely, a function is unequivocally computed), and thus, every proposed explanation of the capabilities of mind will need to account for that.
Great! Then you’ll be familiar with Charles Sanders Peirce’s approach to semiotics. In his terms, what I’m pointing out is nothing but the fact that the signifier does not imply the signified.
Well, the thing is, human minds evidently have an interpretive capacity, which thus a computationalist account must be capable of explaining. But to do so, the homunculus problem needs to be defused.
And indeed, there lies the salient analogy. The way we use systems to compute can’t be explained by computation, since, as you say, ‘it tries to explain a phenomenon in terms of itself, and hence lacks any explanatory value at all’.
There is no need for a homunculus, since the robot never interprets what is being seen. Take a simple photodiode that transmits voltage in proportion to the light that reaches it. We could use it to create a simplistic vision system that, say, moves towards the light. There is no homunculus because it’s merely the light impinging on the diode that physically causes a certain current to flow, and a certain motor to activate. All computer vision works along those lines.
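As a toy sketch of the sort of thing I mean (my own invented loop, not any real robot’s code): light readings in, motor commands out, and nowhere in between does anything ‘interpret’ anything.

```python
# A toy phototaxis controller (my own sketch): photodiode voltages in,
# wheel speeds out. A pure causal chain, with no interpreter anywhere.

def drive_toward_light(left_volts, right_volts, gain=1.0):
    """Crossed connections: more light on one side drives the opposite wheel
    harder, which turns the robot toward the brighter side."""
    left_motor = gain * right_volts
    right_motor = gain * left_volts
    return left_motor, right_motor

print(drive_toward_light(0.2, 0.8))   # brighter on the right -> (0.8, 0.2): veer right
```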
As for humans, as noted, I do actually have some idea here, but it would take us too long to work through that. But of course, I’m under no obligation to provide a better solution to be allowed to point out that yours doesn’t work.
Fine, but then, you should quit claiming to be a computationalist, and call yourself a behaviorist instead.
I’m literally unable to concede a point you haven’t made. Simply rejecting a conclusion—no matter how forcefully—just doesn’t constitute an argument.
This is, once more, just a baseless claim; moreover, as I have shown, it is simply false, as Watson’s intelligent behavior can be duplicated without extracting linguistic semantics from the language input.
As I have already pointed out to you, natural selection only works on the level of behavior, and thus, can’t select for computational interpretations. Unless, of course, you also want to consider the behavior to be the computation; but then, that’s also just behaviorism, and widely (nearly unanimously, in fact) considered refuted.
Plus, if you do want to make that claim, you also need to provide an account regarding how we can associate behavior with different computational interpretations (as in my box example).
Basically, assuming that the dragonfly performs calculations to do so is question-begging. But actually, nothing in my argument precludes it doing so; however, the way in which that particular computation is implemented—in which its symbols acquire their semantics—can’t itself be computational (or else, the regress follows).
That’s right, the way that the dragonfly selects the interpretation is not computational, but natural selection.
So you are both right and wrong. Right in that the process is not wholly computational, wrong in saying that it cannot be (partly) computational because it is not wholly computational. The processing that occurs in a dragonfly’s brain (and a human brain) is partly computational, but it is implemented by natural selection.
Well, well, perhaps we’re coming to the end of the road here.
The problem here is that, as you’ve been doing throughout this thread, you assume as a given an issue that is far from a given and is, at best, contentious in the theory of computation. It’s inseparable from Searle’s much-despised (by most cognitive scientists and virtually all AI researchers) Chinese Room argument, which in turn implies that your position not only argues against CTM, but against artificial intelligence itself (and certainly against strong AI).
Here’s a snippet from the previously cited refutation of the Chinese Room argument, which I haven’t finished reading yet, but I note that there are numerous others from some eminent AI researchers. I quote this particular bit because it seems especially relevant:
Secondly (the point that will concern us), the detour through semantics allows Searle to respond to the accusation that the experiment relies, essentially, on question-begging Cartesian “intuitions” about privileged access by insisting,
The point of the argument is not that somehow or other we have an ‘intuition’ that I don’t understand Chinese, that I find myself inclined to say that I don’t understand Chinese but, who knows, perhaps I really do. That is not the point. The point of the story is to remind us of a conceptual truth that we knew all along; namely, that there is a distinction between manipulating the syntactical elements of languages and actually understanding the language at the semantic level. What is lost in the AI simulation of cognitive behavior is the distinction between syntax and semantics. (Searle 1988, p. 214)
Such insufficiency of syntax for semantics has been argued for variously and persuasively; most famously by Quine, Putnam, and Wittgenstein (seconded by Kripke).[23] Since processes are not purely syntactic, however, the “conceptual truth” Searle invokes is hardly decisive. In practice, there is no more doubt about the “cherry” and “tree” entries in the cherry farmer’s spreadsheet referring to cherries and trees (rather than natural numbers, cats and mats, undetached tree parts or cherry stages, etc.) than there is about “cherry” and “tree” in the farmer’s conversation; or, for that matter, the farmer’s cogitation.[24] Conversely, in theory there is no less doubt about the farmer’s representations than about the spreadsheet’s. Reference, whether computational, conversational, or cogitative, being equally scrutable in practice and vexed in theory, the “conceptual truth” Searle invokes impugns the aboutness of computation no more or less than the aboutness of cogitation and conversation.
Would the following be a fair translation?
“I know the answer, but I’m not gonna tell you.”
This is simply an oxymoron. The major purpose of the Watson/DeepQA project can fairly be described as extracting semantics. It’s what it does – or at least, it’s a crucial element of what the system does. Any system implemented differently that does the same thing is, by definition, functionally identical. This is true even if the proposed implementation is so outlandish that there manifestly aren’t enough computing resources in the universe to actually implement it. That’s just about all there is to say about that, other than the fact that it’s more evidence that philosophers have rarely made useful contributions to computer science.
And then my cite goes on to detail how the dude realized that logical dependence without temporal dependence doesn’t actually happen. The thing about cites is that you actually have to read them.
And guess what you can always, always deduce from “iff A, then B”?
iff B, then A.
Welcome to logic! (Seriously, this is pretty basic stuff.)
The term “antecedent” just talks about where it is in the statement. It doesn’t imply actual dependence. Seriously man, remember your logic classes.
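Spelled out in standard notation (my rendering):

```latex
% 'iff' is a biconditional; it is symmetric, so neither side is 'prior':
(A \leftrightarrow B)\;\equiv\;(A \rightarrow B)\wedge(B \rightarrow A)\;\equiv\;(B \leftrightarrow A)
```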
An entity can’t create itself, true. But that doesn’t apply to the computation/interpretation ‘loop’, for the following reason: it’s not a creation thing.
Deity creation fails because it explicitly presumes that creation requires a temporal predecessor. Which seems to most to be a reasonable assumption.
Your interpretation/computation thing, though, doesn’t require a temporal predecessor. It’s more like the relationship to a bulb being lit. A bulb can’t be a glowing bulb unless it’s lit. But a bulb can’t be lit unless it’s a glowing bulb. Oh no, it’s a circular infinite regress! I guess glowing bulbs are impossible. Except not, obviously, because (unlike the self-creation thing) it’s entirely possible for the two descriptions “lit” and “glowing bulb” to suddenly start applying at the same moment when the object in question changes from a state that supports the descriptions.
The same thing applies to the descriptions “the computational device is doing a computation” and “the computational device is being interpreted as doing a computation”. The minute one description applies the other does too. But there is no regress, because the terms “computing” and “being interpreted as computing” are actually just synonyms.
When the interpretation is applied to a computational device, anyway.
Speaking of which (since you offered to clarify any description I asked), could you remind me, what exactly is a “computational device”, anyway? How would I recognize one if I saw one? Your argument purports to say something about computational devices, but I don’t actually recall you defining the thing you’re arguing about.
I know that your theoretical box with inputs and outputs is defined to be a computational device, and you said that it didn’t matter to your argument what the inner workings were. However I also know you threw a fit when I posited that there was a human sitting inside the box. So clearly there’s more to it. So how can I correctly identify and categorize something as being a “computational device” or not?