Wow!!! I just have to add a comment about this, whether HMHW decides to stay silent or not. This is absolutely breathtaking in its wrongness!
Because no, that is totally, absolutely NOT what I claim! I never claimed or imagined that adding more conversational lines to the lookup table would create semantic understanding, and the very idea is ludicrous. As in the example in my previous post, which HMHW didn't respond to and probably didn't read because he took offense at something, I'm suggesting that semantic understanding starts to appear as the table becomes less and less relevant to the process and the linguistic power of the computation grows. So for the first step up you might have a table in which, instead of literal strings that have to be exactly matched to what someone said, each paired with a fixed response, you have a loose collection of general semantic indicators used to assess the input, and a set of generic response templates that are dynamically populated in relevant ways. At the next level up you might have powerful, semantically rich input processing that doesn't require any such rote tables at all (computationally intensive "deep semantics") and response generation that is equally dynamic.
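To make that first step up concrete, here's a rough Python sketch of the two lowest tiers. Everything in it (the indicator word sets, the templates, the canned replies) is invented purely for illustration; a real system would of course be vastly more sophisticated.
[code]
# Tier 0: the trivial table -- literal strings matched exactly, fixed responses.
FIXED_TABLE = {
    "hello": "Hi there!",
    "what time is it?": "I don't wear a watch.",
}

# Tier 1: loose semantic indicators scored against the input, plus generic
# response templates that get populated dynamically.
INDICATOR_RULES = [
    ({"rain", "sunny", "snow", "weather"}, "I hear it's {topic} where you are."),
    ({"sad", "unhappy", "upset"},          "I'm sorry you're feeling {topic}."),
]

def tier0_reply(utterance):
    """Exact match against the fixed table, or nothing."""
    return FIXED_TABLE.get(utterance.lower().strip())

def tier1_reply(utterance):
    """Assess the input against loose indicators and fill in a generic template."""
    words = set(utterance.lower().split())
    for indicators, template in INDICATOR_RULES:
        hits = words & indicators
        if hits:
            return template.format(topic=sorted(hits)[0])
    return None

print(tier0_reply("Hello"))                 # Hi there!
print(tier1_reply("ugh, more rain today"))  # I hear it's rain where you are.
[/code]
The point of the sketch is only that the table's role shrinks at each step: tier 0 is nothing but the table, while tier 1 already replaces exact matches with loose indicators and fixed replies with templates.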
Now, according to the trite "all computation is a table" concept, that highest level of AI program could in theory (but never in practice, of course) be converted to a gigantic lookup table. Fine, but nobody cares, because that would be the table that hugely grows in size and complexity to reflect my advanced AI model, not the trivial fixed comment/response table! This is why I wanted to drop the "all computation is a table" nomenclature – to avoid exactly this kind of utterly chaotic confusion.
Well, according to your cite, it is, though, so you’re kinda lacking in the consistency department there.
But regardless, you’re right that it’s not; but you’re wrong to think it ought to be for the dependence to be circular. The interpretation is necessary for the computation—without the interpretation, no computation. Now, you claim that the computation is necessary for the interpretation—without computation, no interpretation. But then, that just means that the interpretation is necessary for the interpretation. Which is simply circular.
The thing is that the consequent is true because of the antecedent being true. Now, you say that the antecedent is true because the consequent is true. Hence, the antecedent is true because the antecedent is true. As in, the literal definition of circular reasoning.
You claimed that lookup tables ‘should be able to derive semantics from arbitrarily complex language structures’. You did so to support your argument that Watson’s semantic competence is proven by its performance on Jeopardy, which would otherwise be false. I showed that lookup tables don’t have semantic understanding, and hence, that we have—based on its Jeopardy-performance—no reason to believe Watson has any semantic competence. Hence, my argument isn’t threatened by the appeal to Watson’s capabilities. To that, you merely replied ‘that’s false’. If I misunderstood your intent there, I’m sorry, but I don’t see how else to interpret it.
Well, the obvious answer to get rid of the problematic continuum is simply to admit that both ends of the continuum don’t have any semantic understanding; this understanding is simply a faculty that is completely separate from computation. Then, the lookup table and the supposed highest tiers have exactly the same amount of understanding—none—and the hierarchy is merely one of more and more clever tricks to save space on the lookup table—indeed, each higher tier computation could be understood as a compressed version of the lower tier.
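One way to picture that compression view: a complete lookup table for 8-bit addition and the one-line rule that generates it have exactly the same input/output behaviour; the rule is just a hugely more compact encoding of the same table. A toy Python sketch, nothing more:
[code]
# The 'lower tier': an explicit lookup table for adding two 8-bit numbers.
TABLE = {(a, b): a + b for a in range(256) for b in range(256)}  # 65,536 entries

# The 'higher tier': a rule with exactly the same input/output behaviour.
def add(a, b):
    return a + b

# Behaviourally identical; the rule is simply a compressed encoding of the table.
assert all(TABLE[(a, b)] == add(a, b) for (a, b) in TABLE)
print(len(TABLE), "stored entries vs. one short rule")
[/code]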
Sure. If detailed arguments, copious cites to the relevant literature, pointing out the existing controversies and the like constitute ‘flippant dismissal’, then well, I guess I’m guilty of that. The clearly superior route, I guess, would just have been to indignantly claim that because it’s been the dominant paradigm some time ago, computationalism clearly can’t be wrong, because every paradigm ever has unquestionably turned out right.
OK, fine. Then, I suppose you also no longer claim that Watson’s performance on Jeopardy implies that it has semantic competence? Because if you agree that lookup tables don’t have any semantic competence, then it follows that Jeopardy-competence doesn’t imply semantic competence.
Then, we can drop the irrelevant Watson digression (although it’s kinda strange that you kept it going even after I pointed out that Watson doesn’t establish that computers can possess semantic competence), and go back to the actual argument. Which still is that you need to be able to provide some way of implementing f uniquely to match the competence of the human mind.
Seriously though, if you didn’t mean to argue that lookup tables do possess semantic competence, you shouldn’t for instance do things like reply to a paragraph of mine starting with:
By saying:
Because that sorta reads like you’re saying that it’s nonsense to claim that there’s no semantic understanding going on in a lookup table.
My cite assessed the concept of logical dependence as you imagine it and found the concept to be functionally inseparable from causal dependence with a temporal component. Because it is - the kind of causal dependence that doesn't allow for stable circular causality is as incoherent as an invisible pink unicorn.
I like how you are ignoring the other text in the post that you’re replying to that very clearly demonstrates that you’re wrong. Without a temporal discontinuity to make it impossible for the thing to be its own cause due to the thing being literally absent, the thing can be its own cause.
I like how you are ignoring the example that immediately followed the quoted text that very clearly demonstrates that you’re wrong.
And I also like how you’re ignoring that I utterly shredded your nonsensical definition of “computation”, demonstrating that your entire argument is literally gibberish. No refutation, just more la la la I can’t hear you.
It’s like you’re trying to make me think you’re arguing disingenuously.
Now, putting aside the utter incoherence of the non-definition of "computation" for a moment, I think it is worth pointing out that for actual cases of self-sustaining circular causality, the way these things start is an outside incident. For a very clear example of this, consider my very clear example of circular causality, the ideal bike chain. The way an ideal bike chain continues spinning is that each of the links is pushing the other links, but the way that an ideal bike chain starts spinning is that somebody external applies force to one or more of the links. (That is, they grab the chain and yank it, or pedal one of the gears it's wrapped around, or something like that.)
With the exception of causal loops that literally have been running since the dawn of time, all causal loops require an external impetus to kick them into gear before they bootstrap themselves up into self-sustaining circularity. This is easy to see with modern computers: as has been noted, all computer programs with execution loops would qualify as circularly self-sustaining systems, and each and every one of them started as something that wasn't yet a self-sustaining process when its startup code began executing. An operating system bootstraps itself up from low-level system processes, all of which are triggered by power being run through the chip, telling it to pull the first commands from the boot sector.
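The 'external impetus' point is easy to see in miniature. Here's a toy Python sketch (not meant to model any real boot process): the loop hands work to itself once it's going, but nothing whatsoever happens until start() is called from outside.
[code]
# A toy self-sustaining loop: each step hands the next piece of work to itself,
# but absolutely nothing happens until start() is called from the outside.
pending = []

def step():
    task = pending.pop()
    if task < 5:
        pending.append(task + 1)   # the loop keeps itself going...

def start():
    pending.append(0)              # ...but only after this external push

start()                            # the 'yank on the chain'
while pending:
    step()
print("loop ran and wound down")
[/code]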
Comparably, it’s quite clear that a fertilized human egg isn’t conscious (remember the discussion is of a physicalist model), and that only over the course of development is the brain ‘bootstrapped’ into conscious activity as the brain cells are developed and electrochemical impulses start running through them.
Bringing this back to that bike chain, the only way for the chain not to require an external impetus to start spinning would be for its component parts to all be moving at velocity when they came into existence. This is actually theoretically possible - matter has to have some velocity and position when it comes into existence, so it’s possible for it to come into existence with the position and velocity of parts of a self-sustaining circular system in mid-process.
This is actually relevant to computer programs and simulations, by the way - the system is created with all the parts in place, and then everything kicks into motion. This is actually how computer programs work - the execution loop and all important state is loaded into memory, and then execution starts. Of course with computers there’s the external agent of the booting system, but when we start talking about computer consciousnesses, once we know how they work it’ll be as easy as arranging the memory state in advance and hitting ‘go’. Computation and self-interpretation will initiate simultaneously.
OK, I was going to say that there are nuances of misunderstanding here, but I think it’s more like great gulfs of misunderstanding.
Let’s consider the example of my loosely sketched out advanced AI program in my last post (sorry about the snark), the culmination of what started out as a trivial table lookup giving a fixed response to an exactly matched input comment, and ended up through a series of upgrades being a program exploiting deep semantics to give interesting human-like responses to the things you said to it. I maintain that such a program can be said to have natural language understanding at the semantic level. You might disagree, but at least bear with me and acknowledge that it’s a tenable concept.
Now, as for table lookups, the term could mean different things. It could mean Turing-machine-like lookups, where symbols are processed in conjunction with a state table, but that's not what I think either of us means. My agreement with your "all computation can be a table lookup" relates simply to inputs and final outputs. So in theory (but again, never in practice) I could run this AI through every possible combination of English statements that anyone could ever say and record its response, and build a gigantic table from that, possibly consuming all the atoms in the universe to do it.
So now I have a physically impossible theoretical machine that does nothing but table lookups and mimics the performance of my AI program. The question at hand is my statement that it “should be able to derive semantics from arbitrarily complex language structures”.
If one supposes that the original computation was, say, the prime factorization of an integer, one might observe that the table lookup machine takes exactly the same amount of time to factor very large numbers as trivially small ones, and that it has no calculating logic in it at all. So it would be fair to conclude that it’s not actually doing any calculation at all. But clearly, the original machine was, in the same way that a dictionary doesn’t “know” the meanings of words but it’s captured the knowledge of those who do.
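To make the factorization comparison concrete, here's a rough Python sketch. The range is kept tiny, obviously; the whole point is that the table becomes absurd for non-trivial inputs.
[code]
def real_machine(n):
    """Actually calculates: trial division, which takes longer for larger inputs."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# The 'lookup machine': every answer precomputed by the real machine and stored.
TABLE = {n: real_machine(n) for n in range(2, 10_000)}

def lookup_machine(n):
    """No calculating logic at all; it just retrieves stored answers in constant time."""
    return TABLE[n]

assert lookup_machine(9973) == real_machine(9973)  # 9973 is prime
print(lookup_machine(9972))                        # {2: 2, 3: 2, 277: 1}
[/code]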
The salient point here is that you might quibble with my statement that a sufficiently complex lookup table “should be able to derive semantics from arbitrarily complex language structures” because a simple (if theoretical and physically impossible) machine that simply looks things up in a gigantic table cannot be said to “know” anything. But the argument that because it possesses no semantic competence, neither does its progenitor, because they are equivalent, is an argument that holds no water. The argument fails for the same reason as the factorization example above. The table lookup machine and the real calculator are functionally identical, but one of them actually knows how to calculate and produced the results, while the other doesn’t.
I agree, and I retract that position, in view of my reasoning above. But it also disagrees with your “equivalence” viewpoint so gets you no further ahead in your Watson-bashing.
Watson and Understanding:
I was pondering HMHW’s Watson point about getting US city=Toronto wrong and the following occurred to me:
The mistakes made by Watson listed above seemed like clear cut examples of a machine that doesn’t understand. But, humans make mistakes also, even about things that they do understand (but maybe incompletely or maybe a temporary error).
Is there a difference?
I think there are a few concepts involved in this issue: understanding, knowledge, and ability to self-monitor for state of understanding and knowledge.
It seems like understanding is a term that describes the extent to which we can map some input to our internal model of the world and confirm that it results in a still consistent model without creating contradictions with an entire network of learned information and patterns.
So the Toronto error violates a portion of the model that most people in North America (and probably broader) learned at a young age, which implies that the model doesn’t exist inside Watson.
But it could also be a self-monitoring error. Generally, humans have a pretty quick and accurate system of identifying whether they “know” something. This doesn’t mean the knowledge is accurate, just that we know what’s in our databank. This could have been an error in the process that says “don’t answer this one because you have absolutely no clue!”
Understanding in General:
While I think that "understanding" is a real thing (i.e. someone with understanding of some topic can make better predictions and inferences about the world it relates to), is it possible that the sense of understanding we get is just a self-monitor of whether the topic fits consistently within our internal model of the world or not, and not really something special? If we can plug the idea in and connect it to a bunch of other nodes and relationships with no conflicts detected, then we get the "all good" signal; otherwise we don't.
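Something like this, in toy form. The handful of facts and the single 'disjointness' rule below are invented purely to illustrate the idea of the 'all good' signal; a real model would obviously be a vastly larger network.
[code]
# Try to plug a new fact into a small network of stored relations and report
# whether it conflicts with what's already there.
facts = {("Toronto", "is_in", "Canada"),
         ("Chicago", "is_in", "USA"),
         ("Canada", "disjoint_from", "USA")}

def conflicts(new_fact, kb):
    subj, rel, obj = new_fact
    if rel != "is_in":
        return False
    for s, r, o in kb:
        # conflict if the subject is already located somewhere disjoint from obj
        if r == "is_in" and s == subj and (
                (o, "disjoint_from", obj) in kb or (obj, "disjoint_from", o) in kb):
            return True
    return False

candidate = ("Toronto", "is_in", "USA")
print("all good" if not conflicts(candidate, facts) else "conflict detected")
# -> conflict detected: the candidate fact clashes with the existing network
[/code]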
Your cite ‘argues’ that each relation of logical dependence is a relation of temporal priority (or so you claim); you’ve said that there’s a relation of logical dependence between the interpretation and the computation; hence, if you believe your cite, you ought to believe that there’s a relation of temporal priority between the two elements.
Which is what you’ve been doing for this entire thread.
This isn’t about causality; it’s about logical necessity, about conditions that need to be fulfilled for a proposition to be true. Take the old canard:
Socrates is a man.
All men are mortal.
Socrates is mortal.
In order for the conclusion to be true, each of the premises must be true. If now the truth of the premises hinges on the truth of the conclusion, you’ve simply not said anything at all. That ‘Socrates is a man’ is necessary in order for ‘Socrates is mortal’ to be true; had Socrates not been a man, but rather, an angel, the conclusion would be false. It’s this relation that’s relevant.
So you’re proposing:
P implements C because there is an appropriate interpretation I of P.
Which you’re supporting with:
There is an appropriate interpretation I of P because P implements C.
In other words:
A because B
B because A
So, A needs to be true in order for B to be true; B needs to be true in order for A to be true. Circular reasoning works, because circular reasoning works; and because circular reasoning works, circular reasoning works!
The chain-example demonstrates nothing. Each link is in motion quite independently of the others. For suppose there were only one link on a circular, frictionless track: once set in motion, it would just continue to go round. The same goes for all the other links; hence, none of the links moves because it is being ‘pushed’ by the others, but merely, because of inertia.
And as you say, the chain has to be set in motion by external action. Thus, the state of motion of each of the chain links is fully explained by that external action.
You’ve proposed that a function for which no computation needs to be performed can be implemented by a system which does no computation. So, uh, yeah, I guess?
The computation depends on the interpretation to be present, so this simply doesn’t start. Even if you want to have a chain of computations, the first one needs to be there due to some interpretation; as long as that interpretation is there, it can ‘interpret’ some other system as doing interpretation, meaning that some other system performs some computation, and so on, but without that interpretation, the entire chain is simply nonexistent.
And no, you can’t claim that the interpretation just ‘gets things going’: for suppose that I[sub]1[/sub] realizes computation C[sub]1[/sub], which implements I[sub]2[/sub] to implement C[sub]2[/sub]. Without I[sub]1[/sub], there is no C[sub]1[/sub], and hence, no I[sub]2[/sub]; this also holds if C[sub]1[/sub] = C[sub]2[/sub]. So you can’t start off with some interpretation, and then just leave things coasting (nevermind the question of where that initial interpretation somewhere around the dawn of time came from—do you then have God as a prime interpreter?).
This isn’t an argument I’ve ever made, however. My argument—repeated many times now—is merely that since a lookup table could equal Watson’s Jeopardy-performance, and a lookup table has no semantic understanding, Watson’s Jeopardy-performance isn’t sufficient to conclude that it possesses semantic understanding.
Thus, I’m simply not saying that the lookup table proves that Watson doesn’t have such understanding; it merely dispels your assertion that my original argument must be somehow wrong, because Watson has shown semantic understanding in Jeopardy, and my argument entails that no computation could interpret symbols in this way (which also isn’t exactly right: my argument claims that without interpretation, there’s no computation, so the attempt to interpret computations computationally simply never gets off the ground).
So your appeal to Watson’s capabilities simply doesn’t do anything to dispel my argument.
So a better (though still imperfect) analogy here would be a bike chain on very high-friction gears: it moves only as long as a force is being applied. As long as that happens, you have your ‘chain of causation’: the first link is being pushed by God’s hand, the second by the first, the third by the second, and so on. But, as soon as that force stops, it’s simply not the case anymore that the links push one another in order to keep the chain going; the chain just stops.
Everything else, of course, would logically be nonsense: the force exerted by the first link on the second, and so on, would be transferred back to the first, and supply that very force that is exerted on the second, and so on, thus making the force its own cause. This is exactly like pulling yourself up by your own bootstraps; and exactly as impossible.
If this one wrong answer leads us to conclude that Watson has no real understanding, what should we conclude from the vast majority of its correct responses, where it significantly outperformed the record-setting human Jeopardy champions?
The “strangeness” of the mistake, where an apparently super-smart machine blunders on something that a child would know, should not be taken as any kind of evidence about the presence or absence of true understanding, because the knowledge models are simply different. Consider the self-taught AlphaGo system which learned to play Go at a world championship level with no human intervention. Its playing style has been described as “alien” and “from an alternate dimension”. It’s distinctly non-human in character, yet good enough to beat world champions.
Watson is actually very good at knowing what it doesn’t know and at confidence rating and ranking its responses. I’m not sure what logic and/or Jeopardy rules prompted it to answer at all, but that answer had a very low confidence rating. As for the “US city” contradiction, David Ferrucci later explained that Watson specifically doesn’t discard answers based on one apparent contradiction but considers all the evidence in its totality. There could be (and indeed there are) US cities named “Toronto”, and Toronto has US connections like a baseball team in the American League. This one case was a perfect storm of cues that just sent Watson in the wrong direction.
So you’re just saying that your argument shows that Watson’s performance “isn’t sufficient” to prove semantic understanding while I described your argument as trying to show that it definitely doesn’t. OK, fine. But that version fails, too. The problem is that such a lookup table could never exist in the real world, whereas Watson is a real system. The argument that “B is functionally equivalent to A, and B has no semantic competence, therefore no behavior of A is sufficient to establish its semantic competence” is not an argument at all if B cannot exist. For trivial systems the argument holds. When there are not sufficient resources in the universe to build B, we have to dispense with the objection and evaluate the real system on its merits. Because otherwise, carrying this through to its final logical conclusion, we have to conclude that because any computation is reducible to a trivial lookup table, any computation must be regarded as trivial.
Where would you propose that this symbol interpretation should be happening? Watson is a closed system, just like Ken and Brad. It receives the same inputs (in principle, at least) and produces the same kind of outputs. I’ve already outlined in some detail the Watson design goals around semantic understanding, which you merely dismissed by claiming that it’s a different kind of semantics (I acknowledged that the word has a special meaning in computer science, but it’s clear from the context of the cited research document that they’re talking about linguistic semantics).
So you have to square your view that Watson cannot have semantic understanding (at least, that’s what I make out of that paragraph) with the fact that it was specifically designed to have it. If such things are not possible that would certainly be news to the principal investigator and project lead for Watson. Let me quote a snippet from his bio directly:
[David Ferrucci] has been at the T. J. Watson Research Center since 1995, where he leads the Semantic Analysis and Integration Department. Dr. Ferrucci focuses on technologies for automatically discovering meaning in natural-language content and using it to enable better human decision making.
Of course you can try to wiggle out of this by claiming that Watson discovers meaning through various forms of syntactic-manipulation trickery. This is the typical argument of AI skeptics like Searle. But the reality is that this is something of an oxymoron, because if a system infers meaning in natural language and competently acts on that meaning, it leaves little room for claiming that it lacks semantic understanding.
Now here’s a course currently being offered at Stanford that some of the participants in this thread might be interested in …
PHIL 38S: Introduction to the Philosophy of the Mind
Could people in the future upload their conscious minds to a computer and, so to speak, live forever? Do we have an obligation not to delete a conscious computer’s software? How we answer these questions would seem to depend on how we answer more basic questions. Can a machine have thoughts? Can a rock have thoughts? Would a machine with thoughts have consciousness? … https://explorecourses.stanford.edu/search?view=catalog&filter-coursestatus-Active=on&page=0&catalog=&q=PHIL38S
Well… that is from the philosophy department. [Seinfeld again]Not that there’s anything wrong with that[/Sa]; as pointed out before, there is a use for philosophy in telling us about the limitations of a simple system without semantics. The problem comes when trying to apply that limited criticism as if it were a show-stopper, or as if computer scientists have not looked at those arguments. They did, and basically said a collective “Duh!”
It looks to me like computer scientists are aware of those limitations and have encountered them before, and instead of curling up and giving up they continue working on CTM and making progress.
So, one needs to go next door from the philosophy department to the Computational Semantics Laboratory.
No. The physical realizability of a device doesn’t—can’t—have any impact on the logical entailment of semantic competence from behavior. That’s just a category error.
And really, think about what that would mean. Consider two possible universes: one with a radius of about 5*10[sup]10[/sup] light years, and an infinite one. The first one probably doesn’t contain enough material to build a Watson-equivalent lookup table; the second one does. So, in the second one, you’d grant my argument, while in the first one, you’d consider it erroneous.
However, both are possible ways for our universe to be. Although the radius of the observable universe is about 5*10[sup]10[/sup] light years, the simplest model compatible with the data is actually of an infinite universe. Consequently, you’d have to hold that whether Watson’s Jeopardy-competence logically entails its semantic competence depends on what’s behind the horizon of our universe. In other words, you’d stake the validity of a logical inference here and now on empirical data that couldn’t have any possible effect on the system we’re talking about.
Really, you should consider the consequences of the things you post, otherwise, you’ll just go on trying to justify the absurd with the ridiculous.
And my box receives the same inputs and outputs whether it computes f or f’.
Which is no problem at all: if Watson’s designers actually believe that Watson is capable of symbol interpretation in the same way that human minds are, they’re simply wrong. Compare: homeopathic remedies are specifically designed to heal the ill. That fact does not at all feature in any sound argument that they actually do; pointing to it as supporting their healing capacities is simply fallacious.
No, it’s a different situation: I have an argument—unchallenged so far—that no computation could on its own ever do that kind of thing. So in fact, since I know of no way to dispel the argument’s conclusion, I am forced to accept it.
As in Putnam’s case, once I understood the interpretation-relative nature of computation, the idea that computation could ground mind—or more accurately, its interpretational capacities—just went right out of the window.
Literally everything in this paragraph is wrong. Do you remember nothing from your formal logic classes?
Conclusions can be true even if the premises are all false.
Socrates is an angel.
All angels are mortal.
Socrates is mortal.
The notion that the truth of the premises could be dependent on the truth of the conclusion is nonsensical; all premises (and conclusions) are factual statements which are either true or false on their own. If Socrates is mortal, he’s mortal regardless of anything else.
Socrates could have been mortal even if he wasn’t a man; there could easily be a third class of things that is not human and is also mortal.
Logical arguments aren’t about relations, they’re about statements of fact. Socrates is mortal or not. Socrates is a man or not.
Consider the statement “All men are mortal”. This statement also implies the following statement: “if something is not mortal, then it is not a man”. By your thinking, that means that Michael’s immortality is causing him to be inhuman - it must be the cause, because it’s the antecedent!
There is no “because”. By your argument that utterly fails to define “computation”, computation occurs when an “appropriate interpretation” is made of the so-called computational device’s output.
Logical implication isn’t causality. As is obvious to everyone who remembers anything about logic.
Oh look more special pleading.
Every single function in existence can be implemented by a static sheet of paper. The paper just needs to have a lookup table printed on it: inputs to the left, outputs to the right. An observer can look at the table and interpret its output - selectively caring about one line of the table over the other lines is just part of the interpretation.
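In sketch form (Python, with a trivially small finite function so that the whole 'sheet' fits):
[code]
# Print a function as a fixed table of lines (inputs to the left, outputs to the
# right), then let an 'observer' recover outputs just by attending to one line.
def square(n):
    return n * n

SHEET = "\n".join(f"{n} -> {square(n)}" for n in range(10))  # the printed paper
print(SHEET)

def observer_reads(sheet, wanted_input):
    """Interpretation: scan the static text and care only about the matching line."""
    for line in sheet.splitlines():
        left, right = line.split(" -> ")
        if int(left) == wanted_input:
            return int(right)
    raise ValueError("input not on the sheet")

print(observer_reads(SHEET, 7))  # 49 -- the paper itself never changed state
[/code]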
So there are no functions that require computation - either that or a sheet of paper can be a computational system.
Based on my read of your argument, you’re taking the latter approach - computation is literally in the eye of the beholder. The beholder perceives the behavior of something and imbues the something’s behavior with meaning because it draws conclusions based on what it observes and/or makes decisions based on the observations.
Under that definition, then anything being observed can be considered a computational device, as long as it can be observed - regardless of the observed thing’s complexity, or lack thereof. A slide rule only has three moving parts, and is definitely a computational device!
Or is it?
Yeah, answer that for me, would you? Is a slide rule a computational device?
You know, I’m curious about how you view those devices known as “computers”. Are they computational devices?
When a computer is powered on, at what point do you think it starts being interpreted by something? What’s it doing prior to being interpreted?
What if there’s no monitor plugged in? What if the computer is not observed by humans at any point? Is is doing computation?
My thinking is that it’s silly to say that computers aren’t computational devices, so they start computing as soon as they start operating. So where’s the interpreter? Who’s the interpreter? And when did it start interpreting? I say the box starts interpreting itself. My argument? Because there’s literally no other interpreter available.
Gee thanks, but no, that misses the point. Saying there aren’t enough atoms in the universe was just a whimsical way of saying that the table lookup becomes an absurdity for non-trivial computations, and that’s not the frivolity you make it out to be. I could argue that you couldn’t build the table in your expanded universe anyway (or in this one, for that matter) because the different parts of it couldn’t effectively communicate with each other before the heat death of the universe. But that’s just silliness that gets us farther from the point.
The point, once again, is that the lookup table argument is a mere philosophical truism that exploits the deterministic nature of computation to make the frivolous argument that any computational results can be captured in a passive lookup table. It offers no novel insights, and we readily observe that in the real world, non-trivial systems cannot be mapped this way.
Most computer scientists would disagree with your view that physical realizability is irrelevant. If you feel that I’m “trying to justify the absurd with the ridiculous”, let’s enlist the aid of David Chalmers, some of whose positions I’ve disagreed with, granted, but here he can be helpful. This is from his paper A Computational Foundation for the Study of Cognition. There’s far more detail on implementation theory in his paper but this will suffice:
The mathematical theory of computation in the abstract is well-understood, but cognitive science and artificial intelligence ultimately deal with physical systems. A bridge between these systems and the abstract theory of computation is required. Specifically, we need a theory of implementation: the relation that holds between an abstract computational object (a “computation” for short) and a physical system, such that we can say that in some sense the system “realizes” the computation, and that the computation “describes” the system. We cannot justify the foundational role of computation without first answering the question: What are the conditions under which a physical system implements a given computation?
Perhaps to avoid incessantly repeating myself, we can turn to Chalmers again:
It will be noted that nothing in my account of computation and implementation invokes any semantic considerations, such as the representational content of internal states. This is precisely as it should be: computations are specified syntactically, not semantically. Although it may very well be the case that any implementations of a given computation share some kind of semantic content, this should be a consequence of an account of computation and implementation, rather than built into the definition.
If that sounds awfully familiar, it’s because it’s precisely how I described your box. Chalmers sees no difficulty with a device like that being interpreted as implementing multiple computations (say, f and f’) as long as it’s not interpreted as implementing all computations.
That said, Chalmers is at odds with Fodor (and the early Putnam) and other computationalists over the matter of semantic content, Fodor’s mantra having been “there is no computation without representation” (not that he found this in any way an obstacle to CTM, since the representations in his model are mental representations whose syntax is manipulated by mental processes, sans homunculus). Some of this disagreement, as Chalmers notes, is terminological, and some of it reflects philosophical differences over computational theory, but neither view supports your homunculus argument that an external observer is required for computation to occur; Fodor and Chalmers and of course nearly everyone in cognitive science support the computational account of cognition, though they differ in their approaches. But where Chalmers agrees with all the others is on the following two foundational issues – both of which I assume you consider to be complete nonsense that I should just stop posting about:
[ul]
[li]Computational sufficiency, stating that the right kind of computational structure suffices for the possession of a mind, and for the possession of a wide variety of mental properties.[/li]
[li]Computational explanation, stating that computation provides a general framework for the explanation of cognitive processes and of behavior.[/li]
[/ul]
And with that, I think I’m probably done here, unless something genuinely new comes up. Thanks for the stimulating dialog, not so much for the occasional insult.
I remember that circular reasoning is invalid, so I seem to remember more than you do.
Sure. And yes, I was imprecise in my expression. But it’s clear that the truth of the premises (and, before you start again, the validity of the argument) is necessary to establish the truth of the conclusion with certainty. That is, by virtue of the truth of the premises, we know the truth of the conclusion. If you now slip a dependence of the truth of the premises on the truth of the conclusion into the argument, it just follows that we haven’t established the truth of either.
Again, the thing is that the truth of the premises guarantees that of the conclusion; if the premises themselves depend on the conclusion, as they do in circular arguments, nothing has been established.
So, you’re claiming that the truth of ‘P implements C’ is established by the truth of ‘there is an appropriate interpretation I of P’. And that’s fine: if ‘there is an appropriate interpretation I of P’, then it’s indeed true that ‘P implements C’.
So then you want to go and establish the truth of ‘there is an appropriate interpretation I of P’. To do so, you propose that this is true if ‘P implements C’. These two, then, in combination, just don’t establish anything beyond ‘P implements C’ if ‘P implements C’, which is vacuous, and in particular, tells us nothing about whether P actually implements C, or any computation at all.
That said, if you’re accepting the validity of such circular reasoning, we can be done with this quite quickly:
I am a perfect reasoner, because all of my arguments are right.
All of my arguments are right, because I am a perfect reasoner.
Having thus established that all of my arguments are right, you’ll grant me that my arguments in this thread are right, I presume.
No. Causality has nothing to do with it:
It works with ‘when’ just as well. When the system is interpreted the right way, it implements a certain computation. When it implements a certain computation, it’s interpreted the right way. From that, all that follows is that when the system implements a certain computation, it implements a certain computation—which is true, even if the system doesn’t in fact ever implement any computation.
See the above: it’s not about causality. And while you’re of course free to continue embarrassing yourself, you really shouldn’t lecture anybody on logic while holding that circular arguments establish anything at all. (And don’t now go claiming it’s not circular at all; your bike chain analogy shows that you full well understand its circular nature—just not bike chains.)
Take the ‘argument’ again:
P implements C because there is an appropriate interpretation I of P.
There is an appropriate interpretation I of P because P implements C.
You’re arguing that from the above, you can deduce that P implements C. But this is simply false: all you can deduce is the vacuous statement ‘P implements C if P implements C’.
Imagine I systematically move through all the possible input-output combinations of my box, and write them down in a table such as the one that’s been posted here a couple of times now. Then, I can use that table to read off the results for both f or f’. Does that mean the table implements the computation? No, the computation was done when I operated the box; it was simply stored for future reference (you might know the term ‘pre-computation’ for this sort of thing). So no, the paper isn’t a computational device; it merely acts as a storage device. What’s been stored, and by extension, what has been computed, is then of course dependent on interpretation—as it must be.
While anything can be interpreted as implementing some sort of computation, it’s not the case that anything can be interpreted as implementing every computation. Just examine the mappings that I’ve proposed for my box: they’re a one-to-one correspondence between inputs and outputs, and their semantics. So, a different input/output behavior would mean that these mappings are no longer applicable. I couldn’t compute f, for example, with a device that only has two states.
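In sketch form, with a little lever-and-lamp device standing in for my box (the exact functions don't matter here, only the structure): one fixed physical behaviour, two one-to-one symbol mappings, two different computations.
[code]
# Fixed physical behaviour of the device: lever positions in, lamp states out.
PHYSICS = {
    ("down", "down"): ("off", "off"),
    ("down", "up"):   ("off", "on"),
    ("up",   "down"): ("off", "on"),
    ("up",   "up"):   ("on",  "off"),
}

def run_box(levers, interpretation):
    """Translate symbols into physical states, let the box run, translate back."""
    to_phys, from_phys = interpretation
    lamps = PHYSICS[tuple(to_phys[x] for x in levers)]
    return tuple(from_phys[y] for y in lamps)

# Interpretation 1: up/on mean 1, down/off mean 0 -> the box computes a + b.
I1 = ({1: "up", 0: "down"}, {"on": 1, "off": 0})
# Interpretation 2: the opposite assignment       -> the box computes a + b + 1.
I2 = ({0: "up", 1: "down"}, {"on": 0, "off": 1})

for a in (0, 1):
    for b in (0, 1):
        hi1, lo1 = run_box((a, b), I1)
        hi2, lo2 = run_box((a, b), I2)
        print(f"inputs ({a},{b}):  f = {2*hi1 + lo1}   f' = {2*hi2 + lo2}")
[/code]
Both mappings are one-to-one, and both cover every possible input; yet they attribute different computations to the very same physical goings-on.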
Like anything else, it’s following the laws of physics to perform some particular physical evolution.
As an analogy, it might help to think of a one-time pad. Using a one-time pad, you can encode an n-bit message with an n-bit random number, simply by adding the two bitwise. The key is then the interpretation: only if you possess it is the plaintext accessible. But different keys lead to different interpretations: there exists a key to ‘decode’ the ciphertext into any n-bit plaintext whatsoever. Thus, the ciphertext alone doesn’t possess a fixed meaning; only in combination with a key does the meaning appear.
It’s the same with computation. You have a physical system, and need to use the right ‘key’ to ‘decode’ it into performing a given computation; that is done in interpretation. Hence, just as the ciphertext has no meaning without the proper key, there is no computation without the proper interpretation.
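In miniature (Python, a toy two-byte message):
[code]
import secrets

# One-time pad: XOR the message with a random key of the same length.
message = b"HI"
key     = secrets.token_bytes(len(message))
cipher  = bytes(m ^ k for m, k in zip(message, key))

# With the right key, the plaintext comes back:
assert bytes(c ^ k for c, k in zip(cipher, key)) == b"HI"

# But for ANY target plaintext there is a key that 'decodes' the same ciphertext
# into it, so the ciphertext by itself fixes no meaning at all:
target   = b"NO"
fake_key = bytes(c ^ t for c, t in zip(cipher, target))
assert bytes(c ^ k for c, k in zip(cipher, fake_key)) == b"NO"
[/code]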
It computes only if it’s interpreted as computing.
Does my box interpret itself as computing f or f’? Are then all the possible computations of the box ‘self-interpreting’, and thus, we again have the situation in which every possible computation a system can be interpreted as performing is in fact performed by the system—leading once more to the conclusion that you’re far more likely to be a random pattern in a puddle in the sun than the sort of being you believe you are?
Well, that’s not a counter to my argument, though. The lookup table is logically possible, and able to replicate the performance of Watson; and thus, your argument that Watson’s performance implies some semantic competence is simply demonstrated to be erroneous, due to the lacking semantic competence of the lookup table. Since you originally believed that Watson’s competence could be demonstrated in that way, that ought to be a novel insight for you.
I said, irrelevant for whether a logical inference works. The computer scientists who’d disagree with that should really re-evaluate their life choices.
There’s actually a bit of interesting history associated with that paper, and in particular its successor, Does a rock implement every finite-state automaton?. In that later paper, Chalmers brings his account to bear on Putnam’s argument that every open physical system implements every finite state automaton—which is essentially a strong version of the argument I’ve been advocating.
So, the first thing to take away from that is that Chalmers considers the sort of attack made on computationalism by arguments such as Putnam’s (and mine) to be serious enough to merit an in-depth discussion of the problems they pose, and to propose a relation of implementation in response.
As for this, I think his attack on Putnam’s argument is essentially successful. As I pointed out earlier:
The sort of strategy Chalmers uses to dispel the threat of triviality is the most common one: find restrictions on the admissible implementations, such that the triviality arguments don’t apply anymore. His restriction is related to causal/counterfactual structure: an interpretation (in my terms) is only admissible if the system interpreted mirrors the causal/counterfactual structure of the computation to be implemented. (That’s also why I was reluctant to admit every possible interpretation of a system—leading to infinitely many computations associated to it: most of these will, in fact, run afoul of such constraints.)
This, I think, succeeds in calling Putnam’s case into question (although that, too, can be and has been questioned; for one, it’s not actually clear why interpretation should be thus restricted, other than for the express purpose of avoiding such arguments). However, it doesn’t work for mine.
Basically, the issue is that my mappings (unlike Putnam’s) preserve the causal/counterfactual structure of the box, and yet still yield different interpretations. The structure is cashed out in the following way: in order to have a valid implementation, it’s not sufficient to merely map the physical evolution of a system onto one ‘run’ of the program (more accurately, its execution trace); rather, the mapping must also hold in cases where the initial state might have been different. That is, if the computation would have resulted in f(x) given x, and in f(x’) given x’, then it’s not sufficient to merely map the computation of f(x) onto a certain evolution of the system; the mapping must be such that, had the input been x’, it would have resulted in the output f(x’).
This is the case for all of my interpretations. Hence, Chalmers’ restriction doesn’t suffice to single out a unique computation in my example.
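A quick way to check this, using the same sort of toy lever-and-lamp stand-in for the box as before: the test below runs over the whole input domain, i.e. over all the counterfactual inputs as well, and both mappings pass it.
[code]
# Fixed physical behaviour of the stand-in device.
PHYSICS = {("down", "down"): ("off", "off"), ("down", "up"): ("off", "on"),
           ("up",   "down"): ("off", "on"),  ("up",   "up"): ("on",  "off")}

def admissible(interp, target):
    """An interpretation is admissible only if it gets EVERY possible input right."""
    to_phys, from_phys = interp
    for a in (0, 1):
        for b in (0, 1):
            hi, lo = (from_phys[y] for y in PHYSICS[(to_phys[a], to_phys[b])])
            if 2 * hi + lo != target(a, b):
                return False
    return True

I1 = ({1: "up", 0: "down"}, {"on": 1, "off": 0})  # reads the device as computing a + b
I2 = ({0: "up", 1: "down"}, {"on": 0, "off": 1})  # reads it as computing a + b + 1

print(admissible(I1, lambda a, b: a + b))      # True
print(admissible(I2, lambda a, b: a + b + 1))  # True: two admissible readings
[/code]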
One should also note that Chalmers must be an uneasy bedfellow for you: after all, he’s convinced that computation in purely physical systems can never produce conscious experience. Thus, he’s variously appealed to either property dualism or panpsychism to bridge the gap—both, essentially, with a proposition that information just has a mental component to it, that isn’t reducible to its physical instantiation. So Chalmers himself doesn’t believe (perhaps anymore) that his account of implementation suffices to make (physicalist) computationalism a viable theory.
Sure, but Chalmers supposes that his account results in complex computations essentially being singled out uniquely, which my argument shows isn’t the case (as the technique I’ve proposed will yield multiple computations of exactly the same complexity, satisfying his state-transition rules). So for a brain, we should not expect a single complex computation leading to a mind being associated with it, but rather, a whole plethora of computations that are equally complex, and which thus should equally well have the capacity to implement minds. To assume that only one (and exactly one per brain) does would be to believe in a miracle.
But if that’s the case, then we should expect a multitude of different minds to be associated with any given brain, most of which will have experiences radically different from yours; and then also, we should expect that the experiences you have don’t actually connect to anything out there in the world, leading to a kind of solipsism.
Just wanted to add one more thing, that of course exactly the same argument could be made to discredit the performance of a machine that passed the Turing test (or, obviously, any other AI). Just as a bit of a sidetrack here’s a paper on that subject that may be of interest or possibly amusement, published in Minds and Machines in 2014:
The paper is quite lengthy and I only skimmed it, so there may be something material in it that I missed. It goes through an amusing digression about the impossibility of building such a table, positing that it could perhaps be built in a hypothetical quantum domain with a nearly infinite number of particles. It then forges ahead with an analysis and at the end of it all the author comes to this rather remarkable conclusion (emphasis mine):
Our conclusion is that the Humongous-Table Argument fails to establish that the Turing Test could be passed by a program with no mental states. It merely establishes that if real estate is cheap enough [and] the lifespan of a psychological model is predictable, and all its possible inputs are known, then that model may be optimized into a bizarre but still recognizable form: the humongous-table program. If the model has mental states then so does the HT program.
This is exactly the argument that you bring against Watson, and the parallel here would be that your lookup table argument fails to establish that Watson lacks semantic competence. One can of course argue that the paper’s conclusion is wrong, but that’s not the point – the point is that your argument is not as conclusively self-evident as you claim it is, and that this is a murky area in which one can have legitimate disagreements.
Of course, one could easily just read the argument as establishing the converse: if it’s true that the lookup table would have mental states if the behavioral equivalent has them, then, since the lookup table plainly has no mental states, that just means the behavioral equivalent doesn’t have them either.
Surely, at the very least, if somebody were to claim that a lookup table could, just by virtue of its humongousness, somehow implement an understanding of the elements it lists, one would expect a glimpse of how that could conceivably come about, no? But (having only skimmed the paper), it seems to me the strategy is exactly one of establishing the computational equivalence between the original program and the lookup table, which then, combined with the hypothesis that the original program is sufficient to implement mental states, yields the conclusion that the table also possesses them. But if that’s the case, I’d rather view it as a reductio of the claim that the original program had mental states, than that the table has them.
Sure, but the reason the Turing test was the basis of this analysis rather than some other arbitrary computational paradigm was the premise that the successful machine has them; after all, Turing proposed it specifically to answer the question, “can a machine think?”.
Now, one may quibble with whether the Turing test definitively establishes that fact; Turing certainly believed it would, as has much of the computer science community ever since. What you surely must recognize is that part of the enormous baggage that your lookup table argument is burdened with is the clear implication that the Turing test does not establish thinking, because thinking has a whole host of implications like mental states and true understanding – deep semantics, if you will. Your argument is therefore burdened by its inevitable corollary that the Turing test is worthless, which is a heavy burden to bear. You can practically hear its credibility creaking under the load.
Of course circular reasoning is invalid. You just don’t know what the term means, and the only place your argument approaches problematic circular reasoning is the fact that your shitty definitions are very possibly assuming your conclusions.
Seriously, the two statements
A ⇒ B
B ⇒ A
are not a circular argument. That’s just the long form of the statement “A iff B” (aka “A ⇔ B”). (And also “B iff A”, aka “B ⇔ A”.) Note the so-called “logical dependence” present, which doesn’t imply temporal dependence and doesn’t imply precedence. If “logical dependence” did what you imagine it does, then A⇒B and B⇒A here wouldn’t imply A⇔B and B⇔A, but they do, because that’s just how the rules of logic work.
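If you doubt it, it takes a dozen lines to brute-force: (A ⇒ B) together with (B ⇒ A) is exactly A ⇔ B, and both hold in the row where A and B are false, so together they establish neither A nor B.
[code]
from itertools import product

def implies(p, q):
    return (not p) or q

for A, B in product([False, True], repeat=2):
    both_conditionals = implies(A, B) and implies(B, A)
    biconditional = (A == B)
    assert both_conditionals == biconditional  # the two forms coincide
    print(A, B, both_conditionals)
# Note the row (False, False, True): both conditionals hold while A is false,
# so A=>B plus B=>A settles neither A nor B.
[/code]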
Oh, I think I get it. You seem to think that if an argument fails to establish that X is true, then that means that X has been proven to be false.
That’s not how logic works, obviously.
I’ll confess right now that I’ve been trying and largely failing to translate your argument into a strict logical form. It doesn’t help that “infinite regression” isn’t a concept that exists in logical argument. And of course it doesn’t help that A⇒B and B⇒A don’t lead to a contradiction or a circular argument, despite your fervent belief that they do.
I invite you to try formalizing your argument. First suggestion: don’t use the term “because” - use proper conditionals instead.
This argument doesn’t prove that all of your arguments are right - but it also doesn’t prove that it’s not the case that all your arguments are right. Absent other evidence it would be entirely possible that you are a perfect reasoner and that all of your arguments are right.
The fact you don’t realize this, though, proves that you’re not a perfect reasoner.
I’m not arguing that, obviously. I’m not making any positive assertions about what P is doing at all - after all, I don’t know what P actually is. It could be a rock for all I know. Or a human brain.
What I am asserting is that your argument doesn’t prove what you think it does, due to myriad logical errors.
Oh, are you willing to define what a computational device is now? I’d be interested to see this.
Is a slide rule a computational device?
Is a Pentium 75 chip a computational device?
Is a human brain a computational device?
I look forward to your answers to these questions.
There are two possible takeaways here.
One, of course you could interpret f from something that has only two states. A piece of paper with a lookup table printed on it has only one state, and you can interpret f from that.
Two, why are you suddenly talking about a device being interpreted as implementing every computation? Who proposed that devices existed that can do that? Who proposed that devices should be able to do that?
(Actually devices do exist that can do that. You can look at a blank sheet of paper and interpret it as implementing f. You might need to be a bit insane and/or hallucinatory to do it, but that doesn’t mean you can’t interpret it that way. Interpretation is in the eye of the beholder.)
That you think that a “one-time pad” is any sort of analogy to a full-fledged computer is amusing, :rolleyes:-worthy, and predictable. And that you’re special pleading all the computers in the universe down to unrelated atomic interactions is demonstrative of the absurdity of your position.
Suffice to say that there are many computers that, as a result of their computations, move things around in the physical world, without asking a human’s help in interpreting their data first. And things that are moving around in the real world are still moved even without external interpretation, right? So a self-driving car that reacts to you punching your address into it is producing no output at all that is subject to interpretation when it drives you to your destination.
Yes, I realize that you’ll happily claim that the computers in self-driving cars don’t do computation. I hope you realize that you will never be able to convince any rational person that that’s true.
Exactly my point!
1. A thing computes only if it’s interpreted as computing. (premise, based on your definition)
2. A computer is clearly computing, even at a time T before it presents any detectable output to any user or anything else other than itself. (premise, based on observation of reality/knowledge of how computers work in reality)
3. The computer is being interpreted at time T. (1, 2)
4. Interpretation requires that the interpreter is observing the thing being interpreted at or before the time the interpretation occurs. (premise, based on definition)
5. Nothing but the computer has observed the computer’s behavior by time T. (2)
6. Nothing but the computer can be interpreting the computer at time T. (5, 4)
7. The computer is interpreting the computer at time T. (6, 3)
This argument is valid, so the only way to refute it is to declare it unsound by attacking the premises, lines 1, 2, and 4. 1 and 4 are straight from your definitions, so you have correctly deduced that the point of attack is to try and convince us that computers don’t compute if you cover your eyes and don’t look at them.
Unfortunately that’s only a convincing assertion if the people you’re trying to convince don’t consider it completely retarded. Sadly for you, I find the “peek-a-boo, no computing for you!” argument to be completely retarded.
I don’t know what your box is doing inside itself; your argument left that undefined. It’s entirely possible that it’s examining and reacting to its output wires, and also entirely possible that it isn’t.
It’s actually highly probable that any possible implementation of the box you can name has some interpreting going on within it; after all there are input switches, and most physical switches are designed to interpret the many possible minutely-different physical positions of the switch lever as being in one of two meaningful position-sets. Of course you’re going to deny that that’s interpretation because your definition of “interpretation” includes the conclusion-assuming detail that only humans can interpret things, but that doesn’t change what is mechanically going on.
I have no idea what hallucinatory nonsense you’re going on about with the puddle/sun thing. I mean, I know that you had previously pointed out that a puddle could randomly manifest an arrangement that for a brief moment could be interpreted as calculating any specific function - the ripples on the surface could form a reflection that looks just like a printed lookup table, for example. But everybody knows that a brain is continuously maintaining the processes and behaviors it takes to make up a mind, and I’m afraid I’m unable to imagine what twisted abortion of logic you must be following to imagine that unfixed randomness is likelier to maintain such processes than a stable system that actually implements them.