Downloading Your Consciousness Just Before Death.

I’m thinking about the structure of this Turing test lookup table. The input in a Turing test is a series of comments entered by the human, alternating with responses by the computer. Necessarily this means that there is not one input, but many, and so the implementation must be a branching lookup table - each node of the tree has a branch to react to every possible human comment, and at the end of each branch is another branching with a response for every possible human comment that could have followed the first. The response to a later sentence can be different depending on what the earlier statement was - even if the computer’s responses to the statements in between are the same.

H: Do you like computers?
C: Sure, I guess.
H: Have you ever seen one?
C: Yes, of course.
H: Are you one?
C: No, of course not.

H: Do you like humans?
C: Sure, I guess.
H: Have you ever seen one?
C: Yes, of course.
H: Are you one?
C: Yes, of course.

So as the conversation continues, the computer necessarily needs to keep track of which winding sub-branch of its lookup tree it’s on. It could be as simple as setting a memory pointer to the address of the next branching, but that is still a record of the computer’s current state that must be kept - and that record implicitly includes the entire previous conversation and the computer’s reactions within it.
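To make the bookkeeping concrete, here’s a toy sketch of such a tree plus pointer (the dialogue and structure are invented for illustration; a real table would need a branch for every possible human utterance):

```python
# Toy sketch of the branching lookup tree described above; each node maps a
# possible human line to a canned response plus the sub-tree of follow-ups.
lookup_tree = {
    "Do you like computers?": {
        "response": "Sure, I guess.",
        "Are you one?": {"response": "No, of course not."},
    },
    "Do you like humans?": {
        "response": "Sure, I guess.",
        "Are you one?": {"response": "Yes, of course."},
    },
}

current_node = lookup_tree  # the 'memory pointer' to the next branching

def respond(human_line):
    """Descend one branch and return the rote response stored there."""
    global current_node
    current_node = current_node[human_line]
    return current_node["response"]

# respond("Do you like humans?") -> "Sure, I guess."
# respond("Are you one?")        -> "Yes, of course."  (a path-dependent answer)
```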

If the computer finds itself at a node where all its responses are tinged with anger, it is entirely reasonable and natural to describe that node as an angry one - while the computer is at this node it is ‘angry’ and will make ‘angry’ responses, based on the conversation that has gone before putting the computer into this ‘angry’ state.

Continuing from my prior post, it’s interesting to note that a Turing Lookup Table Tree would end up storing information too!

H: Who was the first president?
C: George Washington.
H: Okay, who was the fifteenth president?
C: Er, I don’t know that one.
H: Turns out it was James Buchanan.
C: Huh. Okay.
…later…
H: Hey, memory check! Who was the fifteenth president?
C: I don’t, wait, I remember that one. James Buchanan!

When the human gives the computer information, all the nodes beneath that one will have responses that account for the fact that the computer was given the information. Where is the fact that the computer now knows stored? Come to that, where is the fact that the computer knows about George Washington stored? No particular place! It’s just an implicit fact that at each node of the table, the node’s branches have responses that one would expect from a person with a particular set of knowledge, and what that knowledge set contains varies from node to node, corresponding to what was said at earlier nodes of the conversation. There’s no obvious place to check to see what the computer knows, but the knowledge is implicitly stored in the table all the same. In umpteen trillions of trillions of trillions of places, even - it’s impossible to overstate the combinatorial explosion that you get when there’s a branching for every possible thing the human could say (including gibberish!), taken to the nth power where n is the number of lines spoken before one side or the other decides to quit!

I still haven’t read the paper beyond just skimming it, but to clarify the concept here AIUI, the lookup-table program always just does simple table lookups to fetch rote responses, and that’s supposed to be the whole point of that silly argument. But that’s part of the absurdity of the whole concept in terms of anything that could possibly have a physical realization, unless it’s completely trivial. One simply requires that the lookup-table program record the statement/response pairs so that at any point in time the next statement is appended to the sequence, and the entirety of it then constitutes the next lookup query to the table. It’s not just individual statement/response pairs that are captured in the table, but all branches of the entire conversation tree and hence effectively all state information from the original computation.
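Spelled out, that reading looks something like this minimal sketch (the table contents are hypothetical; every possible conversation history would need its own entry):

```python
# Sketch: the entire conversation-so-far is the lookup key; the program itself
# only ever appends and looks up - all 'state' lives in the growing key.
transcript = ()

def lookup_respond(statement, table):
    global transcript
    key = transcript + (statement,)
    response = table[key]            # pure rote lookup, no other computation
    transcript = key + (response,)   # record the statement/response pair
    return response
```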

This in fact is the very reason that we can conclude that, in the context of the natural language conversation, the table encodes mental states, so that the table-lookup system in fact possesses mental states (although the obviously trivial lookup program itself does not). That’s also why the paper discussing the “humongous-table” (HT) concept posits a time limit on the Turing test, or else the size of the HT would have to be infinite instead of being finite but nonetheless probably many times the information capacity of the entire universe.

Well, not really; Turing rejected that question as meaningless, and proposed to instead answer another he thought sufficiently close, but expressible in unambiguous terms.

I think a more fruitful line towards attacking the Turing test, while we’re on the subject, would be to hold that intelligence can’t be established in a fixed-length test, exactly because such a test can always be gamed (think about, for instance, intelligence tests: there’s a finite number of those, so you could program a computer with all the answers to the tests it’s going to get; but I don’t think anybody would hold that the computer is thus intelligent, and it would fail as soon as a problem outside its narrow domain comes up).

In that case, no lookup-table program could be considered intelligent, because it would fail an open-ended test eventually; however, there are programs that could conceivably pass such a test. Of course, the problem with this is that you can’t really perform open-ended tests, as you might run out of universe before coming to any definite conclusion. But then, that just means that intelligence cannot be established with certainty by any finite procedure in every case—but this should not surprise us: by Rice’s theorem, no non-trivial property of the function a program computes is decidable.

So, in principle, there is no general procedure for proving for any program whether that program computes the decimal expansion of pi, for instance. Why would we then expect for there to be a general procedure to decide whether it’s intelligent? Doesn’t mean we can’t answer the question for all practical purposes (and here it’s correct to note that a humongous lookup table isn’t practical).

This doesn’t have any bearing on my argument, though, which, if correct, establishes its conclusion with certainty (hence my repeated urging that if you want to find fault with my position, it’s there you need to start). The Turing test, however, is at best an appeal to intuition—even if it’s a strong intuition. So if it’s true that my argument implies that the Turing test has no value (which I don’t really think), then that just means that this intuition was wrong (absent, of course, any flaw in my argument).

I’m really no longer sure if you’re not just joking here. The argument is circular exactly because of the logical equivalence of both statements:

The logical form of circular reasoning is:

IOW, exactly what you’re claiming. Again, this is perfectly valid, but fails to prove anything beyond vacuities. As noted by RationalWiki,

Or take the article The Problem of Circularity in Evidence, Argument, and Explanation:

Because A and B are logically equivalent, A ⇒ B is merely A ⇒ A, which, again, is valid, but vacuous. The system computes if the system computes—yes, but that doesn’t tell us anything about whether it computes.

The crucial point still remains that if it’s necessary for there to be an appropriate interpretation in order for there to be a computation, then that computation can’t itself provide that interpretation, since absent an interpretation, there is no computation to supply that interpretation. I don’t know if I can explain it further to you if you don’t get that by now.

Sure. Again, from RationalWiki:

Showing that a position entails an infinite regress is a common way of showing that position to be inconsistent.

Exactly. The argument you gave, likewise, doesn’t establish whether a system computes, in the same way—it computes, if it computes.

I know you’re asserting that, but, as the rest of your post shows, you’re really just rehashing the same old tired and long-debunked misunderstandings you’ve been presenting from the beginning.

I’ve given you the definition. And it’s not like I’m inventing new terminology here—this is just the terminology in which this discussion is usually framed. Take the Stanford Encyclopedia article once again:

If they’re interpreted as computing something (I assume that the definition of computing is still in your bookmarks), then yes.

As noted, that’s not correct.

This is a standard objection to computationalism: the threat of pancomputationalism. It’s been discussed in this thread several times.

Again, that’s not how interpretation works. An interpretation is the association of semantic content to symbolic entities; a blank piece of paper has none.

That’s not the analogy I was making, though; rather, the analogy is between interpretation and the use of one-time pads.

And claiming that computation merely comes down to behavior in this way reduces computationalism to behaviorism, and thus, trivializes it. This has been discussed many times now.

The fact that this sort of argument is one of the most widespread in the literature on computationalism simply demonstrates that your blasé denial stems from nothing but a profound ignorance of the debate, and lack of understanding of the argument.

This, of course, just assumes the conclusion that computation is objectively associated with a given computational system, so the argument is worthless. I’ve given an explicit example demonstrating that that’s not the case, which you’ve done nothing to dispel.

I don’t assert this; I’ve demonstrated it, by showing how the computation performed by a system differs if it is differently interpreted.

I like how, the less logical force your arguments have, the more rhetorical force you put behind your words. But total persuasiveness isn’t conserved that way: it’s ultimately argument, not how often you can call something ‘retarded’, that’s decisive here. So the more you throw around invective like that, the more you seem to be aimlessly flailing around in a helpless, impotent fit.

I have given an explicit specification of how the interior of the box works. Seriously, I know I can’t expect you to understand my arguments, but if you can’t even recall things that have been explicitly presented numerous times, then I don’t think this has a lot of value anymore.

You, on the other hand, have continued to blithely assert that computers can do interpretation, because that’s, like, obvious, while failing to point to even a single example, and hinging your case on a blatantly circular argument.

I have repeatedly told you that I’m not assuming anything about human minds. Again, my argument is a simple demonstration that I can interpret a system to compute different functions; that’s it. That nowhere implies, or depends on, humans being the only sorts of entities capable of doing this; it does, however, show that computers can’t do it, because we otherwise lapse into an infinite regress. Really, it couldn’t be simpler: you’re trying to explain a capacity in terms of that capacity, which leads to logical garbage.

This has also been dealt with. If you claim that consciousness only exists if the system performs the computation over a suitable time period, then you’re stuck with the absurd consequence that whether a system is conscious now depends on whether it’s conscious a millisecond from now, or whether it’s blown to bits by a nuke.


Anyway. Since nothing seems to be happening beyond the same old, tired misunderstandings being dragged up again and again, and in particular, since nobody seems to be able to follow up their claims with an actual argument capable of meeting my challenge—to implement the computation f uniquely—I think this has become nothing but a waste of time. If any new arguments crop up, I might reconsider, but otherwise, I don’t think I’ll bother smacking the same tired assertions and false claims down again and again.

Fair enough, and no one can argue that you didn’t spend a lot of time patiently expounding your views and informatively citing the reasons for them, and I for one appreciate it and thank you for it. I have some objections to your latest replies but we’ll put that aside for now.

However, in the larger picture I reject the dismissive view that no good counterarguments were presented, or that this particular challenge of yours was decisive. In fact, even aside from the counterarguments that were presented here, the last two papers alone – the Chalmers one and the one on the Turing test via lookup table (which in retrospect was a good deal more pertinent to this discussion than I had first thought) – directly contradict your positions on the meaning of computation and on the relevance of the lookup table argument, respectively. As an aside, one pertinent insight from that second paper is that it shows that the theoretical lookup table preserves not only input-output mappings but also any relevant state information.

So my parting word on the challenge of “implementing f uniquely” is that it’s meaningless question-begging because it presupposes that computation is inherently semantic (that the symbols must have inherent meaning) while my argument has always been that computation is entirely syntactic, and that semantics may emerge as a consequence of the computation, but is not an intrinsic property of it and cannot enter into a formal description of it, as you have done by demanding a machine that computes f but not f’. It’s inherently meaningless to challenge someone to build something uniquely distinguished by some property that the thing cannot have. I can’t build a chair that is happy or sad, because emotions are not a property of furniture (although looking at my office chair might make me sad!). Granted, this is not a universal view of computation, but it’s a common and perfectly legitimate one and is not just logically supportable but, in my view, has fewer conflicts with other paradigms in computer science and theories of cognition.

To make that clearer to all, if you want me to build you something to compute f, I know exactly how to do it, because the computation is well defined. I build a one-bit binary full adder, for which I need two exclusive-OR gates, two AND gates, and an OR gate. The adder has three inputs: A, B, and Cin (an input carry bit), and it has two outputs, a sum S and a carry-out Cout. I then have the adder move sequentially through all the bits from right to left, repeating the process. I’ve implemented f.
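To spell that out, here’s a rough sketch of the adder just described (illustrative only; the gate-level decomposition and the bit ordering are simply the conventional choices):

```python
def full_adder(a, b, c_in):
    """One-bit full adder built from two XOR, two AND, and one OR gate."""
    p = a ^ b                     # first XOR gate
    s = p ^ c_in                  # second XOR gate: the sum bit
    c_out = (a & b) | (c_in & p)  # two AND gates feeding one OR gate: carry-out
    return s, c_out

def ripple_add(x_bits, y_bits):
    """Add two equal-length bit lists (most significant bit first) by
    applying the one-bit adder right to left and rippling the carry."""
    carry, out = 0, []
    for a, b in zip(reversed(x_bits), reversed(y_bits)):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return list(reversed(out))

# ripple_add([1, 0], [0, 1]) == [0, 1, 1]   # 2 + 1 = 3
```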

I then hand you the machine and let you compute away to your heart’s content, and if you want to put fanciful interpretations on the output, like f’, knock yourself out. The computation is in the logic gates, not in your interpretation. One advantage of this view is that it doesn’t imply that the foundational computational underpinnings of cognitive science are, in their entirety, based on some kind of fraudulent misconception. Nor that all AI is reducible to a semantic-free lookup table, or that the Turing test is worthless, all of which you’ve either explicitly claimed or which arise directly from your premise.

I have realized that, broadly speaking, the irrelevance of your position is worse than that, because I already showed that computational science labs are even looking at semantics. And baby steps are being made as computers learn to play other games, and to do so they have to learn and modify their replies constantly. Again, it would be easy to show those guys that they are not doing useful things, or are hitting a wall and should give up already. But they seem to be happy to show many that Eppur si muove.

Well, look at me jumping back in the moment somebody presents the same old arguments again… But I don’t think you’re really representing the state of this discussion quite fairly.

I have explained why the Chalmers paper doesn’t apply: my trivialization argument explicitly takes his counterfactual constraints into account, and still goes through. As for the lookup table, I can only reiterate—if it entails that believing that a given program has mental states entails a belief that an equivalent lookup table likewise has those states, then well, that’s just a reductio of the original belief.

I do think that the idea that intelligence must be open-ended has some merit, though; the notions ‘behaves intelligently’ and ‘behaves intelligently for an hour (or however long)’ clearly diverge, and it’s possible that genuine intelligence is only associated with the former, in which case lookup tables don’t qualify as intelligent. Now that I think of it, I wonder why I’ve never seen the notion in the literature… (As an interesting, but only tangentially related point that I wanted to mention earlier, the only well-defined notion of general intelligence that I know of, as a kind of ‘universal problem solver’, ends up depending on Solomonoff induction—which makes it uncomputable. Although there’s a hope to find sufficient approximations that may be computable.)

I would just like you to at least acknowledge that I’ve provided a counterargument to that: namely, that the human mind evidently is capable of singling out f uniquely, and if it does so computationally (as the computationalist must hold), then it’s possible to implement f uniquely by computable means. This in no way depends on a semantic notion of computation, but merely on the idea that whatever the mind does, is done computationally.

Since it’s not possible to implement f in this way uniquely—any system purporting to implement f in this way will be subject to just the same argument I presented initially—, even appealing to ‘emergence’ doesn’t get you out of the quagmire. (The issue that nobody has any sort of clue of how this ‘emergence’ is supposed to happen notwithstanding, of course.)

Furthermore, the argument also establishes that ‘more computation’ doesn’t help: it will only increase the options for ambiguity. If you add further systems in an effort to get them to interpret the states of my box—to get, somehow, some glimmer of semantics to ‘emerge’—then the problem will just as well apply to these systems, and the total ambiguity will increase combinatorially, rather than decrease, with further complexity. Adding more map doesn’t help; it’s never going to become territory.

Besides, appealing to emergence fails to solve the problem of objectivity: no matter what, if my argument is correct, then computation isn’t an objective property of a system (unless you want to again appeal to the trivializing notion of computation that’s really just the behavior of the system). But whether a system is conscious is an objective property.

If I interpret that straightforwardly, it’s a contradiction: you claim to have implemented f, acknowledge that I can use a different interpretation corresponding to f’, but claim that that’s irrelevant because the computation is ‘in the logic gates’. But if that’s the case, then the same holds for f, and you have in fact not implemented f.

I’ve never implied any fraudulence at all. I merely claim that cognitive science isn’t immune to what’s happened to literally every scientific paradigm so far, namely, that it will be supplanted by something more accurate. Science progresses by re-examining its foundational assumptions, and revising them if necessary; insisting that what we think now has some special claim to truth is antithetical to scientific progress.

Anyway. None of the above is new; everything has already been presented in the discussion. But maybe it helps a little to bring it back into memory.

I’m going to spare you any replies to your other points (other than a matter of terminology at the end) because I agree it would indeed be repetitive, but since you ask that I at least acknowledge this one, let me address that.

I acknowledge that you’ve provided a counterargument, but I maintain it’s one that’s easily refuted because it’s not true. On one hand, obviously the human mind is open to all possible interpretations of the outputs – f, f’, and an infinity of others, since you already made two interpretations yourself and I showed how it could be extended to an infinity of other interpretations. We can cheerfully make any of them. So what exactly do you mean by “singling out f uniquely?” Let me try to answer that rhetorical question.

The “unique interpretation” that you’ve been obsessing over comes about from purposefully constraining the interpretation to a particular purpose, such as binary addition. That this purposeful constraint can be done computationally is dramatically evident from the fact that such a constrained interpretation of a binary adder is necessary if it’s part of, say, a large robot’s visual mapping system, where the f interpretation sends it neatly through a doorway whereas the f’ interpretation sends it smashing through a wall. If it finds its way through the door instead of wrecking our lab, we can say that it’s making a unique interpretation of f, and this is exactly the sense in which the human mind chooses the one that happens to be useful (or IOW, “correct” in a particular context). This is objective – we either have a giant hole in our wall or we don’t – and it can be observed without introducing any combinatorially increasing ambiguity about “interpretation”.

There may well be circumstances on a different day when something like your box is used to compute f’, because that happens to be a useful function for some different purpose, and that straightforward fact shouldn’t give any philosopher the vapors. To make an imperfect analogy, it’s not much different than the way that sliding around the sticks on an old-fashioned slide rule means entirely different things depending on which of the function scales you were looking at. The slide rule didn’t care, but it did give you useful answers. The entire argument strikes me as vacuous.

Meh, I was using “fraudulent” in the loose derogatory sense of “bogus”. Of course the progress of science occasionally requires the revision of what were thought to be foundational assumptions. However, as I’ve said before in climate change debates in response to claims like “science has been wrong before”, sure, but it’s actually very rare that truly foundational underpinnings have to be overturned, and on the rare occasions when it does happen, it’s supported by a corresponding body of solid empirical evidence. Honestly, I really don’t think you appreciate how absolutely central the computational view is to cognitive science today. It’s somewhat akin to the idea that yes, CO2 really is a greenhouse gas responsible for post-industrial warming.

The thing is, while I’m computing f, I’m not computing f’; the symbols of the box have a particular interpretation for me, and only that one.

No. Just as the behavior of the box isn’t changed whether it computes f or f’, so the behavior of the robot won’t be changed (that is rather the point of the whole thing). (And if you’re again intending to couple the behavior to the computation uniquely, then—trivialization of computationalism to behaviorism, behaviorism long since refuted, yadda yadda.)

I’d like to preface this by saying I completely respect any decisions you make not to reply. You do not have to convince me that you’re right. I do not have to convince you that I’m right (and I don’t expect to). I’m just laying out my perspective on the debacle that lies before us.

That’s not what I’m claiming, of course. It’s actually pretty apparent that you are not parsing me correctly, for one reason or another.

When did you establish that a computation is necessary for there to be interpretation?

(Okay, I mean, you’ve actually explicitly established that a computation is not required to do interpretation, since you assert that human brains aren’t computational and you assert that human brains are capable of doing interpretation. But that doesn’t count because special pleading/brains are magic.)

Just for fun, I’m going to pretend that you have in fact explicitly established that a computation is not required to do interpretation. What does that do to your argument?

Well, the first thing it does is open up the possibility that the initial interpretation doesn’t need a calculation to do the interpreting; it just requires something to do it. For example P. P is established as existing in the argument, so P could supply the interpretation.

This leaves us with this possible scenario:

P interprets its behaviors as being computation via some straightforward mechanical interpretation method I.
P is therefore computing. Call the computation C.
C, which exists, may (or may not) also examine its own workings, and join in in the interpretation that is going on, with interpretation I2.
Additionally, the computation may (or may not) produce outputs, which a human (H) can observe, and then also join in the interpretation party, with interpretation I3.

It’s worth noting that I, I2, and I3 need not be the same interpretation, and indeed probably aren’t. I, C, and H all have different perspectives on what is going on within the box, and given the different information they’re receiving it’s probable that they will be drawing different conclusions.

It’s also worth noting that thanks to their differing interpretations, I, C, and H might disagree about what the computation is. I think that you believe that that means that there are three separate computations that are going on within the box, but I’m not certain your definitions support that. In any case, H’s interpretation doesn’t invalidate C’s (or I’s), regardless of how much your argument might wish it did.

You haven’t shown that there’s an infinite regress.

If you’re talking about where I wrote the two statements A ⇒ B and B ⇒ A one after the other, that wasn’t an argument. I mean, obviously: there was no conclusion.

You seem to think it’s an argument, but you’re wrong. You have taken a class in formal symbolic logic, right? (Haven’t we all?)

As for whether I have or haven’t made an argument about whether “a system” computes, it probably depends on which particular system you’re talking about. If you’re talking about anything complex, I have volumes of personal experience that tells me that both at the hardware and software levels these things aren’t actually a single “computational device” at all, but many of them strung together, and that each of them interprets the output of the others, and/or the inputs themselves.

So it’s not “C is interpreted as doing a calculation by H”. It’s “C1 produces output interpreted by C2 which is interpreted by C3 which is … which is interpreted by C237 which is interpreted by C238 which is interpreted by H”.

I mean, no human interprets the output of a computer’s processor. They interpret the output that the monitor displays based on how it interprets the output of the video card, which is a result of the video card’s computation’s interpretation of what the processor is outputting. We all know this; we all know that there are many, many layers of interpretation going on. (Unless admitting that destroys the argument a person is making, I suppose.)

What does this mean for your argument? Well, it means that your desired causality runs backwards. H interprets C238, which ‘causes’ C237 to start being a computational device, which ‘brings it to life’ and makes it capable of interpreting C236, which causes that process to be transformed into a computational device, and so on and so on all the way back to C1.

Of course C1 shut down ages ago. It’s no longer even running; it did its thing, produced its output, and then ended; done and gone. Because in reality, outside your fallacious insertion of backwards causality, any causality that exists runs forward.

So, yeah.

Yeah, I’m pretty sure these guys aren’t making the same arguments you are. Among other things one would hope their definitions are stable.

These two statements are contradictory - in the first “[being] interpreted as computing something” is necessary and sufficient to be considered a computational device, and in the second you’re refusing to accept that a printed lookup table computes something, despite it definitely being possible to interpret it as outputting computed information.

Seriously, if we take your special pleading seriously, then when you look at the output of a computational device through a video camera it’s no longer a computational device, because you’re not seeing the output, you’re seeing a transmitted image of the output. And that’s stored data, not computation.

Don’t be absurd; a blank piece of paper is a rectangular sheet of paper in a solid color (usually off-white), which can definitely be interpreted as having meaning. (Example interpretation: “This is telling me I need to replace my ink cartridge.”)

You’re only pretending otherwise because lookup tables and blank paper underscore how crappy your not-really-defined definition of “computational device” is.

I’ve dispelled it by demonstrating that your explicit example doesn’t support the conclusions you draw from it. You’ve done quite a bit to attempt to dispel my arguments on this subject; unfortunately you’ve been less than convincing.

Your assertion that multiple interpretability of a system is somehow a problem for the system is a false assertion.

Not only that, but it’s obviously false. By your argument, the fact that you interpret your argument as being great and that I interpret your argument as being crap should demonstrate that, because your argument is coming out of you, your brain doesn’t function. As in, poof - disappeared in a puff of logic.

Multiple people interpreting the output of your brain differently - Ok.
Multiple people interpreting the output of your box differently - Somehow bad.
Special pleading.

And no, there’s nothing about your “multiple interpretability is a problem” assertion that makes it exclusive to computational devices. Your special pleading is, in fact, special pleading.

Oh, you’re still hewing to that specific implementation of your box? I thought we’ve moved back to the general form of your argument.

That specific implementation of the box is still interpreting the physical positions of its switches, obviously, but no part of the calculation is observing and interpreting the signals it’s sending to the lights (except the lights themselves of course, and they’re each only getting a small part of the picture). So it’s not bothering to interpret its output signal as f or f’ or anything like that. It’s too busy paying attention to the switches.

Failing to point out a single example? Remember when I mentioned those things called interpreters? Yep. That happened.

And you accusing me of forgetting your arguments. :dubious::rolleyes:

You’ve repeatedly shifted around all your definitions like the pea in a shell game. And your argument is fallacious and doesn’t prove anything.

Have a nice day.

No!! Obviously, the binary adder is going to produce exactly the same bit string outputs from the same bit string inputs regardless of interpretation, which was my point, and also underlies the point that f and f’ are computationally equivalent. There’s no argument there.

But central to your point about f’ being different in the first place is that f and f’ are not equivalent at all but represent different functions, as you were at pains to point out in #93. The only way this is meaningful, given the obvious fact above, is that some part of the robot’s image processing software (that is, the interpretive layer, not the hardware) is interpreting positional bit values, and doing so in different ways. The function f is conveniently already in hardware, but let’s ignore that for the moment, and level the playing field by assuming that there’s a subroutine that produces either f or f’ results. Given the inputs, say, (2,0), f’ returns the bizarro result 4, whereas of course the binary addition version f will return 2. Hence, if the environment mapping software was expecting the subroutine to return f results, and your bizarro-world bit-position interpretation returns f’ instead, the calculations underlying the robot’s image map go to hell, and the robot goes through the wall, causing untold amounts of damage to the AI lab.

Thus, the robot’s behavior absolutely does change, precisely from the manner that you’ve chosen to differentiate the two functions, even though the computation underlying them is identical and the bit string inputs and outputs are identical. And only by constraining the software to the f interpretation uniquely does the robot work as intended.

**HMHW**, I was reading some of what Piccinini has written and he has some arguments supporting computation without semantics and, if I remember correctly, without interpretation. I’m on my phone so I’m not as efficient with posting or including links, but I assume you are already familiar with his arguments. What is your opinion about his position on this topic?

I haven’t been trying to; these discussions never seem to lead to something like that. I’d hoped to at least be able to explain my position to you, but even that’s not looking good, seeing as how you still are fundamentally confused about very basic notions I’ve explained over and over.

I didn’t; I don’t believe it is—in fact, the argument obviously shows that it’s not: whatever brains do when they interpret things isn’t computational. You, however, claim that anything that a brain does must be computational; then, so must interpretation be, seeing how it’s something brains can do. Consequently, whenever there’s an interpretation, there’s a computation that produces that interpretation.

Of course, that’s where the infinite regress comes in: because in order for there to be that computation, there needs to be another interpretation giving rise to it, and so on. This is a strict logical consequence from the fact that interpretation is necessary for there to be a computation at all, and interpretation being done via computation (i. e. the computationalist hypothesis); your ‘nuh-uh, it’s not’ isn’t really a convincing argument.

The definitions straightforwardly say so: all three will associate different computable functions to P, which therefore implements different computations.

Why would I wish that? It’s precisely the fact that all interpretations are on equal ground that’s the problematic bit (for computationalism, that is), since it means that there’s no objective fact of the matter regarding what computation a system instantiates—in opposition to the fact that there’s an objective fact of the matter regarding whether it’s conscious.

See above. The regress is a straightforward consequence from an interpretation being necessary for there to be a computation, and (on computationalism) computation being necessary for the interpretation. That means that each further computation needs to be interpreted by a further interpretation, and so on, ad infinitum. There are only two ways out of this: claiming that some computations don’t require interpretation (which is the challenge of showing a computation being uniquely implemented by a system), or claiming that some interpretations don’t require computation (which is the reasonable thing to do, since there’s no reason at all to believe that they should).

I’m glad you understand that now. But let me remind you that you very much intended to use it as an argument, establishing that P could use C to implement I, making it compute C. Notably, you’ve repeatedly made claims like:

Thus, you claim that a system could simply interpret itself as computing. But that’s exactly the fallacious deduction of ‘P implements C’; you’re proposing:
P implements C because there is an appropriate interpretation I of P.

And you’re furthermore claiming that P by virtue of implementing C provides that interpretation (‘observes itself’), that is:

There is an appropriate interpretation I of P because *P* implements C.

Or, take your bicycle-chain example.

You offer this up as support for the idea that such a circular dependence could work, that the movement of the upper half gives rise to the movement of the lower half, and vice versa. Thus (or so you want to infer), the same circular dependence is likewise a-OK in the case of the computation giving rise to the interpretation that gives rise to the computation: the system computes by observing itself. But, just as the physics is wrong in the bicycle chain, so the overall logic here simply doesn’t work.

As pointed out many times now, it’s not a causal relationship. And of course, this is a straightforward consequence of my argument—as I pointed out myself:

When I interpret the stored values as referencing either f or f’, I’m interpreting the computation the box performed, even if the box has long since ceased to exist. And that’s absolutely no problem: after all, it’s not an objective property of the box, so whether I now interpret it as having the property (of implementing f), or whether I’m interpreting it as having had that property, makes no difference whatsoever.

My definitions—as repeatedly pointed out—have remained exactly stable since the first post I made in this thread. And yes, they are making the same argument as I am—it’s the class of triviality arguments referenced again and again.

It is interpreting something as performing a computation—namely, the box, whose values were merely stored on the paper. Honestly, if I write down ‘2 * 2 = 4’, are you going to claim that the paper on which it’s been written has calculated that? No, of course not; I calculated it, and then wrote it down. The paper can be considered an extension of the output of the box, if you will—the lamp states are merely translated into signs on paper. If you take the box itself, you also wouldn’t consider the lamps to have computed the function, would you? The lamps merely show the result. As does the piece of paper. What result, of course, is again a question of interpretation.

And via that stored data—which is just the output the computation produced, no matter whether it’s interpreted instantaneously or a hundred years later—I interpret the computation.

Say somebody gives you a paper, on which is written ‘have a nice day’. Do you, astonished at this miracle, conclude that the paper has just wished you a nice day? No, of course not. It’s the person who wrote down those words that did so.

Sure. But for any consistent interpretation of a system as implementing any nontrivial function, you’re going to need something with more states; in my interpretations, clearly, each logical state is mapped to a physical state of the system (a switch being up, a lamp being on). But the paper only has one physical state (unless you manipulate it in some way; then, of course it can be used to compute, as can soap bubbles). So there’s no map of the kind I’ve been using throughout that takes it to any non-trivial computation.

I’ve nowhere asserted that it’s a problem for the system. It just means that computation is not an objective property of the system, unlike consciousness, and hence, that it doesn’t serve as a foundation for the latter.

Really, again? Whether an argument is true or false isn’t a matter of interpretation; if we interpret this differently, then that just means that one of us is wrong. There’s an objective fact of the matter here.

No, still not, I’m afraid. It would only be a problem (as I also have pointed out dozens of times by now—but oh well) if the brain did computation. If I claimed that the brain computes, but it’s a sort of computation that doesn’t need an interpretation in order to be definite, then yeah, that would be special pleading. But I’m not; indeed, the very point of the whole exercise is to demonstrate that the brain can’t do what it does by computation (alone). So, only systems that do what they do by computation are subject to interpretational issues; hence, the brain isn’t. And again, I’m not just assuming that—it follows from the simple demonstration that yes, I can interpret the box as implementing f. Which means computation is interpretation-relative; which means computation can’t supply interpretations (regress); which means (since brains can supply interpretations) brains do something that can’t be done computationally.

You’re right, it’s not only applicable to computational devices, but to everything whose function is interpretation-relative; computational devices just form an example of that class (which may, or may not, be bigger). But what we do know is that the brain can’t be doing anything interpretation-relative (as otherwise, regress!).

In what sense is it ‘interpreting’ the position of the switches? That position physically causes it to do something, but obviously, physical causality isn’t interpretation (the stone, upon being kicked, does not interpret that as a signal to trace out a certain parabolic arc; it’s simply flung up).

And once more, and as futilely as the dozen times before, you’re more than welcome to supply a system that does perform the interpretation there.

F

Yes, indeed. As you always do, you made an assertion that you didn’t bother justifying. If some system implements a piece of software known as an ‘interpreter’, then, in order to do so, it must be the case that that system is appropriately interpreted to implement said ‘interpreter’. What supplies that interpretation? It can’t be the system itself: that leads to the circular nonsense. It can’t be another computational system: that would itself need to be interpreted.

My definitions (as should be amply illustrated by the numerous times I had to quote them back to you verbatim) have stayed constant throughout—even though your interpretation of them may have changed.

Wait, what? I guarantee I’ve never said anything like “Brains do things via computation, therefore everything that a brain does can only be done by computational methods.” That is amazingly fallacious.

Are you arguing against a strawman - an imagined image that you see when you look at my posts? It would explain a lot.

I’m getting fucking tired of you lying and pretending I haven’t made arguments. Yes, I get that your confirmation bias is literally preventing you from remembering things that prove you wrong, but fuck is it annoying.

Here’s some more stuff for you to fucking “forget”:

First thing: by your definition of it, computation is not a description of behavior. You’re very explicit about this - an object can be doing exactly the same thing, without change in behavior, and start or stop doing computations based on who’s glancing at it.

In your world, “computing” is a term akin to “being looked at”.

On the other hand, whether an object is interpreting something is a behavior.

This means that a thing can definitely do interpretation irrespective of whether it’s being looked at/interpreted. Which means that things can definitely do interpretation even if they’re not “computing”.

Which means that things can start operating, which produces some outputs, and then the processes can start interpreting their outputs themselves. At this point it seems they will earn the label “computational device”, but having that label won’t change anything about what they’re doing.

There is no infinite regress.

You do realize that you just conceded that machine interpretations are valid and that computations can be self-interpreting without infinite regress, right? That’s what’s happening in this example.

Person A looks at a picture and sees two faces.
Person B looks at the picture and says “Huh, a vase”.
A: “I see two faces, actually.”
B: “The picture doesn’t have two faces, because I don’t see it that way.”
A: “Just because you don’t see them doesn’t mean they’re not there.”
B: “Actually the fact that I see them means you don’t see them either, because if they’re not objectively seen by all, they’re not there at all!”
A: “What?”
B: “Didn’t you know? If something is objectively real it’s impossible to interpret it in any other way. It’s literally impossible to draw different conclusions from The Truth.”
A: “What?”
B: “It’s true! If I look at a sleeping person and think he’s dead, that means he’s dead, because if he had consciousness it would be impossible to interpret him as anything other than a conscious being.”
A: “What? No way, that’s crazy.”
B: “Also, I don’t think you’re conscious either! You’re just a philosophical zombie!”
A: “What? No! I am too conscious!”
B: “Just what a philosophical zombie would say!”
If C is experiencing consciousness, and H disagrees and says that C is just a paperweight, that variable interpretability of C’s observable behavior doesn’t mean that C’s consciousness isn’t objectively occurring; it just means that H isn’t in a position to notice it. And when H (or M, or H, or W) says that their ability to see something as not being consciousness means that the consciousness isn’t objectively occurring, then they’re just wrong.

You clearly have no idea what I’ve intended.

Wow, you are all kinds of not getting it.

A⇒B and B⇒A alone are not an argument, because an argument is not an argument until something’s been inferred from the premises. But guess what you could infer from A⇒B and B⇒A: A⇔B. Guess what you can’t infer from them: an infinite regress.

But honestly that’s all beside the point, because we’ve now unearthed that the whole “interpretation because computation” thing was actually a strawman, derived from you not getting that, just because computation can result in interpretation, there’s nothing saying that computers need to have a foreign agent observe them before they can start observing and interpreting things themselves.

Yeppers, a computer can start interpreting things before it starts computing - though if the thing it’s interpreting is its own previously stored outputs, then it also (by definition) starts computing the instant it starts interpreting.

The interpretation and computation start at exactly the same time, because the conditions for those terms to apply are both met at exactly the same time. All without infinite regress or any circularity whatsoever. Just like I’ve been saying the whole time.

You may be as tired as you wish, but this sort of comment needs to be posted in The BBQ Pit. Do not accuse other posters of lying in Great Debates. If they have posted something that contradicts or denies a post you have made, then quote the error and quote the fact. Do not accuse others of lying.

[ /Moderating ]

It occurs to me that there are areas of potential misunderstanding here that I should clarify.

First of all, the distinction I draw here between hardware and software is just based on the fact that ordinary binary arithmetic is of course conventionally a property of hardware, so a bizarro-world interpretation like the function f’ implies a software interpretation. There is obviously no fundamental distinction between computations performed by hardware, firmware, or software.

Secondly, some clarification is in order about the practical difference between the functions f and f’ in this context. If one imagines that for some reason the hardware lacks a binary addition operation and a programmatic function has to be provided for this, it’s obvious that the programmatic functions F and F’ will return exactly the same binary results for the same inputs (back to my “computational equivalence”), which I think is what HMHW was alluding to earlier. But that’s not the point here.

The point, of course (going back to the definition of the functions in post #93) is that the bit reversal creates a completely new function, f’, which overlays a bizarro-world interpretation on a binary adder and consequently creates a new set of input pairs and outputs that doesn’t represent addition at all. And that’s perfectly fine, but in order to maintain this distinction, the real-world function F’, as an actual programmatic function, must actually conform to the definition in post #93. It must impose the required interpretation. That is, we don’t care that both functions take binary 00 and 01 (for instance) and identically return the binary string 001; we care that F takes (0,2) and returns 2, while F’ takes (0,2) and returns 4, exactly as described in that post’s function definition, which was the whole point of it.
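To pin that down, here is one way the two programmatic functions could be written out (a rough sketch of my reconstruction only, assuming two-bit operands and a three-bit result):

```python
def rev(n, width):
    """Re-read an integer's bits with the significance order reversed."""
    return int(format(n, f"0{width}b")[::-1], 2)

def F(x1, x2):
    """Conventional binary addition of two 2-bit operands."""
    return x1 + x2

def F_prime(x1, x2):
    """Same physical bit strings, every string read in reversed bit order."""
    return rev(rev(x1, 2) + rev(x2, 2), 3)

assert F(0, 2) == 2
assert F_prime(0, 2) == 4   # the divergence described above
```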

So if the robot’s logic depends on conventional binary arithmetic, implementing F uniquely is critical. The difference between F and F’ is that the former might define a benign robot that gracefully exits through the doorway, while the latter loses its way, smashes through the wall, and attacks you with a can opener.

Then I’m afraid you still haven’t understood the argument. The box doesn’t produce bit-string outputs; it produces lamp patterns. How these are interpreted—which lamp states are mapped to what bit value—then determines what computation is being performed. The robot’s behavior is the lamps lighting up in response to the switches being flipped (which correspond to sensory input, say).

So f is not in hardware any more than f’ is; all the hardware does is light up lamps in response to switch flips. What we consider those lamp states and switch flips to represent is what determines what’s being computed.

The crucial point is that in the state that the box is interpreted as (2, 0) –> 2 under f, it will be interpreted as (1, 0) –> 2 under f’. That state is (up, down, down, down), and the resulting lamp state is (off, on, off). So the physical state (up, down, down, down) is interpreted as representing the tuple of numbers (2,0) under f, and (1, 0) under f’; the lamp state (off, on, off) is (coincidentally) interpreted to mean the same thing under both. This is nothing different from interpreting ‘gift’ to mean ‘present’ or, if you’re German, ‘poison’—the same symbol having different meanings.
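For concreteness, the two readings of those same physical states can be written out as follows (a toy sketch using my own encoding, with ‘up’/‘on’ read as 1 in the conventional direction):

```python
def read_f(states):
    """Conventional reading: most significant position first; up/on = 1."""
    return int("".join("1" if s in ("up", "on") else "0" for s in states), 2)

def read_f_prime(states):
    """Same symbols, significance order reversed."""
    return read_f(tuple(reversed(states)))

switches = ("up", "down", "down", "down")   # one physical input state
lamps = ("off", "on", "off")                # the resulting physical output

assert (read_f(switches[:2]), read_f(switches[2:])) == (2, 0)              # under f
assert (read_f_prime(switches[:2]), read_f_prime(switches[2:])) == (1, 0)  # under f'
assert read_f(lamps) == read_f_prime(lamps) == 2   # same lamp state, same value here
```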

Neither is any more right than the other; both yield the same physical behavior (the same lamps lighting up in response to switches being flipped). Consequently, likewise, the robot’s behavior will not favor any interpretation, because that’s how these mappings are designed—to take physically identical behavior to distinct computable functions and hence, show that what function is implemented is not implied by the physical properties of a system.

This again just misunderstands things. The environment mapping software is itself interpretation-relative; what it ‘expects’, in as much as that’s a good way of talking about it, will thus depend on how it’s interpreted. More computation doesn’t make the interpretation any more definite!

Take begbert2’s attempted example of a further system interpreting the output of my box. We can think of it as another light (perhaps on a different box), that’s supposed to come on whenever the lamps on my box represent an even number. Say you implement it such that, whenever the output of the box is even under the interpretation yielding f, it comes on—that is, whenever the third lamp is off.

But you can easily apply a different interpretation. If you consider ‘lamp on’ to signal ‘0’, and ‘lamp off’ to signal ‘1’, so that even and odd numbers are reversed, does that now mean you’ve made an error, since the even/odd detector will then fire on what you now consider odd numbers? No: it just means that the interpretation of that additional light must change accordingly. Now, ‘lamp on’ there denotes ‘odd’, and ‘lamp off’ denotes ‘even’.
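A toy version of that detector and its two readings, to make the point explicit (again with made-up encodings; nothing here is anything anyone has actually built):

```python
def detector(lamps):
    """The added light: physically, it simply comes on when the third lamp is off."""
    return lamps[2] == "off"

lamps = ("off", "on", "off")   # the box's output state from before

value_on_is_1 = 2   # reading 'on' = 1: binary 010 -> even
value_on_is_0 = 5   # reading 'on' = 0: binary 101 -> odd

# The detector's physical behavior is the same either way; only what its
# light *means* changes: 'even' under the first reading, 'odd' under the second.
assert detector(lamps) and value_on_is_1 % 2 == 0 and value_on_is_0 % 2 == 1
```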

That’s exactly the point of how more computation doesn’t help: it will be just as interpretation-relative, and, in fact, just open up the door to more ambiguity.

Now consider the additional lamp to guide the robot’s behavior. ‘Light on’ will cause it to ‘turn left’, and ‘light off’ will cause it to ‘turn right’. The lab door is to its right, the wall to its left. Does a change in the interpretation now result in a change of behavior? No, of course not; the behavior will stay exactly the same. The light, after all, will come on under exactly the same circumstances, despite its purported ‘interpretational’ role.

I’m not sufficiently familiar with Piccinini’s mechanistic account to give a complete answer, but in a way, it doesn’t really matter too much. The example I gave just formalizes what we usually do when we compute: when we, say, punch numbers into a pocket calculator. We input symbols standing for certain numbers, and interpret outputs. That, I’ve shown, can be done in different ways.

Any account of computation wishing to resist this would then either have to claim that this isn’t right—but then, it just doesn’t capture computation as usually understood. Or, it would have to single out a given computation as being ‘the true one’—but this can only be arbitrary, as all the possibilities are connected to the box in exactly the same way. Or, it would have to argue that none of them are right—but then, it’s hard to say if anything of computation that goes beyond the mere physical behavior of the system gets left over.

I’m not saying that none of this can be argued for. But it seems to me that it’s perfectly valid to reject all of them by the mere pointing out that yes, when I use my box to compute f or f’, that’s just what computation is, and if an account of computation fails to support that, then it’s just a bad account.

You’ve claimed that:

And more fully:

So, a simulation of a brain can do anything a brain can do; which, since a simulation is a computation, means that anything a brain can do can be done by computation. It doesn’t have to be doable only by computation; it’s enough to claim that whatever a brain does, there’s a computational equivalent.

So if the above is right (that everything a brain can do has a computational equivalent), then that entails that whatever’s being done to do the interpretation can be done by computation. But that’s wrong, as my argument shows; hence, what the brain does when it interprets something can’t be done by computation, and consequently, a simulation of a brain must lack that capability.

Seriously, the entire point of my argument is that every computation must be rooted in something ultimately non-computational. If you’re now conceding that, then you’re throwing out the claim that what a brain does can be done by a computer. If you’re claiming now that there must be a non-computational element to brains in order to ground computation, then that’s exactly the claim made by my argument.

I have (from the beginning) accepted that computations can interpret things; the problem is, however, that they need to be themselves interpreted in order to do so. Which, if that interpretation is done computationally, leads to the regress; and if not, leads to computationalism being wrong, as there’s something that can’t be done by computation.

Again, this is false. Both are right about what they see.

Whereas here, one is either right or wrong to call that person dead. Seriously, I don’t get what you don’t get about that.

Exactly. Which is why consciousness differs from computation: for computation, an interpretation is needed, while consciousness is objectively definite.

You attempted to demonstrate that a system could self-interpret into computing something, by performing a computation (or at least, something that has a computational equivalent) that instantiates that interpretation. That is, you were trying to argue that the computation exists because the interpretation does, and the interpretation exists because the system instantiates it. The latter, however, is equivalent to some computation being implemented that instantiates the interpretation (provided that everything has a computational equivalent, as you explicitly claimed), and hence, the whole thing constitutes an attempt to infer that the computation exists from the circular argument.

Besides, note that this is actually incompatible with your new claim that computations aren’t necessary for interpretation: if computations are equivalent to interpretations (more accurately: if the computation that interprets P is equivalent to the interpretation that interprets P as computing), then of course the interpretation is computational.

In particular, the logical equivalence you claim there is explicitly in contradiction with this:

All my argument needs is for everything a brain does to have a computational equivalent; because that entails a claim that interpretation can be done, in every case, by computation. By showing that there must be cases where interpretation can’t be done by computation (as you now seem to agree), it’s shown that not everything a brain does has a computational equivalent, and thus, in particular, that there are some aspects of the brain that a simulation of it lacks.

Brilliant comic from Wondermark, relevant to the OP:

Maybe the gap between a symbol and its meaning isn’t real. If the brain stores sensory input as a compression and a mapping back to the original sensors, and stores relationships between that chunk of data and other chunks of data, including transitions over time and so on, then the process of computation involving those chunks of data, perhaps to perform some prediction, is just using chunks of data whose only meaning is in their relationship to how sensory input transitions over time.

It seems that the system makes sense in a specific environment and that no meaning is required unless we consider the relationship to the external environment (e.g. sensor data) to be the meaning, which still avoids the issue.

In other words, when the mind wants to plug in input to be manipulated and then review the result, it is just mechanically operating on chunks of data and relationships and rules that have been acquired by interacting with the environment, and there is no higher-level interpretation required during this process.

Let me stop you right here. I cannot even begin to imagine what you thought I was talking about here. The box discussion was left behind quite a few posts back, and the switches and lamp states interpretation issue was resolved many dozens of posts back.

Let me recap because I think I can do it more succinctly than my somewhat long-winded approach just previously.

First of all, the matter of interpreting switches and lights was resolved by my much earlier statement that in order to have a meaningful discussion and move the argument forward we need to have a common view of how switch positions and lamp illumination map to bit values – i.e., a physical specification of the box. With this, we can usefully discuss the problem in the more meaningful abstract terms of bit strings. Moreover, the table of bit values provides (in my argument) a complete description of the computation, and the table plus the physical specification provides a complete description of the physical implementation.

What I was discussing in the prior several posts was no longer the box, but the implementation on a real computer of a programmatic function F that does the same thing as your box (capital “F” to distinguish the programmatic function from your mathematical function f). What we find here is that in a real computer, the interpretation of bit-position values is no longer subject to arbitrary whim but is fixed by the architecture of the arithmetic and logic unit (ALU). I don’t think a great deal of analysis is needed to demonstrate that invoking the programmatic function F(x1,x2) will return the values you set out in your definition of f in post #93, and not f’. Thus, f has been implemented uniquely.

Now your argument is going to be that any layer of interpretation-fixing just moves the interpretation dependency up to the next level, and that this continues in an infinite regress. But you’ve failed to show definitively that this is the case, or why it should be so, or how it ever ends, other than by positing a magical non-computational intelligence. In fact I regard the claim as rather an absurdity, like the homunculus fallacy itself. It contains a limited truth because, as we’ve seen, layers of interpretation-fixing are indeed necessary, but rather than an infinite regress I maintain that relatively few layers are needed before the computation produces objectively observable real-world phenomena and hence reaches its end state.

What are these phenomena? It might be an autonomous intelligent robot finding its way around a room, hence my rather long-winded previous example. It might be a computer playing music, displaying a video, or interacting on an equal footing with human players in a game of Jeopardy. None of those things require interpretation, at least not in any meaningful sense related to the interpretation-fixing problem of computation and the claim of infinite regress. **Each observable phenomenon is the final end state of the computation, and any further “interpretation” we might feel like inflicting on it is entirely separate and independent from the computation that produced it.**

I would maintain that a computer displaying text and numbers on a screen fulfills this criterion, though the argument is more nuanced because the text and numerals are explicitly intended for human consumption. But ultimately the end-state phenomenon argument here is the same as the phenomenon of the autonomous robot doing incontrovertibly physical things, and I find any argument here of interpretation-dependency to be entirely specious.

This view is also supported by its compatibility with computational theories of cognition, so that one can posit that perhaps the most important computational end state of all is the instantiation of human cognition, of thought, and of creativity and imagination. Furthermore, it’s not burdened with the basic problem of your position, that all computation requires the interpretation of a human observer, who makes the interpretation through some magical “non-computational” means, which is about as close to invoking magic and mysticism as anything I’ve ever heard seriously proposed.