Absolutely. However, the idea that consciousness is analyzable entirely in terms of behavior—known as ‘behaviorism’—is widely considered to be very problematic, and has had essentially no defenders since Noam Chomsky’s attack in the sixties. Hence, alternative approaches were proposed—such as functionalism, of which the computationalism being discussed here is a species.
Again, that’s right, but beside the point. Computation is usually regarded as, in some way, involving abstract quantities—something in the computer that pertains to, say, water levels and the like. In order to achieve that, we have to identify some physical states—voltage levels—that connect, in whatever way, to these abstract quantities. Any such connection involves an interpretation.
The computer’s ability to regulate the water level does not depend on this interpretation, however. What’s present here is essentially a sophisticated control circuit, that can be completely explicated in terms of cause and effect—voltage levels causing servos to activate, sensor activities increasing voltage levels, and the like. This is exactly the behavioral level of things; but, as it’s generally thought the behavioral level is insufficient to account for conscious experience, there must be something else coming along—and on computationalism, that’s the abstract quantities the computation is concerned with.
I don’t have a lot of time right now, but I will ask - what do you imagine the term “objective property” means?
Because from the sound of it, it just means “the thing is doing that thing, for realsies.”
Oh, and your device is probably running some sort of web browser right now. Is the fact your device is running a web browser an “objective property” of your device?
A property such that the claim that a system possesses it is objectively true (or false, as the case may be). Something is objectively true when its truth conditions can be met without reference to a subjective viewpoint.
It’s easier with examples, though. Objective properties are like Newtonian mass: a system has a mass m, and saying it doesn’t just means you’re in error. Subjective properties are like the meaning of the word ‘gift’: you can say it means ‘present’ (as it does in English), while I can say it means ‘poison’ (as it does in German), without either of us being objectively right.
Of course not. After all, it requires me interpreting the signs being produced by the device in a certain way to claim that it runs a web browser. This is, I grant, very hard to see with signs this familiar—words we just immediately understand, pictures that immediately have meaning to us. That’s why I use the box example (and also, why you ignore it): there, our intuition that the lamps just mean something is less strong. But the principle of the thing is the same, in either case.
So when you say that brains are conscious, you’re saying ‘a brain generates consciousness, and saying it doesn’t just means you’re in error.’
Um.
Okay, I get where you’re coming from, but I hope you realize that you’re just saying “You can’t prove that brains aren’t conscious; that’s absurd. We know that they are conscious. Don’t be dumb.” Which is fair - and exactly why I can substitute brains for computational devices in your argument and disprove it via reductio ad absurdum. The formal logical structure of the reductio is:
Premise 1: The argument can be used to prove that A=false.
Premise 2: A = true
Conclusion: A ∧ ¬A - a contradiction. And poof! One of the premises is false. If you are certain that Premise 2 is true, then Premise 1 must be false - and thus the argument must fail to prove what it claims to.
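Rendered compactly (my own notation, just restating the schema above, with Φ standing for the argument-form under test):

```latex
% If the argument-form \Phi licenses the inference to \neg A for a substituted A,
% and A is independently known to be true, then \Phi cannot be sound.
\[
  \bigl(\Phi \vdash \neg A\bigr) \;\wedge\; A \;\Longrightarrow\; \lnot\,\mathrm{Sound}(\Phi)
\]
```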
So anyway, let’s talk about your special pleading. You specially plead that, regarding your argument to prove that X can’t generate consciousness, “human brains” can’t be inserted for X because it’s objectively true that they generate consciousness. Which, translated, means that you’re saying the sole reason we can’t use your argument to say “brains don’t generate consciousness” is that brains generate consciousness.
And the reason that your argument can use “computational systems” for X, the sole reason that your argument can be used to prove that computational systems don’t generate consciousness, is… because computational systems don’t generate consciousness.
This is nonsense. The point is exactly that, while computation isn’t an objective property, conscious experience is; hence, computation can’t support conscious experience. But, brains don’t generate conscious experience via computation (and there’s no reason to think they do). So no, you can’t use my argument to somehow ‘prove’ brains don’t generate conscious experience.
No. I’m arguing—and that I have to explain this yet again is truly bizarre—that consciousness can’t be produced by computation. It can’t be produced by computers via computation; it can’t be produced by brains via computation. Hence, brains and computers stand on exactly equal footing in this regard.
Again, I have no idea how you would get that from my argument. I’m arguing that consciousness can’t be produced by computation, whether by brains or by anything else. Brains do produce conscious experience, but they don’t do so by computation.
Your argument, as presented with the stupid box, only incidentally references computation, and it makes no reference whatsoever to the attributes and properties that computation may or may not have beyond “It can be used to produce outputs from inputs that are then later capable of being variably interpreted by an outside interpreter”. I get that you’d prefer that your argument actually said something meaningful about computational processes specifically as opposed to other things, but it utterly fails to do so.
See, the thing about logical arguments that aren’t broken and fallacious as fuck is that the elements of the arguments link together. A implies B implies C implies D. And your dumbshit argument utterly fails to link any specific thing about computation to any other part of the argument. There’s nothing in the argument that is exclusive to computation.
And that means that your argument applies to anything that can replace “computational process” within your argument without robbing some other part of your argument of the support it needs. And since your argument doesn’t actually rely on any property or behavior of computation whatsoever, your argument can be equally applied to anything that will fit inside that damned box.
A logical argument is a structure. It’s a way of connecting facts and details one to another to reach a conclusion. And if you can replace one detail with a different detail in a way that doesn’t compromise the structure then the structure will work equally well to prove things about the replacement detail.
You can refuse to accept this all you want, but I do hope that every educated person who reads this thread will recognize that as you sticking your fingers in your ears and yelling “La la la la la I can’t hear you!”. I know that’s how I view that sort of deliberate ignorance about the well-known facts of logical argumentation.
So.
Your argument is either valid and capable of proving something about computational processes, or broken and unable to prove anything. If your argument is valid, then its structure is valid, and literally any other thing that can light up the lights of the box can be swapped in and the argument will remain valid.
That’s how arguments work.
So you have a few options here.
“La la la la la I can’t hear you!”
Demonstrate that there’s some part of your “box-observation-interpretation-circularity” argument that ceases to function when, and only when, a brain is put inside the box. Does the observer lose the ability to interpret the output in different ways? Does that poorly-thought-out-supposedly-circular garbage of yours somehow detect the mechanics inside the box and stop circling the drain? Something like that would be what you need.
Modify the argument into something different that does logically require that the subject under discussion be computational processes, and which will not work (as in, the structure of the argument will not work) if something other than computational processes are put inside the box.
Abandon the argument.
These are your only options. And it’s not me saying that, it’s the rules of logical argument themselves saying that. So don’t count on finding a fifth option - the rules of logic themselves state that all fifth options are restatements of option 1.
There is a fifth option you fail to note, but all the more amply demonstrate: you still don’t understand the argument.
The argument explicitly associates different computable functions to a physical system, and in fact, demonstrates a general method to associate different computable functions to physical systems. Hence, it will simply not work for anything but computable functions. You may stomp your foot and hurl invective as much as you want, but that’s how it is.
I can only respond to the arguments you post. Specifically your box argument (this one from post #18), which very very explicitly doesn’t do that. It doesn’t even pretend to do that. Sure, the post tosses around the term “computation” a lot, but the actual argument is all about how the possibility of multiple interpretations supposedly causes “a vicious regress”. Nothing about the interpretation or the supposed “vicious regress” references or depends on the properties of computation in any way whatsoever.
In that post, I show how the box can be taken to implement binary addition—that is, how we can use it to perform the *computation* that takes as input two two-bit numbers and returns their sum. I then show how we can use the same technique to make the box implement a different computation. The regress originates exactly in this multiple interpretability.
(I more explicitly define the functions in post #93.)
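To make the multiple-interpretability point concrete, here is a minimal sketch in code. The encodings and the particular alternative reading are my own illustrative choices, not the exact f’ defined in post #93; the point is only that one and the same fixed physical behaviour yields different functions under different mappings from lamp states to numbers.

```python
# A minimal, illustrative sketch: the same fixed physical behaviour of the box,
# read under two different mappings from lamp states to numbers, "implements"
# two different functions.

def box(s1, s0, t1, t0):
    """The physical box: four switch states in, three lamp states out."""
    total = (s1 * 2 + s0) + (t1 * 2 + t0)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)  # lamps L2, L1, L0

def read_f(lamps):
    """Interpretation I: lamps are a 3-bit binary numeral, L2 most significant."""
    l2, l1, l0 = lamps
    return l2 * 4 + l1 * 2 + l0

def read_f_prime(lamps):
    """A different interpretation I': the very same lamps, read with the bit order reversed."""
    l2, l1, l0 = lamps
    return l0 * 4 + l1 * 2 + l2

# One physical run of the box, two different 'computed' values:
lamps = box(1, 0, 0, 1)        # switches set to 10 and 01
print(read_f(lamps))           # 3  -> under I, the box computed 2 + 1
print(read_f_prime(lamps))     # 6  -> under I', the same lamp pattern means 6
```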
You can’t, for example, use the method to assign different biological functions to the box. You can’t interpret it as performing digestion, for example. It’s only because computations involve manipulations on symbolic vehicles whose content is arbitrary that the argument works; it’s what makes the regress possible, and it is a property that computations have, while things like biological functions and other objective properties don’t. I have been really, really explicit about that, giving for example the analogy with the meaning of a piece of text. I honestly don’t grasp how you can fail to understand that after all this discussion.
You keep saying “regress” as though your argument has shown that there’s some kind of problematic regress that can actually occur here. It’s worth noting that however much detail you put into the other parts of the argument, every time you get to the “regress” part you sort of skim over it without really getting into the details. It’s as though you don’t want to focus on it - despite it being absolutely critical to your argument being any kind of disproof.
In any case, let’s get back to the part of your argument you actually are interested in: the comparing of computers and colons.
The mechanics of digestion move matter around in an unambiguous way. The mechanics of the box unambiguously light up or refrain from lighting up a light. The meaning of the lit-up lights is subject to various interpretations, in much the same way that the meaning of a poop you find on your driveway is subject to debate. After all, what to some people is disgusting poop another person sees as useful fertilizer. Just as some people see f and other people see f’. It’s exactly comparable, in both cases the observers are seeing the same unambiguous output, and interpreting its meaning and usefulness differently based on their specific needs.
Suffice it to say the same goes for the computations that are taking place in a simulator - the simulator’s memory state is unambiguous at any given moment, but one person can look at the simulator’s memory state and see a lush virtual world while another can look at the memory state and see a Jackson Pollock painting composed of 1s and 0s. But the interesting thing about this situation is that of all the various theoretically possible interpretations, there’s one interpretation that manifestly exists and is actively occurring: the interpretation being employed by the simulator itself. This is the interpretation that is being used to modify the memory state into the next iteration of the simulation, after all: it’s the one that guides how the simulation changes and develops. The fact that the simulation is proceeding in its orderly manner from one state to the next proves that not only can that specific interpretation be used, it is being used.
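Here is a rough sketch of that point in code, as I read it (the toy “world” and all names are invented for illustration): whatever outside observers make of the raw bits, the simulator’s own update rule embodies one particular way of reading them, and that reading is what drives the state from one iteration to the next.

```python
state = 0b1010  # raw memory: just bits, open to many outside readings

def step(s):
    """The simulator's own update rule: it treats the low two bits as a creature's
    position and the high two bits as its energy, and advances the 'world' one tick."""
    position = s & 0b0011
    energy = (s >> 2) & 0b0011
    position = (position + 1) % 4          # the creature moves
    energy = max(energy - 1, 0)            # moving costs energy
    return (energy << 2) | position

for _ in range(3):
    state = step(state)
    print(bin(state))
```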
(And yes, I’m aware that you have declared that computation can’t do interpretation due to it having low-level physical processes in it, unlike reality which has no low-level physical processes in it whatsoever. I reject that as obvious nonsense.)
When speaking of cognition, whether it’s being done by simulations or by brains, in both cases it’s entirely possible for an outside interpreter to observe the mechanics of the thing in action and not recognize consciousness as occurring there. That much is exactly the same. Also exactly the same is the fact that it doesn’t matter whether an outside observer interprets a neuron arrangement as being conscious; all that matters is whether the supposedly conscious thing actually is interpreting the situation in a way that generates consciousness. It’s not that other interpretations can’t also occur, but if the simulation/brain is carrying on in a way that produces consciousness, then the consciousness’s interpretation of the situation is being used and consciousness is occurring, outside interpreters be damned.
Well, it’s really a simple and immediate consequence of my argument. I’m happy to elaborate; if you felt at any point that I hadn’t detailed it sufficiently, you could’ve just asked, rather than just hit me with the underhand insinuation that I was deliberately skipping over it. You can’t really claim I haven’t been exceptionally patient with explaining the argument to you time and again. But then, I suppose without baseless accusations and unsubstantiated claims, there’s really not much you’d bring to this discussion.
So, once more: in order for a physical system P[sub]1[/sub] to implement a computation C[sub]1[/sub], it needs to be interpreted by an interpretation I[sub]1[/sub]. I[sub]1[/sub] is such that the physical state of the system gets mapped to a certain abstract object (such as a set of truth values, for example) at any given time during its evolution. Without I[sub]1[/sub], there is no C[sub]1[/sub].
Now, the assertion that I[sub]1[/sub] could be implemented computationally—that is, that computational processes could yield the necessary interpretation—runs into immediate and obvious problems. For suppose I[sub]1[/sub] is realized by means of a computation C[sub]2[/sub], implemented within some physical system P[sub]2[/sub]. Then, since for a physical system to implement a computation is to be interpreted in the right way, there needs to be an interpretation I[sub]2[/sub] to make P[sub]2[/sub] compute C[sub]2[/sub]. This interpretation is logically prior to the computation; without the interpretation, no computation.
But then, the regress immediately and unavoidably follows. For if we again suppose that I[sub]2[/sub] is realized by means of a further computation C[sub]3[/sub], we find again the need for an interpretation I[sub]3[/sub], in order to make the system P[sub]3[/sub] compute C[sub]3[/sub]. And so on: there is always a further interpretation necessary. The regress never bottoms out. That’s it; it’s really quite simple.
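If it helps, the structure of that regress can be caricatured in a few lines of code (the names and the artificial cut-off are mine, purely for exposition; the philosophical claim is that there is no level at which an interpretation comes for free):

```python
def needs_interpretation(k, limit=10):
    """Attributing computation C_k to system P_k requires an interpretation I_k.
    If I_k is itself computational, it is some C_{k+1} on some P_{k+1},
    which in turn requires I_{k+1}, and so on: there is no base case."""
    print(f"C_{k} on P_{k} requires interpretation I_{k}")
    if k >= limit:  # artificial cut-off so that this sketch actually halts
        print("... and so on; the chain of required interpretations never bottoms out")
        return
    needs_interpretation(k + 1, limit)

needs_interpretation(1)
```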
If you still have questions regarding this, don’t hesitate to ask!
Of course. But the matter of digestion is not the interpretation of the poop that’s produced, but rather, the production of same—or more accurately, the processes leading to this production. So this is unambiguous. On the other hand, the business of computation is indeed the function which the system is interpreted as performing.
So while you can interpret poop as whatever you like, this does not change anything regarding the process of digestion; while a different interpretation, leading to the function f’ being realized instead of f, completely changes the computation being performed. Hence, an unambiguous biological/physical process can provide the foundation for consciousness, since it is an objective property of the system, which interpretation does not change; yet, computation, as being relative to a given interpretation, does not suffice as a background to realizing conscious experience.
I hope it’s clear, now, that this can never be the case. In the regress above, taking the computation C[sub]2[/sub] that provides the interpretation I[sub]1[/sub] interpreting P[sub]1[/sub] as identical to C[sub]1[/sub] does not help: in order for P[sub]1[/sub] to realize I[sub]1[/sub]—in order for the system to interpret itself, so to speak—thereby implementing C[sub]1[/sub], it must already implement C[sub]1[/sub], as that is what realizes I[sub]1[/sub]. But this is circular: the system must already interpret itself, in order to be able to interpret itself.
No. The causal evolution of the physical system is entirely determined by the physical state at any given point in time; how that state is interpreted plays no role at all.
I don’t understand what you mean by that, and don’t recognize it as anything I’ve argued so far. In any case, ‘rejecting something as obvious nonsense’ isn’t really an argument.
Sure. Objective properties of physical systems need not be obvious. A metal ball can be charged, but you won’t be able to tell by looking. What’s crucial, though, is that for two observers, only one can be right if they differ on whether a system is conscious (or, whether the ball holds a charge); with computation, both observers can validly differ on which computation is being performed, and neither has any claim to being objectively right.
HMHW, can you describe one of the better counters to this problem with computationalism?
I did some googling and there is a lot of content, but I didn’t find counters to this. It seems that people haven’t abandoned computationalism, though, so someone must have a decent counter to the problem.
Although late for this discussion, I have to point out here that you are assuming that people in the AI or computational fields do not know how to deal with the issue. The way you describe it is simply a fallacy.
IIUC, manias show that people’s brains can be affected by regress too. The point here is that normal brains seem to have ways of avoiding regress; I would think that, once we figure out why that is, one should wonder why it would be impossible to use similar solutions in the electronic world. And as it turns out, I have enough programming experience to realize that, yes, programs can encounter endless loops, or get bogged down interpreting one item forever, but there are ways to avoid such an endless regression.
Of course, one can be (and I was) confused, since in the coding world machine learning nowadays starts from regression. The definition is a bit different from philosophy, which tells us that a regress appears when the justifications themselves stand in need of justification, while in computing a regression ‘is the reappearance of a bug in a piece of software that had previously been fixed’.
To be honest, the debate is pretty sprawling and, at least to my reading, somewhat muddled in places (there is often a sort of underhand slipping of meaning between notions of ‘information’, for instance—quietly substituting unproblematic Shannon-information with information as carrying semantic meaning). So I can’t really give the state of play a concise summary; but I think a good starting point is the SEP-article on Computation in Physical Systems, which does a good job at outlining the respective approaches to how physical systems should be associated to computations they perform.
I think that there’s ultimately a whole gaggle of reasons that the above argument isn’t universally accepted. Some are historical, some sociological, and some just are the plain hope that some account of how physical systems compute simply must be possible—after all, they just do!
Computationalism is an incredibly attractive theory, although what I think the real reason for this is, isn’t often clearly enunciated: in the end, it’s just that beyond the mind, computers are the only sorts of physical system that seem to have some connection to the abstract, to entities that are not obviously reducible to simple physics. So one might hope both that the way the mind is associated with its contents might be reducible to the way that a computer is associated with the computation it performs, and that this latter way admits of a simpler explanation than the former; then, one could at least see a strategy towards solving the mind-body problem, and that’s something few other approaches can even hope for.
Most other ways in the end boil down to either ignoring the problem—eliminativism: there’s not actually something to be explained, we’re all just massively confused—or to accept it as essentially unsolvable—panpsychism, neutral monism, dual-aspect theories and the like: there’s just experiential stuff around, and experiencing is simply what it does. Computationalism looks like a middle way between giving up and giving in, like it could yield a genuine explanation. Realizing that, ultimately, it can’t has been one of my greatest disappointments in philosophy.
So this hope is one reason computationalism keeps going (and I wouldn’t argue that it shouldn’t: it’s always possible that somebody develops a rock-solid theory of implementation that solves problems like the ones I’ve presented in this thread, in which case, I’ll be the first to sing its praises and jump back on the bandwagon). Many prefer to simply table this issue for now (‘…beyond the scope of the present paper…’ :p), attacking things that look more tractable.
On the other hand, the first such ‘triviality’ argument, due to Putnam, really did leave a lot to be desired, and I think that there are genuine problems with it. Something like the counterfactual account—where you require that beyond a mere mapping of physical states to computational states, counterfactuals (‘if the system were in state S1, it would transition into state S2’) are also supported—suffices to cast doubt upon Putnam’s original conclusion. However, in my example, all the right counterfactuals/dispositions are preserved across all possible interpretations, and thus, this criticism doesn’t apply. But I think many are still somewhat idly convinced that such an argument brings ‘nothing new’ to the table, and that it’s just a matter of somebody having a good sitting down and working it out to come up with a counter. However, I simply fail to see any logical space for such a move.
There’s also a reply by the (quantum) computer scientist Scott Aaronson that I once hoped could resolve the issue. Each way of associating a system with a computation can be thought of as a mapping. This mapping has a certain complexity; Aaronson then proposes that once this mapping is of higher complexity than the original computation, the mapping ‘does all the work’ itself, essentially leaving the original system’s computation as nothing but an irrelevant vestige.
But this can’t work, for two reasons. For one, it won’t single out a unique computation, but at best, a complexity class of computations, and it doesn’t seem like that’s enough to associate a single mind to a physical system; but, more importantly, it falls prey to the regress, as the mappings are assumed to be computational themselves—and thus, in order to say whether a mapping exceeds some complexity threshold, we would first need to be able to uniquely associate this mapping to a given physical system—but that’s the problem we were faced with from the beginning. So this is just a non-starter.
Of course, the matter is easily settled, if one is prepared to give up naturalism (in the sense of materialism/physicalism). On such a view, computational vehicles (e. g., symbols) simply have semantic content, which isn’t reducible to non-semantic properties. But that’s essentially something like panpsychism or dual-aspect theory: it’s not clear that anything is really won—computationalism was originally born from the hope not to have to resort to such grafted-on bits of ontology that are just there to make experience experiential.
The other extreme kind of view also exists: that there aren’t really any semantic contents of computational vehicles, and it’s just useful to talk as if there were. This imports eliminativism into computationalism, and erodes its motivation from the other end. Since I think eliminativism is a non-starter, I don’t think this is a promising angle.
Interestingly, perhaps the most forceful advocate of this sort of view, Daniel Dennett, has more recently been making some conciliatory noises towards what he calls the ‘romantic’, anticomputationalist side—see his review of Terrence Deacon’s Incomplete Nature, for example. I think that this might signal a turning point away from computationalism, but of course, only time will tell.
That might be the case, although I would be surprised if the complex sorts of behaviors involved in manias are really just a kind of execution loop. But this has nothing to do with the regress I point out. I’m not talking about a loop in the control flow, a program getting stuck, but a regress in the logical association between physical systems and the computations they supposedly perform. That is, before a system can ‘get stuck’ in an execution loop, it would need to implement a specific computation; but the association of that computation to a physical system is already what’s problematic (cf. my argument in post #18, and the elaboration in #93).
The question that I think doesn’t have an answer isn’t ‘how does the brain compute something like a mind’, but rather, ‘how does a physical system compute anything at all’; so I’m not pointing out a problem with the detailed computation that would be necessary to generate a mind or conscious experience, but rather, with the association of physical systems to computations tout court. In the examples I gave, I aim to show that this association always depends on interpretation; and if that’s right, then computation can’t be the right sort of thing to give rise to minds at all, because whether a physical system computes (not to mention, what it computes) depends already on the mental faculty of interpretation—which consequently can’t be computational itself, or face collapsing into infinite regress.
I’m not sure I get what you’re talking about. Regression in machine learning usually refers to a method of approximating data using a best-fit function (e. g. linear regression), while computation is generally defined in terms of recursion (more accurately, primitive recursive functions). These have nothing to do with the regress I point out.
You’re not so patient as to be able to resist the urge to use ad hominem as a way to dismiss my arguments.
So if I’m following you, a computational system can’t implement a computation without an observer interpreting its output. When the observer walks away no computation can occur, turning the computational system into a ________al system. It’s no longer actually a computational system because the scenario that allows a computation to occur always includes an interpreter to make the computation possible.
Have I mentioned lately that I hate your terminology? It’s deceptive and confusing and degrades the discussion. And by that I mean that your entire argument is utterly reliant on fallaciously conflating the computation with the workings of the system that produces it.
I repeat, your entire argument hinges on fallaciously conflating the terms.
So to tear out the foundations of this fallacy, I’m going to stop calling the things “computational systems”. I’m going to call them “computers”. If your argument isn’t dependent on fallacious verbal legerdemain it should work equally well with the term swapped in.
Anyway.
I find it a bit problematic that I don’t know what definition you’re intending to use for the term “interpretation”. I can think of two:
1: The thing accepting the input responds to specific inputs with responses that depend on what the input is.
2: The above, but the observer is sentient (and this matters for some reason).
If it’s definition 1, then every function is an interpretation by definition - and every interpretation is a function by definition. There is no regress because there’s no need for one - all your argument is saying is that for interpretation to occur you need one thing to listen to another thing. The two things can both be computers, because computers can implement functions.
If it’s definition 2, then your argument is “computers can’t do interpretations because computers are presumed to be unable to generate sentience and sentience is required for interpretation. No matter how many computers you stack up, they still won’t be able to do interpretations because they’re still presumed to be unable to generate sentience. And so, based on the fact that they can’t do interpretations, we can deduce that computers are unable to generate sentience.”
Now, it remains a fact that we don’t actually know what mechanism causes things to be sentient, but you’re hardly going to disprove computers’ ability to implement it with a circular argument like that.
Nothing about the interpretation changes anything at all about the computer or the processes it is carrying out within itself. Hence the computer is on the exact same footing as the biological system regarding interpretation and the effects of interpretation. The processes within the computer are an objective property of the system and are not carried out relative to or dependent on any outside interpretation at all.
The notion that interpretation can change what’s going on inside a computer is an obvious falsehood. The notion that you can’t variably interpret the output of a biological system is also an obvious falsehood. You are inventing differences which very certainly do not exist in the pursuit of your fallacious argument.
The differences you pretend exist, do not exist. If your argument can be applied to computers to prove that they can’t generate sentience, then your argument can also be applied to physical brains to prove that they can’t generate sentience. It’s both or neither.
(It’s neither.)
Not for any coherent definition of “interpret” that doesn’t beg the question.
Actually, rejecting something as obvious nonsense is simply me rejecting your premise. Your premise is that computers can’t do interpretations - if they could then you could simply have a computer act as the observer, or have the computer observe itself.
Your argument attempts to prove that computers can’t do interpretations by asserting that they can’t, and then using that to prove your conclusion that they can’t. I reject that assertion, based on a massive quantity of evidence to the contrary.
And as long as we don’t know how human brains generate consciousness, the same can be said about human brains. Two humans could look at a brain scan and disagree whether the brain was functioning correctly to generate consciousness. Both observers can validly differ on their conclusion, and neither has any claim to being objectively right.
One thing to notice is that you left out what I said later: “I would think that, once we figure out why that is, one should wonder why it would be impossible to use similar solutions in the electronic world. And as it turns out, I have enough programming experience to realize that, yes, programs can encounter endless loops, or get bogged down interpreting one item forever, but there are ways to avoid such an endless regression.”
That, by the way, was a reply to post #93, where you continue to claim that “you’d have to complete an infinite tower of interpretational agencies in order to fix what the system at the ‘base level’ computes.” As pointed out before, that is the homunculus fallacy, which looks less relevant in computational science. Being problematic does not mean ‘I can dismiss it, then.’
Seems that you are going for the argument that because you do not understand how it could be done that therefore it can not be done. Sounds familiar.
That was just to see what definition you are using, and also to check whether you are aware of how confusing you can get by ignoring how your arguments look from the electronic side of things.
So, I did check your arguments, and they still look like arguments from ignorance, or remain confusing. You said that “Consciousness can’t be downloaded into a computer, for the simple reason that computation is an act of interpretation, which itself depends on a mind doing the interpreting.”
Then I get confused, because you go on to point at binary conditions in switches and to ask what inside the system is looking at those switches, something that can interpret things differently from something else, leading to regress. But as I see new developments, looking at single switches is not the only way science is looking to get results.
I’m not claiming here that this is the whole reply, but if I understood what your argument is, what you are missing from recent research is that the brain is likely composed of many minds that are part of us, and that that makes us what we are. The collection looks at lots of predictions, drops the items that do not match the input, and consciousness is part of what takes place during the “election” of the best choice based on multiple inputs.
As Jeff Hawkins from Numenta said elsewhere, after investigating the brain and looking for ways to replicate in programming how brains do it, “Intelligence and consciousness are co-dependent”.
Not true. On those rare and precious occasions you actually try to justify your claims in some way (if only by saying ‘I’m a computer programmer, so what I say is true’), I have taken care to appropriately reply. It’s just that, for the most part, you don’t seem to want to bother with that, preferring rather to call things ‘obviously nonsensical’, ‘stupid’ and the like, without ever really substantiating that, even upon repeated prompts on my part. Pointing that out is not an ad hominem.
You’re free to try and find a better terminology, but the one I’m using is pretty standard to this sort of discussion, so if you introduce new terminology, I’d ask you to kindly outline how it connects to the usual one.
And again, more unsubstantiated claims.
Once more, you can simply ask when you don’t understand a part of my argument, rather than make up some nonsense out of whole cloth, propose that it leads to absurd consequences, and then claim that this matters for my argument for some reason.
I would have thought that my first post in this thread should suffice to fix what I mean by interpretation:
So, interpretation is nothing but associating the states of certain parts of the physical system you’re examining with abstract objects, like Boolean truth values. In other words, it’s treating those parts as symbols, and fixing their symbolic content; interpreting what the symbols stand for. If you get a coded message, you have to interpret it to find its meaning; exactly this sense of ‘interpretation’ is meant here.
I have no idea what either of your definitions are supposed to have to do with interpretation in any sense.
Exactly. As I have pointed out numerous times to you by now, that’s the point: the same physical system, being physically unchanged, can be interpreted as implementing different computations, by changing the way its symbols (switch states, lamps…) map to their contents. This is not a change in the system in any way, but merely, a different way of using the system.
No. Whether a system digests is not a matter of interpretation; whether it computes f is. I remind you that if you disagree with this point, the option of providing an example of a system that uniquely implements f is still open to you.
Interpretation doesn’t change what’s going on inside a computer; the fact that interpretation changes what’s being computed proves that computation isn’t something that goes on in a computer (an objective property of the computer), but rather, something related to the way the computer is regarded, or used.
On the other hand, you can interpret the output of biological systems any way you like, but that doesn’t change anything about the biological processes being performed; it merely adds an irrelevant interpretational gloss. The output of digestion is a physical substance, poo, not the interpretation of that substance as waste or fertilizer, while the output of a computation is its result, say, the sum of two numbers. Digestion itself is then defined by the production of that substance, not by its interpretation. Computation, on the other hand, is defined by the interpretation; without an act of interpretation, there is no fact of the matter regarding what is computed.
The argument can be applied to brains, proving that they can’t generate conscious experience by computation. The argument only applies to processes that depend on interpretation, which includes computation, but not biological processes like digestion—there is an objective fact about whether a system digests, but not about whether it computes.
Excellent! Then please provide that definition.
This is not a premise, it’s the conclusion—more accurately, a corollary—of my argument. Given that all computation depends on interpretation (as demonstrated by the explicit example of how a computational system can be interpreted as implementing different computations), interpretation can’t be computational, on pain of infinite regress. So in order to reject this, you must find a flaw with the argument.
Again, I don’t assume that they can’t, I demonstrate that they can’t, using my argument. And also again, if you have evidence to the contrary, then don’t hog it, share it! Just one single example of a computation interpreting itself, or another one, would suffice.
Nonsense. The fact that we might not know which one is right doesn’t mean that both are. Rather, one is right, and the other is wrong, in an objective sense; we just don’t know which. ‘The moon is made of green cheese’ is either right or wrong, even if we have no way to tell which. The facts of the world are not relative to our ability to ascertain them.
Because brains don’t do it in a computational way; after all, my argument demonstrates that it can’t be done in a computational way. You’re welcome to find fault with the argument, of course, but to claim that the way brains do it should be implementable in a computer is to assume that brains are computational, which however is exactly the question under discussion, and thus, circular.
No. I have proposed an argument demonstrating that computations are relative to interpretation, and thus, can’t suffice to produce mental capabilities (notably, that of interpretation). This isn’t me saying that I can’t see how one could implement a computation generating conscious experience, it’s me giving a reason to believe that it can’t be done—so, not an ad ignorantiam.
Of course, that reason may be flawed; if so, you’re welcome to point out the flaws in my argument, or take up my challenge from earlier to come up with a counterexample.
My arguments apply to a more general question, namely, that of whether physical systems uniquely implement a computation. So, I fix a physical system, and propose a computation (binary addition) that the system performs. It actually does perform that computation: you can read off the sum of the two binary numbers you entered in just the same way as you can using your pocket calculator.
Then, I point out that one can equally well use the same system to perform a different computation—my f’—and in fact, many others. So that answers the question: in general, there is no unique fact of the matter regarding what computation is performed by a physical system; in fact, which computation is performed requires an act of interpretation. Since all notions of computation are equivalent (in the Church-Turing sense), this immediately generalizes to all formulations of computation you could propose, and isn’t limited to Boolean circuits.
This has no bearing at all on my argument (and I can’t really figure out why you would think so). Adding more computations doesn’t do anything to make computation any more of an objective property of a system.
I’d pretty much dropped out of this futile argument, but this sort of nonsense has drawn me back to reiterate a key point. Your last paragraph above, for instance, is clearly implying that a Turing machine doesn’t do computation, yet the concept was developed precisely to define what computation actually is!
I’ll expand on this point by responding to some things you said earlier.
No, contrary to your claim in #287, there are an infinite number of interpretations. I could define the positional value of the bits to be anything I like that is consistent with what the box does, just as in your function f’. For instance, the positional value of the switches might be 2[sup]n+1[/sup] and 2[sup]n[/sup] for any value of n instead of 2[sup]1[/sup] and 2[sup]0[/sup], and that of the lights 2[sup]n+2[/sup], 2[sup]n+1[/sup], and 2[sup]n[/sup]. Since n ranges over the positive integers, that’s sufficient to show that there are an infinite number of interpretations and hence an infinite number of possible function tables, so according to your reasoning, an infinite number of computations are being performed.
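A small sketch of this family of interpretations, spelling out the scaling (the code and names are mine, just illustrating the claim): every choice of n reads the same switches and lamps with different place values, yielding a different function table for the very same physical box.

```python
def box(s1, s0, t1, t0):
    """The physical box: four switch states in, three lamp states out."""
    total = (s1 * 2 + s0) + (t1 * 2 + t0)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)

def interpret_lamps(lamps, n):
    """Read the lamps with place values 2^(n+2), 2^(n+1), 2^n."""
    l2, l1, l0 = lamps
    return l2 * 2**(n + 2) + l1 * 2**(n + 1) + l0 * 2**n

lamps = box(1, 1, 1, 0)  # physically: switches at 11 and 10, lamps show 101
for n in range(4):
    # Under interpretation n, the switch settings denote these two numbers:
    inputs = (2**(n + 1) + 2**n, 2**(n + 1))
    print(f"n={n}: inputs {inputs} -> output {interpret_lamps(lamps, n)}")
# n=0: inputs (3, 2) -> output 5
# n=1: inputs (6, 4) -> output 10
# n=2: inputs (12, 8) -> output 20, and so on: a different table for each n.
```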
You’ll notice that I still have only one table defining the computation (or more accurately, as noted below, one table with a specified mapping of its symbols to the box’s physical I/O states), and that one table completely defines the behavior of the box. You’d need an infinite number of tables to specify all the possible versions of what you regard as the “true” computations, none of them more valid than any other. This alone should be sufficient to put this silly argument to rest, let alone the fact that it directly contradicts the well-established premise of the computational theory of mind.
The answer simply depends on whether you stipulate the same mapping of switch position AND light status as in my table. The point being, the table is not the box, it’s a representation of the box, so we need an agreed convention for how those symbols relate to their physical implementation on the box. So to the question of why not use your table instead: Because if we have the same convention as mine, the second table is wrong, but if we isomorphically invert the meaning of 0 and 1 for either the lights or the switches, the second table becomes the same as the first one, because an operator following the table would be creating the same inputs and getting the same outputs.
In short, any table you claim is different from mine and yet equivalently valid must in fact be one of the following: (a) wrong, so that it’s not equivalently valid, or (b) subject to an inverted interpretation of the physical states on the box, such that it’s logically the same table because it represents exactly the same operations.
Begbert2 already answered this, and I agree with him. This is really a bit of linguistic sleight-of-hand more than anything else, along the lines of asking what happens when an irresistible force meets an immovable object. We might commonly refer to your function f, binary addition, as a “computation” in most contexts, but in the context of your hypothetical box it’s been necessary to distinguish the underlying computation, defined either by a table with symbol mappings or by a corresponding set of rules, from arbitrary interpretations of what those symbols mean. It’s possible to build interfaces, like robotic actuators, that unambiguously fix the interpretation of the machine’s output, but those are not computing devices. As we saw, your hypothetical box only performs the symbol manipulation; it provides the computational basis for you to infer the function f, or f’, or whatever other variant you might prefer, but it cannot be constrained to the specific interpretation of any particular function. The arbitrary nature of the function definition and its decoupling from the symbolic output of the machine, according to your own description, makes your challenge meaningless.
In summary: the table I defined earlier, along with rules for mapping the symbols to the physical states of the machine’s inputs and outputs, constitutes a complete description of the computation performed by the machine. That is all that is required for someone to build a functionally equivalent machine, because that’s all there is. And this is obviously not an “interpretation” in the sense that your functions are; it just describes the relationship between a specification on a piece of paper and the physical machine.
Any claim that something more is needed to describe what the machine “really does” is just a philosophical flight of fancy, and trying to use such ruminations to “prove” that the brain cannot be computational is more sophistry than science. After all, we already know that we have the technology to build robots that successfully mimic many aspects of what goes on in the mind, and philosophers like Dreyfus who predicted intrinsic limits on how far this could progress – or Searle who claims that computers are “only” symbol processors and fails to understand that symbol processing and intelligence are the same thing – have been proven fundamentally wrong.
So you mean my first definition then: “The thing accepting the input responds to specific inputs with responses that depend on what the input is.” Abstract symbols are decoded as having specific meaning, which is reacted to depending on that decoded meaning. It’s a thing that computers can do, so there is no regress and your argument implodes.
Computers do this kind of interpretation all the time. Literally, all the time. Heck, you want to know what they call a computer program that reads a bunch of encoded instructions and performs operations after reading those instructions? An interpreter. It’s a standard term. True story.
Wanna know a specific example of an interpreter? The browser you’re reading this post on. Web pages are encoded as HTML containing javascript, and javascript is an interpreted language. You are literally staring at a computer doing interpretation this very instant.
And any one of those interpretations can be done by another computer, meaning that “computations” can be done by any pair of computers - one providing the input, and another interpreting it.
Or any single computer, encoding information and then decoding it later. Which literally every computer program in existence does.
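For what it’s worth, here is a toy interpreter in a few lines (my own minimal example, not anyone’s production code): it reads encoded instructions and performs operations depending on what it decodes them to mean, which is the everyday sense in which a JavaScript engine is said to interpret a script.

```python
def run(program):
    """Each instruction is ('push', value) or ('add',); results live on a stack."""
    stack = []
    for op in program:
        if op[0] == "push":
            stack.append(op[1])          # decode: put this literal value on the stack
        elif op[0] == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # decode: combine the two topmost values
        else:
            raise ValueError(f"unknown opcode: {op[0]}")
    return stack

print(run([("push", 2), ("push", 3), ("add",)]))   # prints [5]
```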
How about you first have your colon generate a poop that when dumped on somebody’s driveway will only allow for the single interpretation that it’s a message equivalent to the Black Spot and that the occupant of the house has been marked for death.
Or, heck, how about you use your physical brain to direct your hand to draw an unadorned and unlabeled black circle on a piece of paper that can only be interpreted as meaning that the person who reads it is marked for death.
You can repeat your special pleading over and over but it’s still special pleading. No matter whether you decide to interpret it otherwise.
If you think your brain doesn’t interpret things then - wait a minute! That explains why you’re ignoring my arguments! Your brain can’t interpret the meanings encoded in the text of my posts!
Your argument requires, as a premise, that computers can’t do interpretation. That’s a required assumption of your argument, because if computers can do interpretation then your argument has no regress whatsoever - Computer A’s output is interpreted as doing a calculation by computer B, full stop.
Which means that your argument is assuming its conclusion, and thus it’s logically impossible for it to prove anything. In logic, that’s considered a flaw.
Look at the device that you’re reading this on. Look at the program that contains this post. That program is sending event messages to itself, which are interpreted by its message loop, which, by decoding the numerical message codes into their specified meanings, routes the control flow accordingly.
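As a sketch of the kind of thing being described (the message names and numeric codes are invented for illustration; real GUI frameworks differ in detail), a message loop decodes numeric codes into meanings and routes control flow accordingly:

```python
from collections import deque

MSG_PAINT, MSG_CLICK, MSG_QUIT = 1, 2, 3   # hypothetical numeric message codes

def message_loop(queue):
    while queue:
        code, payload = queue.popleft()
        if code == MSG_PAINT:              # decoded meaning: redraw the view
            print("redrawing the view")
        elif code == MSG_CLICK:            # decoded meaning: a click happened at payload
            print(f"handling a click at {payload}")
        elif code == MSG_QUIT:             # decoded meaning: shut down
            print("shutting down")
            break

message_loop(deque([(MSG_PAINT, None), (MSG_CLICK, (10, 20)), (MSG_QUIT, None)]))
```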
Your argument assumes its conclusion, and because it assumes its conclusion it doesn’t demonstrate anything.
Seriously, you can’t use a premise to prove itself. Arguments don’t work that way. And the argument that you’re making doesn’t work if it doesn’t assume its conclusion.