The term “end product” might be more descriptive than “end state” in the above.
But then, how can the same symbol, placed in the same physical relation to whatever it connects with, have different meanings, as in my box example?
I don’t think I quite understand what you mean here, but you’ll have to be careful with invoking notions like ‘data’, since that already implies an aboutness, and hence, meaning. Data without meaning is just voltage levels, or maybe neuron spiking frequencies, but then, we simply haven’t made any progress on the question, since relations between neuron spiking levels are just that, and don’t connect in any objective way to things out there in the world.
And still, that leaves open the question of what, exactly, I’m doing when I use my box to implement a given function, if not just computing that function—in the perfectly ordinary sense in which we use the word when we compute square roots using a pocket calculator. But if that’s actually just what computation is, then the ambiguity is inherent to it.
It was? Nobody told me!
If you actually mean ‘bit strings’, then that isn’t a physical specification, but an interpretation—of lamp and switch states as representing logical values. If you fix the interpretation, then sure, the interpretation is fixed; but it’s still just an arbitrary interpretation.
If you, on the other hand, intend for the ‘bit strings’ to just be names for switch and lamp states (i.e. you use the symbol ‘1’—somewhat impractically—simply as a shorthand for ‘switch up’), then your supposed ‘computation’ just ends up being the physical behavior of the system, and the notion of computation collapses to that of behavior, leading to behaviorism.
So no, I’m afraid I can’t agree that the box-discussion has been resolved; it’s still the central issue.
This comes at least close to equivocation on the notion of ‘bit strings’ (which is why I’d prefer not to use that notion, and rather, if we’re talking about the physical behavior, talk in terms of ‘switch up’ and ‘lamp on’). If the bit strings are just shorthand for switch states (etc.), then the table doesn’t describe any computation at all, since it just describes the physical behavior of the system, and computations aren’t physical behaviors (any more than the meaning of a word is the word itself). If the bit strings, on the other hand, are meant to be the abstract logical entities that are merely denoted by the symbols ‘0’ and ‘1’—say, the elements of the Galois field GF(2)—, then you’ve just taken an arbitrary interpretation, and propose now that this is the one true interpretation, while in fact others are possible.
This is just begging the question against my argument, in that it assumes that it’s possible for the computer to uniquely instantiate F. But this unique instantiation is what my argument shows doesn’t exist.
It’s not. I’ve just given, in the last post, an explicit example of how this strategy fails, and yet, you still blithely assert that it works!
It will not return any values of a function. It will produce symbols that, in order to be connected to values of a function, need to be interpreted accordingly. Again, the numeral ‘5’ and the number it denotes are different things—the former being, in this instantiation, composed of a particular array of pixels, the latter being an abstract object. The computer will, at best, output the numeral ‘5’; but what number that numeral refers to depends, once more, on the interpretation: on how it’s understood, on what abstract object it’s taken to be, on what it means.
Let’s return to the box once more (again, I can only appeal to the virtue of simplicity: it’s too easy to get confused when thinking about systems no human being could possibly consider in their entirety—that’s why you find you always have to appeal to ALUs or robots). To make it compute f, we need to interpret its symbols in a certain way. Symbols, here, are just the switch and lamp states; they’re symbols in exactly the same sense as, say, pen-marks on paper, or pixel-patterns on a screen. The set of four symbols ‘four’ denotes the number four (and not by virtue of there being four symbols!). It doesn’t do so by any characteristic of the symbols, but merely by interpretation. In the same sense, the set of symbols (↑↓) may denote the number two (with the arrows denoting switch positions in what I hope is an obvious way), and a pair of such symbol sets denotes a tuple of numbers, e.g. (↑↓, ↓↑) denoting (two, one). So the switch states may mean:
Symbol | Meaning
-----------------------
(↓↓, ↓↓) | (zero, zero)
(↓↓, ↓↑) | (zero, one)
(↓↓, ↑↓) | (zero, two)
(↓↓, ↑↑) | (zero, three)
(↓↑, ↓↓) | (one, zero)
(↓↑, ↓↑) | (one, one)
(↓↑, ↑↓) | (one, two)
(↓↑, ↑↑) | (one, three)
(↑↓, ↓↓) | (two, zero)
(↑↓, ↓↑) | (two, one)
(↑↓, ↑↓) | (two, two)
(↑↓, ↑↑) | (two, three)
(↑↑, ↓↓) | (three, zero)
(↑↑, ↓↑) | (three, one)
(↑↑, ↑↓) | (three, two)
(↑↑, ↑↑) | (three, three)
We can do the same thing for the lamp states off ● and on ○:
Symbol | Meaning
-----------------------
(●●●) | (zero)
(●●○) | (one)
(●○●) | (two)
(●○○) | (three)
(○●●) | (four)
(○●○) | (five)
(○○●) | (six)
(○○○) | (seven)
We can then once more take the behavior—the physical behavior—of the box:
Switches | Lamps
-----------------------
(↓↓, ↓↓) | (●●●)
(↓↓, ↓↑) | (●●○)
(↓↓, ↑↓) | (●○●)
(↓↓, ↑↑) | (●○○)
(↓↑, ↓↓) | (●●○)
(↓↑, ↓↑) | (●○●)
(↓↑, ↑↓) | (●○○)
(↓↑, ↑↑) | (○●●)
(↑↓, ↓↓) | (●○●)
(↑↓, ↓↑) | (●○○)
(↑↓, ↑↓) | (○●●)
(↑↓, ↑↑) | (○●○)
(↑↑, ↓↓) | (●○○)
(↑↑, ↓↑) | (○●●)
(↑↑, ↑↓) | (○●○)
(↑↑, ↑↑) | (○○●)
Now, this isn’t a computation. It’s just a physical behavior. If you say that that’s a computation, then everything computes, but what it computes is just its own behavior, and the challenge of getting a mind to come from the *computation* of the brain is the same as the challenge of explaining how mind could come from the *behavior* of the brain simpliciter, without any notion of ‘computation’ in the middle. These are the symbols; the computation—the computable function realized in the computation—is defined in terms of what these symbols represent. To that extent, we could use the interpretation posited above—and then (and only then) does the system compute something—implement a computable function—and that will be the function f.
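If it helps to see this spelled out, here is a small Python sketch of exactly that point (purely illustrative: the arrows and circles are just string labels, and names like `BEHAVIOR` or `MEANING_IN` are mine, nothing canonical):

```python
# The physical behavior of the box, copied straight from the table above:
# (left switch pair, right switch pair) -> lamp triple.
BEHAVIOR = {
    ("↓↓", "↓↓"): "●●●", ("↓↓", "↓↑"): "●●○", ("↓↓", "↑↓"): "●○●", ("↓↓", "↑↑"): "●○○",
    ("↓↑", "↓↓"): "●●○", ("↓↑", "↓↑"): "●○●", ("↓↑", "↑↓"): "●○○", ("↓↑", "↑↑"): "○●●",
    ("↑↓", "↓↓"): "●○●", ("↑↓", "↓↑"): "●○○", ("↑↓", "↑↓"): "○●●", ("↑↓", "↑↑"): "○●○",
    ("↑↑", "↓↓"): "●○○", ("↑↑", "↓↑"): "○●●", ("↑↑", "↑↓"): "○●○", ("↑↑", "↑↑"): "○○●",
}

# The interpretation above: what the symbols are taken to mean.
MEANING_IN  = {"↓↓": 0, "↓↑": 1, "↑↓": 2, "↑↑": 3}
MEANING_OUT = {"●●●": 0, "●●○": 1, "●○●": 2, "●○○": 3,
               "○●●": 4, "○●○": 5, "○○●": 6, "○○○": 7}

def f(x, y):
    """The computable function the box implements *under this interpretation*."""
    inv_in = {meaning: symbol for symbol, meaning in MEANING_IN.items()}
    lamps = BEHAVIOR[(inv_in[x], inv_in[y])]
    return MEANING_OUT[lamps]

# Under this particular reading, the table comes out as ordinary addition:
assert all(f(x, y) == x + y for x in range(4) for y in range(4))
```

Delete the two meaning-dictionaries, and all that remains is the lookup table, i.e. the physical behavior.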
But of course, there’s nothing special about that particular interpretation. We can equally well use another, such as this one:
Symbol | Meaning
-----------------------
(↓↓, ↓↓) | (zero, zero)
(↓↓, ↓↑) | (zero, two)
(↓↓, ↑↓) | (zero, one)
(↓↓, ↑↑) | (zero, three)
(↓↑, ↓↓) | (two, zero)
(↓↑, ↓↑) | (two, two)
(↓↑, ↑↓) | (two, one)
(↓↑, ↑↑) | (two, three)
(↑↓, ↓↓) | (one, zero)
(↑↓, ↓↑) | (one, two)
(↑↓, ↑↓) | (one, one)
(↑↓, ↑↑) | (one, three)
(↑↑, ↓↓) | (three, zero)
(↑↑, ↓↑) | (three, two)
(↑↑, ↑↓) | (three, one)
(↑↑, ↑↑) | (three, three)
Symbol | Meaning
-----------------------
(●●●) | (zero)
(●●○) | (four)
(●○●) | (two)
(●○○) | (six)
(○●●) | (one)
(○●○) | (five)
(○○●) | (three)
(○○○) | (seven)
If I haven’t gotten confused with the rows, then under this interpretation, the box will implement f’; at any rate, under this interpretation, it will implement a different computation. The reason for this is simply that I’ve changed the meaning of certain words: the word ○○● now means three, rather than six, for instance. But since there’s no way in which it’s more right to consider ○○● to mean three rather than six, that’s just as well.
Consequently, if we want to associate the box with any computation—with any computable function—at all, we have to interpret its states as symbolic entities; or, at least, by interpreting its states as symbolic entities, we can evidently make the box implement a certain computation. Without something of that kind, the box just possesses a certain physical behavior, and we could’ve just skipped talking about computations at all, since the notion doesn’t add anything.
Now consider the oft-repeated, but never substantiated claim that certain further mechanisms could supply the interpretation. Say, as above, you want to interpret the output of the box as either even or odd numbers. Thus, you add another light, which you call the ‘interpreter’, and have it light up in response to the others in this way:
Switches | Lamps | Interpreter
----------------------------------
(↓↓, ↓↓) | (●●●) | (●)
(↓↓, ↓↑) | (●●○) | (○)
(↓↓, ↑↓) | (●○●) | (●)
(↓↓, ↑↑) | (●○○) | (○)
(↓↑, ↓↓) | (●●○) | (○)
(↓↑, ↓↑) | (●○●) | (●)
(↓↑, ↑↓) | (●○○) | (○)
(↓↑, ↑↑) | (○●●) | (●)
(↑↓, ↓↓) | (●○●) | (●)
(↑↓, ↓↑) | (●○○) | (○)
(↑↓, ↑↓) | (○●●) | (●)
(↑↓, ↑↑) | (○●○) | (○)
(↑↑, ↓↓) | (●○○) | (○)
(↑↑, ↓↑) | (○●●) | (●)
(↑↑, ↑↓) | (○●○) | (○)
(↑↑, ↑↑) | (○○●) | (●)
Does this help? No, not at all: while under the interpretation giving rise to f, you could interpret the light as referring to the notion ‘odd’, under a different interpretation, it will simply fail to do so, and instead refer to some other partitioning of the possible output values into two classes. Thus, without the interpretation of the interpreter being fixed, it’s simply not doing any interpretation; so this just kicks the can down the road a little further. And since this is a completely arbitrary example, there’s no loss of generality, and this will occur for any attempt at interpreting the computation in a computable way—that is, with symbols themselves demanding interpretation.
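Another toy sketch, with the same caveats as before (the encoding and the names are mine): the ‘interpreter’ lamp, as wired in the table above, simply mirrors the rightmost lamp of the output triple, and whether that makes it an ‘odd-detector’ depends entirely on which meaning-table the triples are read through.

```python
OUTPUTS = ["●●●", "●●○", "●○●", "●○○", "○●●", "○●○", "○○●"]   # patterns the box produces

def interpreter_on(lamps):
    return lamps[-1] == "○"        # the extra lamp is on exactly when the last lamp is

READING_1 = {"●●●": 0, "●●○": 1, "●○●": 2, "●○○": 3, "○●●": 4, "○●○": 5, "○○●": 6}
READING_2 = {"●●●": 0, "●●○": 4, "●○●": 2, "●○○": 6, "○●●": 1, "○●○": 5, "○○●": 3}

# Under the first reading, the extra lamp singles out the odd outputs...
assert {READING_1[l] for l in OUTPUTS if interpreter_on(l)} == {1, 3, 5}
# ...under the second, it singles out a different subset altogether:
assert {READING_2[l] for l in OUTPUTS if interpreter_on(l)} == {4, 5, 6}
```

Same lamp, same wiring; which partition it ‘marks’ floats with the reading of the other lamps.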
There’s no reason at all why the non-computational should be magical (other than your say-so). In fact, it should strike one as rather odd if everything in this world were computational—after all, there are non-computable phenomena (at least according to our best current physical theories), and some facts about the world must fix these phenomena, and consequently, those facts can’t be fixed computationally.
But even in what seems to me the simplest conceivable case—a single lamp, as in the above ‘interpreter’—we see that the problem only becomes compounded. So how could it ever be solved, instead? If each computation demands an interpretation, then we’re already off into the regress—that’s just a logical consequence. Or can you give an example where the regress just ‘bottoms out’?
That’s still the mistaken claim that the state of the lamps is what’s being computed by the box; but it’s not—the state of the lamps (the behavior of a robot, the playing of music, the showing of a video) is just the physical behavior of the system.
Take, as an example, navigating a boat. If you turn the wheel one way, the rudder responds by changing its orientation appropriately. This might be implemented by a simple physical transmission of steering-action to rudder-orientation. The rudder does not interpret the wheel’s turning, it is simply influenced by it, through mere physical causality.
Now, the same holds if the orientation of the wheel is measured by a sensor—say, producing a positive voltage in proportion to the amount of turning if it’s turned one way, and a correspondingly negative amount if it’s turned the other way—and that voltage then sent to engage some appropriate servo turning the rudder: it’s still, at best, a short-hand way of speaking to claim that the rudder interprets the signal of the wheel; but what goes on is still merely physical causality—the turning causes a certain voltage to be applied, which causes motors to move, which causes the rudder to turn.
Then say you replace the wheel by a simple light-detection mechanism, perhaps something that responds to brightness (light above a certain baseline) by producing a certain positive voltage in proportion to the amount of light, and darkness (ambient light below a certain baseline) by producing a certain negative voltage. This will then turn the rudder in the same way as the wheel did (it cares not about what provides the voltage), and the ship will move towards where the light’s just right, and away from the darkness (or the other way around, as the case may be).
Is this due to the computation being implemented by the system of sensor and rudder? Well, no: it’s just again sheer physical causality. It moves towards the light because the presence of too much or too little light causes a certain voltage, which causes a certain servo movement, which causes a certain rudder turning; there’s no need to talk about anything interpreting anything else, much less computation, at all.
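Put as a toy program (nothing here is a real control law; the numbers and names are invented purely to show the shape of the thing), the whole arrangement is just a chain of physical input-output maps, with no meanings anywhere in it:

```python
# Toy causal chain: light level -> sensor voltage -> servo drive -> rudder angle.
BASELINE = 0.5                                   # ambient-light baseline

def sensor_voltage(light_level):
    return light_level - BASELINE                # brighter than baseline: positive voltage

def servo_drive(voltage):
    return 10.0 * voltage                        # voltage proportionally drives the servo

def rudder_angle(drive):
    return max(-30.0, min(30.0, drive))          # servo sets the rudder, within its stops

def respond(light_level):
    return rudder_angle(servo_drive(sensor_voltage(light_level)))
```

Each step is just one quantity causing another; at no point does anything in the chain need to be about light, or ports, or anything else.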
Any movement a robot performs can be explained in just the same sense (it’s just that the explanation will be much more complicated), as can the production of noises or images.
If that were the case, then the output of any computation would just be text and numerals; nobody would ever have computed, say, a sum, but only produced certain signs. If we want to claim that we’re computing sums, or square roots, as we certainly should want, since we’re doing it all the time, we can’t hold that this level of mere symbols exhausts the notion of computation.
Maybe it’s just a brute difference of interpretation, but to me, the idea that the production of consciousness could be due to the mere shuffling around of symbols—without any interpretation, any meaning attached to them—strikes me as just the same idea as that the production of, say, rain could be just due to the right symbols being manipulated in the right way—that is, saying the right spell. It’s just the fundamental confusion that the symbols themselves, rather than what they mean, are important; that the universe, say, listens to the right Latin phrase in some special way.
I apologize.
I’m actually not sure how to refute “You’ve never made any arguments at all” with cites. Clearly no arguments I have made would be considered arguments that I’ve made arguments, if the argument is that my arguments aren’t arguments.
Heh. That should’ve been ‘difference of intuition’…
I have been invoked!
Your last paragraph is obviously wrong. Consider what you’ve described:
“Now consider the additional lamp to guide the robot’s behavior. ‘Light on’ will cause it to ‘turn left’, and ‘light off’ will cause it to ‘turn right’. The lab door is to its right, the wall to its left.”
Thus:
Light is on = turn left = crash.
Light is off = turn right = no crash.
“Does a change in the interpretation now result in a change of behavior?”
Changing interpretation now:
Light is on = turn right = no crash.
Light is off = turn left = crash.
“No, of course not; the behavior will stay exactly the same.”
-Obviously false. DUH.
“The light, after all, will come on under exactly the same circumstances, despite its purported ‘interpretational’ role.”
As you have made very clear, the interpretation is entirely in the eye of the interpreter and the fact that interpretation is happening has no impact on the behavior of the light itself. The light does come on in the same circumstances, but the change in the interpretation that the robot is doing causes the robot to behave differently.
Come on man, this is fundamental to your argument. f and f’, remember?
But it’s NOT enough to claim that whatever a brain does cannot be done by a noncomputational method. That “only” in the quoted sentence wasn’t italicized by accident.
I have never said that interpretation requires computation. (Particularly under the back-assward definition of “computing” that’s under discussion). That’s a strawman that you made up to hinge your argument on - and without that strawman you have no hint of an infinite regress, and no argument.

So if the above is right (that everything a brain can do has a computational equivalent), then that entails that whatever’s being done to do the interpretation can be done by computation. But that’s wrong, as my argument shows; hence, what the brain does when it interprets something can’t be done by computation, and consequently, a simulation of a brain must lack that capability.
Seriously, the entire point of my argument is that every computation must be rooted in something ultimately non-computational. If you’re now conceding that, then you’re throwing out the claim that what a brain does can be done by a computer. If you’re claiming now that there must be a non-computational element to brains in order to ground computation, then that’s exactly the claim made by my argument.
The following statements are NOT equivalent:
Interpretation can be done by both computational things and noncomputational things.
Interpretation can’t be done by computational things.
Because those statements are not equivalent, “conceding” the first statement doesn’t allow the second statement to be deduced, because DUH.
God, what an obvious error. I had no idea that your argument was literally intentionally based on something so obviously wrong.

I have (from the beginning) accepted that computations can interpret things; the problem is, however, that they need to be themselves interpreted in order to do so. Which, if that interpretation is done computationally, leads to the regress; and if not, leads to computationalism being wrong, as there’s something that can’t be done by computation.
Not according to your own definitions. It’s not the case that computational systems need to be interpreted before they can do things such as interpreting - your definitions explicitly say that that’s not the case, because how a system is being interpreted has no impact whatsoever on what the system is doing. That’s right in your definitions.
Your definitions disprove your argument. I mean, right out of the gate, they disprove your argument.
Or, more precisely, they disprove your strawman. You don’t have to make a dubious recursion argument to use argumentum ad absurdum to disprove the claim “doing interpretation requires computation” - you can disprove it straightforwardly by citing the definition of “computation” to translate the phrase as “doing interpretation requires something else to be interpreting you”, and then cite the definition of interpretation to notice that whether or not something is interpreting you cannot change whether or not you’re doing something. Poof! The strawman claim is directly disproven.
Anybody whose positions rely on that claim should be quaking in their boots. I’m not, because my positions don’t rely on that claim and I’m not wearing boots.

Again, this is false. Both are right about what they see.
Whereas here, one is either right or wrong to call that person dead. Seriously, I don’t get what you don’t get about that.
"Hey, I’m not the one who said “It’s precisely the fact that all interpretations are on equal ground that’s the problematic bit (for computationalism, that is), since it means that there’s no objective fact of the matter regarding what computation a system instantiates—in opposition to the fact that there’s an objective fact of the matter regarding whether it’s conscious.” It’s not my fault that you made such an absurd statement.

Exactly. Which is why consciousness differs from computation: for computation, an interpretation is needed, while consciousness is objectively definite.
I don’t disagree!
Of course, this is only true because your back-assward definition of “computation” is structured such that whether a computational system is doing computation has literally no relation whatsoever to what the “computational system” is actually doing.

You attempted to demonstrate that a system could self-interpret into computing something, by performing a computation (or at least, something that has a computational equivalent).
BZZZT! You can’t use legerdemain to equate computations and noncomputational behaviors and then base an argument on me making that error. Fail!

Besides, note that this is actually incompatible with your new claim that computations aren’t necessary for interpretation: if computations are equivalent to interpretations (more accurately: if the computation that interprets P is equivalent to the interpretation that interprets P as computing), then of course the interpretation is computational.
I never claimed that computations are necessary for interpretations. That’s your strawman right there.

All my argument needs is for everything a brain does to have a computational equivalent; because that entails a claim that interpretation can be done, in every case, by computation. By showing that there must be cases where interpretation can’t be done by computation (as you now seem to agree), it’s shown that not everything a brain does has a computational equivalent, and thus, in particular, that there are some aspects of the brain that a simulation of it lacks.
I have never said that there are cases where interpretation can’t be done by “computation”. I have said that there are cases where it isn’t done by “computation”. Even when a “computational device” is the thing doing the interpretation, because the back-assward definition of “computation” we’re using is one that doesn’t actually describe the thing it’s describing - it describes the behavior of the interpreter.
Of course, by your own definitions when any system a) does interpretation of some thing, and b) the thing that it’s interpreting was produced by itself, then that interpretation is being done by a computation, because according to the definition of “computation”, the system now qualifies.
This describes every computer and computer program you’ve used - and every brain of any person you’ve ever met. So it seems they’re all doing computation, by your definitions. (Whether any of them qualifies as a computational device, of course, depends on how one specially pleads.)

But then, how can the same symbol, placed in the same physical relation to whatever it connects with, have different meanings, as in my box example?
It doesn’t; it only has the relationship to its external environment encoded by sensors. So if you allow me to expand your box a little bit so it is more like an organism that is interacting with its environment, it will make more sense, but I fully understand why you chose a simple example.
If your box is the brain and the switches are sensitive enough to be flipped up or down based on specific frequencies of light hitting them or not currently hitting them, then they represent some physical state that the organism/box can map, through trial and error, to some internal state (good/bad) based on other physical aspects and requirements for the survival of the system/organism. This mapping seems like it represents meaning in this context; in a different context (e.g., an alternate environment in which sound triggers the switches and the organism gets benefit from sound instead of light), it could be argued either that it has a different meaning (light vs. sound) or the same meaning (internal good vs. bad).
I suspect you will respond that this is just behaviorism, which is generally considered an insufficient explanation, which leads to this question: can you summarize the weaknesses of behaviorism?
I don’t think I quite understand what you mean here, but you’ll have to be careful with invoking notions like ‘data’, since that already implies an aboutness, and hence, meaning.
Then maybe replace data with state.
And still, that leaves open the question of what, exactly, I’m doing when I use my box to implement a given function, if not just computing that function—in the perfectly ordinary sense in which we use the word when we compute square roots using a pocket calculator. But if that’s actually just what computation is, then the ambiguity is inherent to it.
I need to think about the human using the calculator; there are some interesting things going on there that I never really thought about. I will get back to this.
**HMHW**, ignore my question about behaviorism; I just read about it in the SEP and it seems pretty extreme to ignore mental beliefs etc.
Maybe you would call my slightly extended box example physicalism?

Let’s return to the box once more (again, I can only appeal to the virtue of simplicity: it’s too easy to get confused when thinking about systems no human being could possibly consider in their entirety—that’s why you find you always have to appeal to ALUs or robots).
Well, let’s not return there; you are still missing that many researchers are dealing with more complex things, and you are still proceeding as if your point is important. At some levels it can be, but not in many others. Going forward, the Chinese room seems to be useless for AI prediction.

I’m actually not sure how to refute “You’ve never made any arguments at all” with cites. Clearly no arguments I have made would be considered arguments that I’ve made arguments, if the argument is that my arguments aren’t arguments.
So, you’re saying that this sort of requirement, to have an argument to ground each argument, would kinda trap you in a, hmm, let’s call it ‘neverending loop’, or perhaps ‘boundless feedback’, and thus, never actually allow you to make any argument at all? Hmm, interesting point!

I have been invoked!
Your last paragraph is obviously wrong. Consider what you’ve described:
“Now consider the additional lamp to guide the robot’s behavior. ‘Light on’ will cause it to ‘turn left’, and ‘light off’ will cause it to ‘turn right’. The lab door is to its right, the wall to its left.”
Thus:
Light is on = turn left = crash.
Light is off = turn right = no crash.
“Does a change in the interpretation now result in a change of behavior?”
Changing interpretation now:
Light is on = turn right = no crash.
Light is off = turn left = crash.
This isn’t a change in interpretation; it requires a change in wiring, in hardware. The robot moving is like the light coming on in the example above. For convenience:
Switches | Lamps | Interpreter | Robot
-----------------------------------------------
(↓↓, ↓↓) | (●●●) | (●) | (right)
(↓↓, ↓↑) | (●●○) | (○) | (left)
(↓↓, ↑↓) | (●○●) | (●) | (right)
(↓↓, ↑↑) | (●○○) | (○) | (left)
(↓↑, ↓↓) | (●●○) | (○) | (left)
(↓↑, ↓↑) | (●○●) | (●) | (right)
(↓↑, ↑↓) | (●○○) | (○) | (left)
(↓↑, ↑↑) | (○●●) | (●) | (right)
(↑↓, ↓↓) | (●○●) | (●) | (right)
(↑↓, ↓↑) | (●○○) | (○) | (left)
(↑↓, ↑↓) | (○●●) | (●) | (right)
(↑↓, ↑↑) | (○●○) | (○) | (left)
(↑↑, ↓↓) | (●○○) | (○) | (left)
(↑↑, ↓↑) | (○●●) | (●) | (right)
(↑↑, ↑↓) | (○●○) | (○) | (left)
(↑↑, ↑↑) | (○○●) | (●) | (right)
Effectively, the original box, the interpreting light, and the robot are now a big ‘box’, whose ultimate output is the robot’s behavior. It’s this that is interpreted as yielding a certain computation. So the robot changing its behavior would be changing the output while keeping the input the same—that is, changing the wiring of the ‘box’. Taken back to the original box, it would mean changing the lamp pattern that arises, given a certain set of switch flips.
If that’s confusing, go back up one level, to the ‘interpreter’, and change it so that the lamp lights up in the complementary cases. Has the interpreter now changed its interpretation? No, it’s just been wired up differently; in particular, it can still, depending on the interpretation of the box, be considered as an ‘even’ or ‘odd’-interpreter.
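Schematically (again just a toy, with made-up names): the wiring is a fixed map from the interpreter lamp to the robot’s motion, and an interpretation is a further, separate map from that motion to whatever we take it to signify. Swapping the latter changes nothing the robot does; to change what the robot does, you have to edit the former.

```python
WIRING = {"●": "right", "○": "left"}            # hardware: interpreter lamp -> turn

def robot_turn(lamp):
    return WIRING[lamp]                          # sheer causality, no meanings involved

reading_1 = {"right": "even", "left": "odd"}     # one way of taking the turns
reading_2 = {"right": "odd",  "left": "even"}    # the swapped way of taking them

for reading in (reading_1, reading_2):
    # Whichever reading we adopt, the physical trace of turns stays the same:
    assert [robot_turn(l) for l in ("●", "○")] == ["right", "left"]

# Making the robot turn the other way on 'lamp on' means editing WIRING itself,
# i.e. rewiring the hardware rather than changing a reading.
```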
And if you’re claiming that this change in hardware, in behavior, is the change in interpretation: then we lose again what computation is about. Once more, computation deals in things like sums and square roots, which symbols are interpreted as representing. As I put the matter above:

Now, this isn’t a computation. It’s just a physical behavior. If you say that that’s a computation, then everything computes, but what it computes is just its own behavior, and the challenge of getting a mind to come from the *computation* of the brain is the same as the challenge of explaining how mind could come from the *behavior* of the brain simpliciter, without any notion of ‘computation’ in the middle. These are the symbols; the computation—the computable function realized in the computation—is defined in terms of what these symbols represent.
The lamp pattern is not the output of the box’s computation; what that pattern is interpreted as is.

The light does come on in the same circumstances, but the change in the interpretation that the robot is doing causes the robot to behave differently.
What determines the behavior of the robot is just sheer physical causality. Take my boat-example from above:

Take, as an example, navigating a boat. If you turn the wheel one way, the rudder responds by changing its orientation appropriately. This might be implemented by a simple physical transmission of steering-action to rudder-orientation. The rudder does not interpret the wheel’s turning, it is simply influenced by it, through mere physical causality.
Now, the same holds if the orientation of the wheel is measured by a sensor—say, producing a positive voltage in proportion to the amount of turning if it’s turned one way, and a correspondingly negative amount if it’s turned the other way—and that voltage then sent to engage some appropriate servo turning the rudder: it’s still, at best, a short-hand way of speaking to claim that the rudder interprets the signal of the wheel; but what goes on is still merely physical causality—the turning causes a certain voltage to be applied, which causes motors to move, which causes the rudder to turn.
Then say you replace the wheel by a simple light-detection mechanism, perhaps something that responds to brightness (light above a certain baseline) by producing a certain positive voltage in proportion to the amount of light, and darkness (ambient light below a certain baseline) by producing a certain negative voltage. This will then turn the rudder in the same way as the wheel did (it cares not about what provides the voltage), and the ship will move towards where the light’s just right, and away from the darkness (or the other way around, as the case may be).
Is this due to the computation being implemented by the system of sensor and rudder? Well, no: it’s just again sheer physical causality. It moves towards the light because the presence of too much or too little light causes a certain voltage, which causes a certain servo movement, which causes a certain rudder turning; there’s no need to talk about anything interpreting anything else, much less computation, at all.
Any movement a robot performs can be explained in just the same sense (it’s just that the explanation will be much more complicated), as can the production of noises or images.
But the mere physical causality isn’t the computation; it’s just what governs the connection between the symbols of the system (in other words, it’s purely syntactical—like the rules that govern which combinations of words yield admissible sentences, which don’t suffice to single out which ones yield meaningful sentences). If we were to reduce computation to this causality, then computationalism simply wouldn’t help at all in explaining the mind: computationalism tries to answer the question ‘How does the behavior of the brain produce the mind?’ by ‘Because it instantiates a certain computation’; but if that computation just is the behavior of the brain, that’s not actually telling us anything.

But it’s NOT enough to claim that whatever a brain does cannot be done by a noncomputational method. That “only” in the quoted sentence wasn’t italicized by accident.
I realize that, which was why I tried to head off that particular confusion—regrettably, it seems without success. But again, it’s sufficient to claim that for everything that the brain does, there exists a computational equivalent—which is entailed by the idea that the brain could be simulable in all of its functions.
For suppose that you have some simulation of a brain. Then, in that simulation, whatever process brings about the self-interpretation of the brain must be done via computation (since everything in a simulation is done by computation). But then, the argument obtains: for if the simulated brain is to use a computation to interpret itself as computing, then that computation must itself be interpreted, and so on. But this means that the simulated brain can’t self-interpret, while the non-simulated brain can. Which means that there’s something that a brain can do, that a simulated brain can’t. Thus, there is something among the processes of a brain that can’t have a computational equivalent.
Hence, you need not hold that the interpretation the brain does on itself can only be done computationally for my argument to obtain; you merely need to hold that the brain can be simulated, and that that simulation is equivalent to the real deal—for then, everything the brain can do (whether it is computational or not) has a computational equivalent.
Also, you might recall that early on, you very explicitly dismissed the possibility of anything non-computational:

It eludes me why anybody would even want their minds to be “non-computational” - doesn’t that just mean that it doesn’t work in a rational or coherent manner? That it’s totally random? My thoughts happen for reasons, thanks very much. And even if brains do include some small amount of randomity, computers can simulate randomity, so no problems there. Whatever a wibble does, however a wibble works, it works somehow, and that “somehow” is a process and that process can be imitated and simulated. Doesn’t matter if the wibble is material or supernatural, that’s still the case.
According to the above, every process, no matter how it works, must be computational, by working ‘somehow’.
If, on the other hand, you’re now agreeing that what a brain does must include some non-computational aspect, then I’ll just heartily agree, since that’s the position I’ve been arguing for from the beginning.

The following statements are NOT equivalent:
Interpretation can be done by both computational things and noncomputational things.
Interpretation can’t be done by computational things.
They aren’t equivalent, but the equivalence isn’t needed—and in fact, that computational things can do interpretation is a prerequisite of the infinite regress (after all, that regress obtains because a physical system is interpreted by a computational system as computing, for which this second system must be interpreted as computing; and if the second system is interpreted as computing by a further computational system, then that, too, must be interpreted as computing, and so on—either to infinity, in which case, nothing gets interpreted ever, or to something non-computational, in which case, the regress bottoms out and the entire chain becomes a definite set of computations happily interpreting one another).
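To caricature the structure of the regress in code (and it is only a caricature, with every name in it invented for the purpose): a computational interpreter takes in uninterpreted symbols and puts out more uninterpreted symbols, so handing the job to yet another such interpreter never gets you out of the symbols.

```python
def computational_interpreter(symbols):
    # A computation can only ever transform symbol-strings into symbol-strings.
    return f"symbols-purporting-to-interpret({symbols})"

def interpret(symbols, levels):
    # Stack as many computational interpreters as you like...
    for _ in range(levels):
        symbols = computational_interpreter(symbols)
    # ...what comes out is still just a symbol-string, awaiting interpretation.
    return symbols

print(interpret("↑↓, ↓↑", levels=3))
```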
God, what an obvious error. I had no idea that your argument was literally intentionally based on something so obviously wrong.
See, this sort of thing always puzzles me. By saying that I’m so blindingly obviously wrong, you’re essentially saying that it took you nearly 500 posts to spot the blindingly obvious—do you really want to say that about yourself?
Not according to your own definitions. It’s not the case that computational systems need to be interpreted before they can do things such as interpreting - your definitions explicitly say that that’s not the case, because how a system is being interpreted has no impact whatsoever on what the system is doing. That’s right in your definitions.
It does have an obvious impact on whether it’s computing; that is, after all, rather the point. No interpretation (whether by a computational or a non-computational system), no computation. No interpretation, only physical behavior, and physical behavior isn’t computation.
"Hey, I’m not the one who said “It’s precisely the fact that all interpretations are on equal ground that’s the problematic bit (for computationalism, that is), since it means that there’s no objective fact of the matter regarding what computation a system instantiates—in opposition to the fact that there’s an objective fact of the matter regarding whether it’s conscious.” It’s not my fault that you made such an absurd statement.
Huh? What do you think is absurd about that statement? After all, I’ve given an explicit example where one can validly differ about whether a system computes a given function; but of course, one can’t validly differ about whether a system is conscious. So right there, consciousness and computations are put on different ground.
Of course, this is only true because your back-assward definition of “computation” is structured such that whether a computational system is doing computation has literally no relation whatsoever to what the “computational system” is actually doing.
OK, so I take it you’re now (back to) claiming that I’m not computing f using my box—because if I am, then it’s simply true that whether a system computes is dependent on its interpretation. But then, what is it I’m doing? And here—like so many times before—the two options are: either computation, or not. If I’m doing computation, then computing f must be possible; if I’m not, computationalism is already dead and gone.
BZZZT! You can’t use legerdemain to equate computations and noncomputational behaviors and then base an argument on me making that error. Fail!
OK, so it wasn’t you who said:

The brain is physical and follows physical rules. It can therefore be simulated, by simulating those physical rules. Doing that in an accurate simulation will necessarily give you the same emergent behaviors, because causality. Which means you’ll get a mind simulated in the computer. Pretty straightforward, give or take the unbelievably massive amount of storage and processing power it will take to simulate the behavior of that much physical mass in detail. It certainly is theoretically possible, though.
Because that’s an explicit claim that everything that a brain does, can be done by computation; thus, that every behavior of the brain has a computational equivalent. And that’s all that’s needed: because if that were the case, then a simulated brain, performing only computations, must be able to self-interpret in the same way as a physical brain does; but that’s blocked by the regress.
Again, there is simply no need for any claim on your part that computation is necessary for interpretation; it merely needs to be sufficient—that is, every case where interpretation happens can be realized by computation. Given that claim, we can just replace every point of interpretation by the equivalent computation, say, by packing the whole shebang into a simulation, and then we have a world in which every interpretation is done by computation; but this world runs into the infinite regress.
If you’re saying that every instance of interpretation can be performed by some computation, then that entails that there’s a possible world where it is performed by computation (the simulation of the former world); but that world falls prey to the regress, and hence, there is no such world. Consequently, it can’t be true that every instance of interpretation can be performed by some computation; ultimately, the regress must bottom out in something noncomputational.

Of course, by your own definitions when any system a) does interpretation of some thing, and b) the thing that it’s interpreting was produced by itself, then that interpretation is being done by a computation, because according to the definition of “computation”, the system now qualifies.
No. Say, a system instantiates a noncomputational process N realizing an interpretation I interpreting itself as computing C; then, all that’s happening is that the system has both noncomputational and computational aspects. (C could, after all, just be a computation of successive digits of pi, which would not realize any interpretation of the system.)

This comes at least close to equivocation on the notion of ‘bit strings’ (which is why I’d prefer not to use that notion, and rather, if we’re talking about the physical behavior, talk in terms of ‘switch up’ and ‘lamp on’). If the bit strings are just shorthand for switch states (etc.), then the table doesn’t describe any computation at all, since it just describes the physical behavior of the system, and computations aren’t physical behaviors (any more than the meaning of a word is the word itself). If the bit strings, on the other hand, are meant to be the abstract logical entities that are merely denoted by the symbols ‘0’ and ‘1’—say, the elements of the Galois field GF(2)—, then you’ve just taken an arbitrary interpretation, and propose now that this is the one true interpretation, while in fact others are possible.
The state of the lamps is not what’s being computed, but neither is it the infinity of all possible variants of f. What’s being computed is the thing defined by the particular arrangement of adders in the box, conveniently expressed in my table of bit values. You continue to claim that the bit values are arbitrary, and that the physical specification of the switch/light implementation is also arbitrary, and both statements are correct, but you neglect to mention the key thing that is not arbitrary: the particular relationship between the switch definitions, the light definitions, and the bit values in the computation table. They have to reflect what the box actually does, which means that any set of definitions that is actually correct in describing the box behavior isomorphically maps to every other. It means that defining one constrains how you must define the other, and taken together they all define the same computation by the very nature of the fact that they all produce the same box.
For the sake of both our sanities I suggest that this be last go-round on this. Go ahead and have the last word if you want.

This is just begging the question against my argument, in that it assumes that it’s possible for the computer to uniquely instantiate F. But this unique instantiation is what my argument shows doesn’t exist.
Well, no, but I suspect that somehow we’re talking past each other. My position here is simple. There is no ambiguity required to observe that the robot is behaving correctly and doing intelligent things. This is not an empty appeal to complexity but a demonstration of a final interpretative layer, robotic actuators, that create an incontrovertible objective real-world phenomenon, as per my earlier discussion. And the robot is working correctly because lower interpretive layers, like the bit-position values interpreted by the ALU, and the unique (unambiguous) values representing addition returned by the programmatic function F, are fixed within the system. Any other argument to me is just academic because, as GIGO mentioned, Eppur si muove. Or as Sybil said to Basil Fawlty when he proved there was no way to make a Waldorf salad because the chef didn’t have the ingredients, “But there it is!”.

It will not return any values of a function. It will produce symbols that, in order to be connected to values of a function, need to be interpreted accordingly. Again, the numeral ‘5’ and the number it denotes are different things—the former being, in this instantiation, composed of a particular array of pixels, the latter being an abstract object. The computer will, at best, output the numeral ‘5’; but what number that numeral refers to depends, once more, on the interpretation: on how it’s understood, on what abstract object it’s taken to be, on what it means.
It will return values that make the robot work. See immediately above.

That’s still the mistaken claim that the state of the lamps is what’s being computed by the box; but it’s not—the state of the lamps (the behavior of a robot, the playing of music, the showing of a video) is just the physical behavior of the system.
I did not claim that the state of the lamps was the computation – see my first remark in this post. It is, however, a consequence of the computation.
To expand on this a bit, the box has only a very primitive single layer of interpretation, that which is implemented in the binary adders, and so is correspondingly open-ended. Nevertheless, one can say that its end product, such as it is, is the illumination of lights according to the mapping defined by the table of bit values. There is wide scope here, because of the primitiveness of its interpretation-fixing, to make all kinds of different interpretations of what it’s doing, but that’s not relevant to defining what I regard as the actual computation.
And just a word on the notion that more complex systems, having more computations, have even more scope for alternate interpretations, which might help shed some light on how I see this issue. I maintain it’s the opposite, because more complex systems will generally have more interpretational layers built in, so there are far more constraints and correspondingly fewer options on meaningful alternative interpretations. If the computation is such that a robot demonstrates a specific set of obstacle-avoidance and navigational capabilities, for instance, then that is the end product of the computation as an empirical phenomenon. We might wonder what sorts of algorithms it uses to do so, but that’s a different matter. A robot that navigates is a robot that navigates: there’s nothing about this behavior that requires interpretation in the sense of resolving a computational ambiguity by means of another interpretive layer, as if, instead of navigating its way around the room, there is an equally valid account that perhaps the robot was really computing the value of pi.

Any movement a robot performs can be explained in just the same sense (it’s just that the explanation will be much more complicated), as can the production of noises or images.
Not quite, because the simple boat-steering systems you described are not computational and thus fundamentally different. But, yes, the music and video systems are computational, the robot is, and indeed the same argument can be made of a boat-steering system that, instead of steering towards a light, self-navigates an intricate canal system, or one that steers the boat through the proper navigation channels into a desired port using GPS information. Or, as you say, my example smart robot. Or a human being. In the end, it’s all “just the physical behavior of the system”. What else could it be? But before you go off on your “behaviorist” bandwagon, understand that one can also be a functionalist like Turing and most cognitive scientists and interest oneself in the associated processes and states that underlie the behaviors.

Maybe it’s just a brute difference of interpretation, but to me, the idea that the production of consciousness could be due to the mere shuffling around of symbols—without any interpretation, any meaning attached to them—strikes me as just the same idea as that the production of, say, rain could be just due to the right symbols being manipulated in the right way—that is, saying the right spell. It’s just the fundamental confusion that the symbols themselves, rather than what they mean, are important; that the universe, say, listens to the right Latin phrase in some special way.
Duly noted that you meant “intuition”, not “interpretation”. I didn’t want to get into the quagmire of consciousness, although I do believe it arises the same way (again, what else could it be?). Marvin Minsky thought the whole idea of consciousness was a vastly overrated delusion rather than anything tangible or any kind of valid self-knowledge. Anyway, no, intentional mental processes, at least, are not thought to be shuffling around symbols without meaning or interpretation, but rather that the interpretations are made by the mental processes themselves, in a manner analogous to how the interpretational layers discussed before fix the meanings of symbols. I note that you still deny this, but clearly an ALU in a trivial way fixes the semantics of bits in a computer register in a way that your box does not, and actual computer programs – and, many believe, human cognitive processes – fix the semantics of the symbols they use.

The state of the lamps is not what’s being computed, but neither is it the infinity of all possible variants of f. What’s being computed is the thing defined by the particular arrangement of adders in the box, conveniently expressed in my table of bit values. You continue to claim that the bit values are arbitrary, and that the physical specification of the switch/light implementation is also arbitrary, and both statements are correct, but you neglect to mention the key thing that is not arbitrary: the particular relationship between the switch definitions, the light definitions, and the bit values in the computation table.
And both my functions f and f’ instantiate exactly the same relationships; hence, if you claim that those relationships individuate their elements (a claim, by the way, known to be false in this form: this is Newman’s objection to Russell’s theory that (causal) relations are all we know of the world), then it follows that f and f’ are the same computation—despite the fact that they’re manifestly different computable functions, and computable functions are generally thought to formalize the notion of computation. So anything of that sort just flies in the face of theoretical computer science.
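For concreteness (leaning on the earlier sketch, where the box under the first interpretation comes out as plain addition of the 2-bit inputs; `rev2` and `rev3` below are the bit-reversal relabelings implicit in the second set of meaning-tables, and the names are mine):

```python
# f' is nothing but f with its arguments and values bijectively relabeled, so every
# purely relational fact about the one holds of the other; yet they differ as functions.
def rev2(x):                       # relabel two-bit inputs: 1 and 2 trade places
    return int(format(x, "02b")[::-1], 2)

def rev3(x):                       # relabel three-bit outputs by reversing their bits
    return int(format(x, "03b")[::-1], 2)

def f(x, y):                       # the box under the first interpretation
    return x + y

def f_prime(x, y):                 # the box under the second interpretation
    return rev3(f(rev2(x), rev2(y)))

print(f(1, 1), f_prime(1, 1))      # 2 1: manifestly different computable functions
```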
They have to reflect what the box actually does, which means that any set of definitions that is actually correct in describing the box behavior isomorphically maps to every other. It means that defining one constrains how you must define the other, and taken together they all define the same computation by the very nature of the fact that they all produce the same box.
And once again, this sort of view will yield, by unifying all of the computations compatible with a given behavior, a notion of computation in which it’s just another name for behavior. Which then falls short of explaining how anybody ever computed a square root or a sum. But, I notice that you seem to have missed that (I had thought, rather prominent) part of my last post.
There is no ambiguity required to observe that the robot is behaving correctly and doing intelligent things. This is not an empty appeal to complexity but a demonstration of a final interpretative layer, robotic actuators, that create an incontrovertible objective real-world phenomenon, as per my earlier discussion.
If that were the case, it wouldn’t be necessary, because then, the lamps of the box would likewise yield a ‘final interpretive layer’, since they’re as much of an ‘incontrovertible objective real-world phenomenon’. Lighting up a lamp is as much of a real-world behavior as moving an actuator is; but, like any real-world behavior (including that of actuators), it fails to fix the interpretive layer. Otherwise, this is exactly the position that claims that the lamp-lights are the outputs of the computation—and then, nobody has ever computed a square root (lamp lights aren’t square roots).
Any other argument to me is just academic because, as GIGO mentioned, Eppur si muove.
Sure, but the point is simply that the fact that it moves doesn’t mean that it computes. Movement can—in every case—be explained simply by physical causal forces: light hitting detectors, which produce voltages, which produce various other voltages in response, which are transmitted by wires, which switch on servos, which move actuators. Computation would be what each of these voltages, each of these signals, means; and that’s not fixed by how the actuators move, any more than it is fixed by what lamps light up (the two, again, being the same thing).
It will return values that make the robot work.
No. It will return voltages that cause the robot to behave in a certain way. Voltages are not values. Neither the symbol ‘5’, nor the symbol ‘V’, nor the symbol ‘101’, nor the symbol ‘five’ is the number five; rather, that number—that value—is what they all refer to. You keep conflating the two, and that’s the sole root of your position. But once more, the map, the symbol, simply isn’t the territory.
To expand on this a bit, the box has only a very primitive single layer of interpretation, that which is implemented in the binary adders, and so is correspondingly open-ended.
The binary adders are only binary adders if they’re interpreted in the right way. I get that this is hard to see; these sorts of symbols, like air, are so all-pervasive that it’s easy to consider them completely transparent; but in fact, the symbols are wholly opaque (which Magritte correctly recognized as encompassing The Human Condition).
Maybe it helps to think of computation in terms of just throwing out punch cards, like in the good old days. There, what a given punch card represents is less ‘obvious’, and it may be easier to see that there’s always interpretational ambiguity connected to them.
And just a word on the notion that more complex systems, having more computations, have even more scope for alternate interpretations, which might help shed some light on how I see this issue. I maintain it’s the opposite, because more complex systems will generally have more interpretational layers built in, so there are far more constraints and correspondingly fewer options on meaningful alternative interpretations.
It’s just a question of combinatorics. My box supports 16 input states and 8 output states, which yields 8[sup]16[/sup] possible interpretations of the type I’ve been considering. Adding more input or output states will just increase that. For instance, adding just the one ‘interpreter’ light extends the number of outputs to 16, since with each of the original outputs, the new light can signify either 1 or 0, thus yielding 16[sup]16[/sup] possible functions.
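Spelling the arithmetic out (a trivial sketch, just to make the counts explicit):

```python
# One candidate function for every way of assigning an output value to each input state.
inputs, outputs = 16, 8
print(outputs ** inputs)           # 281474976710656 functions from inputs to outputs
print((2 * outputs) ** inputs)     # 18446744073709551616 once the extra lamp doubles the outputs
```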
A robot that navigates is a robot that navigates:
Yet still, its navigational capacities remain entirely at the level of behavior, explicable in terms of some (vastly complicated) description of which voltages cause which switches to open or close.
Not quite, because the simple boat-steering systems you described are not computational and thus fundamentally different.
But when do they do computation? I could, for instance, use more than one light sensor—say, one to port, and one to starboard, and have some circuit compare the voltage levels, to send an aggregate voltage pattern to the rudder. And so on, always adding more discriminatory capacity, until I have a boat that navigates in just the same way a human does. Have I, at any point, left the level of pure behavioral describability? No, of course not.
But before you go off on your “behaviorist” bandwagon, understand that one can also be a functionalist like Turing and most cognitive scientists and interest oneself in the associated processes and states that underlie the behaviors.
Functionalism and behaviorism are pretty different things, though—notably, computationalism is a subspecies of functionalism, where the functions are generally taken to be computations.
Duly noted that you meant “intuition”, not “interpretation”. I didn’t want to get into the quagmire of consciousness, although I do believe it arises the same way (again, what else could it be?).
I don’t know, but that doesn’t mean it can’t be something I don’t understand. ‘What else could it be’ is a bad argument for phenomena that have resisted explanation for so long, because maybe, it’s simply something we don’t know yet.
Anyway, no, intentional mental processes, at least, are not thought to be shuffling around symbols without meaning or interpretation, but rather that the interpretations are made by the mental processes themselves, in a manner analogous to how the interpretational layers discussed before fix the meanings of symbols. I note that you still deny this, but clearly an ALU in a trivial way fixes the semantics of bits in a computer register in a way that your box does not, and actual computer programs – and, many believe, human cognitive processes – fix the semantics of the symbols they use.
Again, if it can clearly be done in a trivial way, you should be able to exhibit a simple example of it—as in, fix the interpretation of my box.

So, you’re saying that this sort of requirement, to have an argument to ground each argument, would kinda trap you in a, hmm, let’s call it ‘neverending loop’, or perhaps ‘boundless feedback’, and thus, never actually allow you to make any argument at all? Hmm, interesting point!
Wow. In the most charitable terms possible - your reading comprehension sucks.
(And I’m saying that literally - you are being so sleazy and rhetorically dishonest that any proper description would only be allowed in the pit.)

This isn’t a change in interpretation; it requires a change in wiring, in hardware. The robot moving is like the light coming on in the example above. For convenience:
Let’s try to keep this simple.
There is a box. It’s called B. It functions deterministically, based on wiring.
There is a robot. It’s called R. It functions deterministically, based on wiring.
R is interpreting B.
By the back-asswards definitions, this means that B is a computational device. R has not been shown to be a computational device, because nobody is interpreting it.

Effectively, the original box, the interpreting light, and the robot are now a big ‘box’, whose ultimate output is the robot’s behavior. It’s this that is interpreted as yielding a certain computation.
Bullshit. There is no stated observer that exists to “interpret” the robot’s behavior, so no interpretation can occur. There is no interpreter interpreting the robot and, per the dumbass definition of computation, there is no external evidence that the robot is a computational device.
And no, you don’t get to redefine the scenario to erase the scenario. I get that the scenario starkly demonstrates that your argument is garbage and that you wish you could pretend it doesn’t exist, but tough toenails.
There is a box. It’s called B. It functions deterministically, based on wiring.
There is a robot. It’s called R. It functions deterministically, based on wiring.
R is interpreting B.

And if you’re claiming that this change in hardware, in behavior, is the change in interpretation: then we lose again what computation is about. Once more, computation deals in things like sums and square roots, which symbols are interpreted as representing. As I put the matter above:
What nonsense! There is nothing about computation that says it’s restricted to “things like sums and square roots”. That’s ridiculous. Computation is about producing an output that can be interpreted as having meaning. Any meaning. If a box’s output is interpreted as a symbol meaning “go right”, then that’s not invalid due to not being a number.
If you’re going to special plead, try at least to be even slightly credible, would you? God!

I realize that, which was why I tried to head off that particular confusion—regrettably, it seems without success. But again, it’s sufficient to claim that for everything that the brain does, there exists a computational equivalent—which is entailed by the idea that the brain could be simulable in all of its functions.
For suppose that you have some simulation of a brain. Then, in that simulation, whatever process brings about the self-interpretation of the brain must be done via computation (since everything in a simulation is done by computation). But then, the argument obtains: for if the simulated brain is to use a computation to interpret itself as computing, then that computation must itself be interpreted, and so on. But this means that the simulated brain can’t self-interpret, while the non-simulated brain can. Which means that there’s something that a brain can do, that a simulated brain can’t. Thus, there is something among the processes of a brain that can’t have a computational equivalent.
Hence, you need not hold that the interpretation the brain does on itself can only be done computationally for my argument to obtain; you merely need to hold that the brain can be simulated, and that that simulation is equivalent to the real deal—for then, everything the brain can do (whether it is computational of not) has a computational equivalent.
Your fallacious argument doesn’t get more valid with repetition.
You assert that the simulated brain can’t self-interpret, but that doesn’t follow from the muddled argument that precedes it.
Let’s look at some things we know.
- Anything can be self-interpreting, just by looking at and interpreting its own outputs.
This includes ‘internal outputs’ (i.e., stored data), since you have helpfully stated that if you write down the output of a system and then later interpret that stored output, then the computation is done by the device that stored the data and not by the paper the data was written on. You probably don’t realize this, but it means that literally every computer program that stores a value in memory and then reads it back again is doing self-interpretation and is thus doing computation. (And yes, that’s all computers and computer programs. Literally all of them.)
- When a brain is being wholly simulated at the physical level, the operating processes of the simulated brain are actually not being interpreted by the simulator as being a conscious entity.
This requires, like, two moments of thought to understand, but the computational processes of the simulator are interpreting the simulation’s stored data as the positions and velocities of particles - and only as particles. For a very direct analogy, look at the screen you’re reading this on. The video driver is interpreting the data the processor is giving it and lighting up the pixels on the screen in various colors - but your video driver is not interpreting the images it presents. The video driver is not reading this post alongside you; it’s merely placing lit pixels next to each other and if you interpret them as interacting, then that’s your interpretation, not its.
This means that no physical-level processes are interpreting the simulated brains as being brains with minds - they don’t even know the simulated particles are forming a brain at all. A physical-level simulation doesn’t even know what a brain is. The brain activity itself is emergent, not interpreted. (Actually the fact that there are coherent objects at all is emergent - as far as the simulation is concerned no particle interacts with anything but its neighbors and the ones it’s pulling on with gravity and such.)
This means that the simulated brain starts out on exactly the same footing as a physical brain - nothing external to the emergent brain system is interpreting the emergent brain system as being a system at all.
- Your “But this means that the simulated brain can’t self-interpret” assertion continues to be based on nothing more substantial than wishful fever dreams. After all, as noted in point 1, any system is capable of self-interpretation if it does the simple act of interpreting anything it’s outputted previously.

Also, you might recall that early on, you very explicitly dismissed the possibility of anything non-computational:
According to the above, every process, no matter how it works, must be computational, by working ‘somehow’.
I will freely concede that way back then I hadn’t come to fully comprehend the idiocy of the definition you’re using for “computation”. Back then I thought it said something about what the object was doing - now I know better.

If, on the other hand, you’re now agreeing that what a brain does must include some non-computational aspect, then I’ll just heartily agree, since that’s the position I’ve been arguing for from the beginning.
I do not agree that the brain must include some non-computational aspect. I will however concede that every brain that has not existed since the dawn of time was once a collection of particles that didn’t self-interpret and thus didn’t compute anything. At some point the particles of the brain started self-interpreting, at which point they simultaneously began doing computation.

They aren’t equivalent, but the equivalence isn’t needed—and in fact, that computational things can do interpretation is a prerequisite of the infinite regress (after all, that regress obtains because a physical system is interpreted by a computational system as computing, for which this second system must be interpreted as computing; and if the second system is interpreted as computing by a further computational system, then that, too, must be interpreted as computing, and so on—either to infinity, in which case, nothing gets interpreted ever, or to something non-computational, in which case, the regress bottoms out and the entire chain becomes a definite set of computations happily interpreting one another).
You don’t really “get” self-interpretation, do you? Despite the fact you assert brains can do it, you don’t seem to understand that the term means something.
If a system happens to be self-interpreting, then it’s computational. By definition. The definitions of “interpret” and “compute” are both satisfied by a single interpreter, and in self-interpretation the computing thing is its own interpreter. So this nonsense about infinite regress being required is absurd.
On the other hand if you are claiming that this system can’t be started without a temporal predecessor that’s not computing, then I agree. However in the case of something that’s about to be self-interpreting, then the (proto-)computational system itself can easily and handily provide that interpretation as it boots up, and the only reason it doesn’t count as a computational system prior to that point is because the definition we’re using for “computational system” here is back-assward.

See, this sort of thing always puzzles me. By saying that I’m so blindingly obviously wrong, you’re essentially saying that it took you nearly 500 posts to spot the blindingly obvious—do you really want to say that about yourself?
It says I thought too highly of you. I mean, I admitted just above that it took me a while to twig to how stupid and useless your definition of “computation” is. (And yes, I realize you didn’t invent it. Philosophers can be stupid too. In fact, based on my experience with them, being stupid helps.)
This also gives me the opportunity to say something I’ve been tempted to mention for weeks, to explain how I’ve been coming up with so incredibly many different ways to rip your argument apart: It’s an interesting property of fallacious arguments that the error in them can usually be stated in lots of different ways. It’s like bubbles under the wallpaper; you can push the bubble down with some twisted argument or another, but that argument will inevitably have its own flaw, because when your argument can’t be written as a valid formal logical argument there is a reason. And because formal logic is all about the manipulation of symbols, you can flip things around and run things backward and rephrase things in lots of different ways, causing the errors to express themselves in various different ways, but never going away.

It does have an obvious impact on whether it’s computing; that is, after all, rather the point. No interpretation (whether by a computational or a non-computational system), no computation. No interpretation, only physical behavior, and physical behavior isn’t computation.
Yep, that’s that definition. (Yawn.)

Huh? What do you think is absurd about that statement? After all, I’ve given an explicit example where one can validly differ about whether a system computes a given function; but of course, one can’t validly differ about whether a system is conscious. So right there, consciousness and computations are put on different ground.
You have NEVER given a reason why people can’t validly differ on whether a system is conscious. Heck, the fact that solipsism is a thing -a non-disprovable thing- proves that philosophy as a whole has failed to give a reason why people can’t validly differ on whether a system is conscious!
Your special pleading on behalf of human brains being conscious is especially absurd given that your whole damn argument is that you could see a lifelike robot that was indistinguishable from a human in every way including what it says and does, and you think you can differ about whether that is conscious, despite it being impossible to interpret its “output” differently from that of a human.

OK, so I take it you’re now (back to) claiming that I’m not computing f using my box—because if I am, then it’s simply true that whether a system computes is dependent on its interpretation. But then, what is it I’m doing? And here—like so many times before—the two options are: either computation, or not. If I’m doing computation, then computing f must be possible; if I’m not, computationalism is already dead and gone.
Read your own posts again, dude. The definitions are all there.
But I’ll help, briefly. You can interpret the box as computing f, and that will mean that the box is “computing f”, because all that means is “some dude somewhere thinks the box is computing f”. It doesn’t say anything at all about what the box is actually doing, and if you stop interpreting the box as doing f, it will have not one whit of effect on anything else the box is doing, including being interpreted as doing f’ by somebody else or interpreting itself as being conscious. This “computation” shit just means “somebody else is doing something, and the “computational device” doesn’t know or care or do anything differently as a result.”

Because that’s an explicit claim that everything that a brain does, can be done by computation; thus, that every behavior of the brain has a computational equivalent. And that’s all that’s needed: because if that were the case, then a simulated brain, performing only computations, must be able to self-interpret in the same way as a physical brain does; but that’s blocked by the regress.
There is no regress.

Again, there is simply no need for any claim on your part that computation is necessary for interpretation, it merely needs to be sufficient—that is, every case where interpretation happens can be realized by computation. Given that claim, we can just replace every point of interpretation by the equivalent computation, say, by packing the whole shebang into a simulation, and then we have a world in which every interpretation is done by computation; but this world runs into the infinite regress.
I do not claim that computation is necessary for interpretation, you haven’t shown that self-interpretation is impossible (indeed, you’ve asserted the opposite), and there is no infinite regress.

If you’re saying that every instance of interpretation can be performed by some computation, then that entails that there’s a possible world where it is performed by computation (the simulation of the former world); but that world falls prey to the regress, and hence, there is no such world. Consequently, it can’t be true that every instance of interpretation can be performed by some computation; ultimately, the regress must bottom out in something noncomputational.
This is only true if there’s no such thing as self-interpretation, and you have asserted that there is such a thing as self-interpretation.

No. Say, a system instantiates a noncomputational process N realizing an interpretation I interpreting itself as computing C; then, all that’s happening is that the system has both noncomputational and computational aspects.
Not by your definitions; if the system’s output is being interpreted then it’s a computational system through and through.
Well, give or take that unsupported special pleading you do constantly, but by your definitions all that shit is invalid and can be safely ignored.

(C could, after all, just be a computation of successive digits of pi, which would not realize any interpretation of the system.)
Oh look a bait and switch. :rolleyes: I was explicitly talking about a self-interpreting system, so you switch that out for an explicitly non-self-interpreting system, because self-interpreting systems make you uncomfortable due to the fact they blow your argument into smoldering shreds.
And with that, I’m pleased to announce that we’ve spanned the distance from your long vacation to mine! I probably won’t be posting here much (if at all) for the next couple of weeks, so feel free to carry on with this thread without me! Or declare victory and drop the mike, if you wish - pretend that I haven’t said what I’ve said, or whatever pleases you. If you get lonely you can refer to my existing posts; they’re all still there.
Peace out! <mike drop>

(And I’m saying that literally - you are being so sleazy and rhetorically dishonest that any proper description would only be allowed in the pit.)
Once again, the force of your words merely reveals the corresponding lack of force of your arguments.
Let’s try to keep this simple.
There is a box. It’s called B. It functions deterministically, based on wiring.
There is a robot. It’s called R. It functions deterministically, based on wiring.
R is interpreting B.
By the back-asswards definitions, this means that B is a computational device. R has not been shown to be a computational device, because nobody is interpreting it.
The claim was that the robot is performing a definite computation without needing to be interpreted, which we know because it behaves in a certain way. This is not the case: the behavior of the robot is insufficient to fix the computation it performs (and hence, any interpretation it might perform), just in the same way that the behavior of the box isn’t sufficient to fix what it computes—this is, after all, where the need for interpretation comes from.
There is no interpreter interpreting the robot and, per the dumbass definition of computation, there is no external evidence that the robot is a computational device.
If you’re saying that what the robot does isn’t computation, then I’m fine with that—after all, I’ve early on stated that I believe it’s possible to create conscious machines, just that they wouldn’t be conscious by executing the right sort of program.
What nonsense! There is nothing about computation that says it’s restricted to “things like sums and square roots”. That’s ridiculous. Computation is about producing an output that can be interpreted as having meaning. Any meaning.
I have never claimed that computation is restricted to these things, but that you can compute them. Hence, the notion of computation that casts it just in terms of physical behavior fails to capture what computation is about.
You assert that the simulated brain can’t self-interpret, but that doesn’t follow from the muddled argument that precedes it.
It follows from the fact that all that the brain has available to it is computation, but that computation itself needs to be interpreted, leading to an infinite regress.
- Anything can be self-interpreting, just by looking at and interpreting its own outputs.
However, nothing can self-interpret by performing a computation, if there is nothing non-computational to provide (ultimately) the interpretation that produces this computation.
This includes ‘internal outputs’ (i.e., stored data), since you have helpfully stated that if you write down the output of a system and then later interpret that stored output, then the computation is done by the device that stored the data and not by the paper the data was written on. You probably don’t realize this, but it means that literally every computer program that stores a value in memory and then reads it back again is doing self-interpretation and is thus doing computation.
And once more, not if the process by which it ‘reads back’ the data is supposed to be computational. Because then, in order for that process to be present, an appropriate interpretation needs to be performed, and without that interpretation, there’s no reading back of the data. I would’ve thought this should be abundantly clear by now, but I guess I should just not make such assumptions anymore.
It’s trivial to include this ‘reading back’ into my box. Say that the ‘interpreting’-light acts as a register, and, in the next step, the new state of the output-lamps depends on the input-switches and the ‘interpreter’. The behavior of the box then, for instance, becomes:
Switches | Interpreter (old) | Lamps | Interpreter (new)
------------------------------------------------------------
(↓↓, ↓↓) | (●) | (●●●) | (●)
(↓↓, ↓↑) | (●) | (●●○) | (○)
(↓↓, ↑↓) | (●) | (●○●) | (●)
(↓↓, ↑↑) | (●) | (●○○) | (○)
(↓↑, ↓↓) | (●) | (●●○) | (○)
(↓↑, ↓↑) | (●) | (●○●) | (●)
(↓↑, ↑↓) | (●) | (●○○) | (○)
(↓↑, ↑↑) | (●) | (○●●) | (●)
(↑↓, ↓↓) | (●) | (●○●) | (●)
(↑↓, ↓↑) | (●) | (●○○) | (○)
(↑↓, ↑↓) | (●) | (○●●) | (●)
(↑↓, ↑↑) | (●) | (○●○) | (○)
(↑↑, ↓↓) | (●) | (●○○) | (○)
(↑↑, ↓↑) | (●) | (○●●) | (●)
(↑↑, ↑↓) | (●) | (○●○) | (○)
(↑↑, ↑↑) | (●) | (○○●) | (●)
(↓↓, ↓↓) | (○) | (●●○) | (○)
(↓↓, ↓↑) | (○) | (●○●) | (●)
(↓↓, ↑↓) | (○) | (●○○) | (○)
(↓↓, ↑↑) | (○) | (○●●) | (●)
(↓↑, ↓↓) | (○) | (●○●) | (●)
(↓↑, ↓↑) | (○) | (●○○) | (○)
(↓↑, ↑↓) | (○) | (○●●) | (●)
(↓↑, ↑↑) | (○) | (○●○) | (○)
(↑↓, ↓↓) | (○) | (●○○) | (○)
(↑↓, ↓↑) | (○) | (○●●) | (●)
(↑↓, ↑↓) | (○) | (○●○) | (○)
(↑↓, ↑↑) | (○) | (○○●) | (●)
(↑↑, ↓↓) | (○) | (○●●) | (●)
(↑↑, ↓↑) | (○) | (○●○) | (○)
(↑↑, ↑↓) | (○) | (○○●) | (●)
(↑↑, ↑↑) | (○) | (○○○) | (○)
So, the new state of the output-lamps then depends on the switches, and the state of the interpreter-lamp, which stores its state from the previous round. Has the computation become any more definite? Short answer: no. (Long answer: noooooo.) We’re just as free to use an arbitrary interpretation of switch- and lamp-states as before, and each of these interpretations will yield a different computation.
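To make that concrete, here is a small sketch of the extended box as a state machine (my own toy Python model; the convention that ↓ and ● stand for 0 while ↑ and ○ stand for 1 is just an assumption I used to reproduce the table above, and nothing hangs on it). It shows that the very same physical table, read under two different lamp conventions, yields two different functions:
[code]
from itertools import product

def physical_step(switches, interp_old):
    """Return (lamps, new interpreter state), all as lamp symbols."""
    # Reproduces the table above, assuming it was generated with the
    # convention ↓ = ● = 0 and ↑ = ○ = 1 (my assumption).
    a = int(''.join('1' if s == '↑' else '0' for s in switches[0]), 2)
    b = int(''.join('1' if s == '↑' else '0' for s in switches[1]), 2)
    c = 1 if interp_old == '○' else 0
    bits = format(a + b + c, '03b')
    lamps = tuple('○' if bit == '1' else '●' for bit in bits)
    return lamps, lamps[-1]          # new interpreter state = last lamp

# Two rival readings of the very same lamp symbols:
READ_A = {'●': 0, '○': 1}
READ_B = {'●': 1, '○': 0}

def function_under(reading):
    """The function the box 'computes' relative to a chosen reading."""
    table = {}
    for sw1, sw2, old in product(product('↓↑', repeat=2),
                                 product('↓↑', repeat=2), '●○'):
        lamps, _ = physical_step((sw1, sw2), old)
        table[(sw1, sw2, old)] = int(''.join(str(reading[l]) for l in lamps), 2)
    return table

print(function_under(READ_A) == function_under(READ_B))   # False
[/code]
Nothing in the wiring prefers READ_A over READ_B; the difference between the two ‘computed’ functions lives entirely in the reading.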
This requires, like, two moments of thought to understand, but the computational processes of the simulator are interpreting the simulation’s stored data as the positions and velocities of particles - and only as particles.
That doesn’t help at all, since a conscious state supervenes on the particle states, and thus, is only definite once those are definite. So there needs to be an interpretation to fix those particle states—which can, of course, not be supplied by anything inside the simulation, since then we’re just off to infinity and beyond.
For a very direct analogy, look at the screen you’re reading this on. The video driver is interpreting the data the processor is giving it and lighting up the pixels on the screen in various colors - but your video driver is not interpreting the images it presents.
The video driver isn’t doing any more interpreting than the ‘interpreter’-light is; the video card makes certain pixels light up in response to certain voltage patterns, but there’s no interpretation necessary for that—it’s just the voltages that ultimately supply (or trigger the supply of) the current that lights up the pixels. Given that voltage pattern, those pixels will light up; this is in contradistinction to symbols that require interpretation, because there, given that set of symbols, multiple interpretations are possible, and equally appropriate. So this is, once again, merely a case of physical behavior, which can’t be all that computation comes down to (or, again, nobody has ever computed a square root).
The video driver is not reading this post alongside you; it’s merely placing lit pixels next to each other and if you interpret them as interacting, then that’s your interpretation, not its.
So this basically explicitly concedes my point. Because then, if a computer outputs the symbols—lights up the pixel pattern—‘√9 = 3’, that means it hasn’t computed the square root of nine. Transferred to a brain, that means a brain never computes the mind (or its contents), but merely, neuron spiking patterns; but then, computationalism simply doesn’t constitute any progress on the problem of how mind is generated—it’s generated somehow from the behavior of the brain, but that’s not telling us anything.
This means that no physical-level processes are interpreting the simulated brains as being brains with minds - they don’t even know the simulated particles are forming a brain at all. A physical-level simulation doesn’t even know what a brain is. The brain activity itself is emergent, not interpreted.
And to be emergent, the physical level from which it emerges must be definite; but that definiteness, since we’re simulating it, can only come from an appropriate interpretation; if we stipulate that this interpretation is done by computation, then that computation, to be definite, again needs an interpretation; and so on and so on.
- Your “But this means that the simulated brain can’t self-interpret” assertion continues to be based on nothing more substantial than wishful fever dreams. After all, as noted in point 1, any system is capable of self-interpretation if it does the simple act of interpreting anything it’s outputted previously.
And, as shown copiously by now, that interpretation would itself need to be interpreted, if it’s done in a computational way (that is, based on symbolic manipulations). See the explicit example above.
I will freely concede that way back then I hadn’t come to fully comprehend the idiocy of the definition you’re using for “computation”.
If you disagree with my definition of computation, you’re free any time to propose one that you believe more adequately captures what computation means. You won’t find one that works, though—after all, computation just is the implementation of computable functions, and that’s all the definition I’ve been using.
I do not agree that the brain must include some non-computational aspect. I will however concede that every brain that has not existed since the dawn of time was once a collection of particles that didn’t self-interpret and thus didn’t compute anything. At some point the particles of the brain started self-interpreting, at which point they simultaneously began doing computation.
So either they started self-interpreting by doing a computation—then, we’re either off to the regress, or you’re again using your tired circular argument that the computation that’s being performed could supply the interpretation that makes the brain perform that computation. Or, the way it’s interpreting itself is noncomputational—but then, brains must include a noncomputational aspect. That’s literally the full set of possibilities, neither of which supports your stance.
You don’t really “get” self-interpretation, do you? Despite the fact you assert brains can do it, you don’t seem to understand that the term means something.
Entirely possible. Why don’t you just, beyond merely making tall claims, try to substantiate them for once by providing an example of a system that interprets itself? Because when I do it (see the above), it will still depend on how it’s interpreted. Of course, that’s just completely general, but I invite you to give it a shot (if only, of course, to see that it can’t possibly work).
If a system happens to be self-interpreting, then it’s computational. By definition. The definitions of “interpret” and “compute” are both satisfied by a single interpreter, and in self-interpretation the computing thing is its own interpreter. So this nonsense about infinite regress being required is absurd.
That once more just says that something computes, if it computes. Which doesn’t tell us anything about whether it computes.
On the other hand if you are claiming that this system can’t be started without a temporal predecessor that’s not computing, then I agree. However in the case of something that’s about to be self-interpreting, then the (proto-)computational system itself can easily and handily provide that interpretation as it boots up, and the only reason it doesn’t count as a computational system prior to that point is because the definition we’re using for “computational system” here is back-assward.
Again, then, just supply a better one! Be the change you want to see in the world!
This also gives me the opportunity to say something I’ve been tempted to mention for weeks, to explain how I’ve been coming up with so incredibly many different ways to rip your argument apart: It’s an interesting property of fallacious arguments that the error in them can usually be stated in lots of different ways.
Whereas you merely keep making the same error over and over. I had invited you to present an unambiguous debunking of my argument, by presenting a system that unambiguously implements f. But your failure to put your money where your mouth is—your repeated claims of the obviousness of how such a thing works, without anything actually substantiating the notion that it works at all—just shows that there’s no ‘there’ there.
And because formal logic is all about the manipulation of symbols, you can flip things around and run things backward and rephrase things in lots of different ways, causing the errors to express themselves in various different ways, but never going away.
Formal logic isn’t all about the manipulation of symbols. The symbolic level can make an argument valid, but only the semantic level can make it sound. Since you’re arguing from premises that have been variously shown to be wrong (when you’re bothering to argue at all, and not merely claiming things to be obvious), you simply fail at establishing what you claim to establish.
You have NEVER given a reason why people can’t validly differ on whether a system is conscious. Heck, the fact that solipsism is a thing -a non-disprovable thing- proves that philosophy as a whole has failed to give a reason why people can’t validly differ on whether a system is conscious!
Because, in order to validly differ, both the sentences ‘the system is conscious’ and ‘the system isn’t conscious’ must be capable of being true simultaneously—such as the sentences ‘the system implements f’ and ‘the system implements f’’. But they’re not: a system either is conscious, or it isn’t. This doesn’t mean that we must necessarily be able to know whether it’s conscious; so people can differ in their belief that a system is conscious, but if they do, only one of their beliefs can, in fact, apply to reality.
Consider the particle horizon of the universe. We’ll never know what’s beyond it; it could be that the universe is infinite, or it could be that it’s not. That we can’t establish that for certain, however, doesn’t mean that somebody who claims it’s infinite and somebody who claims it’s not are equally right about this—only one will be.
Your special pleading on behalf of human brains being conscious is especially absurd given that your whole damn argument is that you could see a lifelike robot that was indistinguishable from a human in every way including what it says and does, and you think you can differ about whether that is conscious, despite it being impossible to interpret its “output” differently from that of a human.
The robot will either be conscious, or not, whether I know that or not. But it won’t be conscious by virtue of instantiating the right computation, since I’ve proposed an argument that makes this impossible. Consciousness isn’t due to computation, since any attempt to claim that it is runs headlong into the infinite regress.
But I’ll help, briefly. You can interpret the box as computing f, and that will mean that the box is “computing f”, because all that means is “some dude somewhere thinks the box is computing f”.
No. It means that, if I want to know the value of f(x) for some x, all I need to do is input x in an appropriately symbolically coded form, in order to obtain f(x). That’s just the same thing as we do when we use a calculator, so anybody claiming that using the box in this way isn’t really computing f is saddled with claiming that we don’t use a calculator to compute square roots (but merely, to produce certain symbols). But due to this use, I acquire some new knowledge, for instance, the value of the square root of x. This has been computed, somehow: I didn’t know it beforehand.
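In code terms, ‘using the box to compute f’ is nothing more mysterious than the following encode/operate/decode loop (again a toy sketch of my own; the encoding dictionaries are arbitrary choices, which is exactly the point):
[code]
ENC = {0: '↓↓', 1: '↓↑', 2: '↑↓', 3: '↑↑'}   # numbers -> switch symbols (my choice)
DEC = {'●': '0', '○': '1'}                   # lamp symbols -> digits (my choice)

def box_lamps(sw1, sw2):
    """Stand-in for the physical box (interpreter light held fixed)."""
    a = int(sw1.replace('↓', '0').replace('↑', '1'), 2)
    b = int(sw2.replace('↓', '0').replace('↑', '1'), 2)
    return format(a + b, '03b').replace('0', '●').replace('1', '○')

def compute_f(x, y):
    lamps = box_lamps(ENC[x], ENC[y])               # use the box...
    return int(''.join(DEC[l] for l in lamps), 2)   # ...decode under the chosen reading

print(compute_f(2, 3))   # 5, relative to ENC/DEC; a different reading of
                         # the same lamps would yield a different value
[/code]
The new knowledge (that f(2, 3) = 5) comes from the physical table plus the chosen coding, just as with a pocket calculator.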
It doesn’t say anything at all about what the box is actually doing, and if you stop interpreting the box as doing f, it will have not one whit of effect on anything else the box is doing, including being interpreted as doing f’ by somebody else or interpreting itself as being conscious.
And if it’s interpreting itself as being conscious, then, if whatever process supplies that interpretation is a computation, there must be an interpretation supplying that computation, in which case:
There is no regress.
There is a regress. Seriously, it’s, like, right there; I’ve explicitly written it down.
I do not claim that computation is necessary for interpretation, you haven’t shown that self-interpretation is impossible (indeed, you’ve asserted the opposite), and there is no infinite regress.
I have shown that self-interpretation by computation is impossible, from which it follows that self-interpretation must work in a noncomputational way.
This is only true if there’s no such thing as self-interpretation, and you have asserted that there is such a thing as self-interpretation.
No. The bit you quoted shows that self-interpretation can’t be performed by means that have a computational equivalent. Because if that were the case, we could imagine a world (a simulation), where all interpretation is performed via computations, by just replacing all instances of interpretation by their computational equivalent. But in this world, nothing ever gets interpreted at all. Since whatever a system computes is due to how it’s interpreted, whatever a system interprets (if it interprets by computation) is likewise due to how it’s interpreted. If the system is supposed to supply that interpretation (again, by computational means) itself, we’re back with the circular argument, which fails to establish anything about whether it computes, and hence, interprets, at all. If some other system is supposed to yield the interpretation (by computation), then that system will need to be interpreted itself, and so on.
Not by your definitions; if the system’s output is being interpreted then it’s a computational system through and through.
Nothing in my definitions entails that a system that’s interpreted as computing can’t at the same time perform noncomputational processes; indeed, I take it as obvious that they do, since all properties of a system that don’t boil down to structural properties are noncomputational (but, that’s a rather more involved notion). Being interpreted in some particular way, as you by now have correctly grasped, doesn’t change anything about the system; so it doesn’t change anything about its noncomputational properties.
Oh look a bait and switch. :rolleyes: I was explicitly talking about a self-interpreting system, so you switch that out for an explicitly non-self-interpreting system, because self-interpreting systems make you uncomfortable due to the fact they blow your argument into smoldering shreds.
Then, you’re just back to arguing the circular position that a system P could realize computation C by means of interpretation I, which interpretation is provided by computation C, and that therefore, the system in fact realizes C. Which simply doesn’t follow, as the most you can conclude from this is that P realizes C if P realizes C.
And with that, I’m pleased to announce that we’ve spanned the distance from your long vacation to mine! I probably won’t be posting here much (if at all) for the next couple of weeks, so feel free to carry on with this thread without me!
I’m pleased to hear that. You seem like you need a good, long vacation, so go have some fun and relax a little! You should try to keep this off your mind for a while, but should you return with new arguments, I’ll be eager to hear them.
Or declare victory and drop the mike, if you wish […]
Peace out! <mike drop>
Well, that wouldn’t really have much of an effect now, would it?
Sorry, I forgot to address this earlier.

I suspect you will respond that this is just behaviorism, which is generally considered an insufficient explanation, which leads to this question: can you summarize the weaknesses of behaviorism?
Yes, I do think that just leads to behaviorism; but more importantly, to hold that the sort of thing you outlined delineates what a computation is means that the notion of computation simply adds nothing, and makes no progress on the problem of explaining the mind.
A part of this problem is that the mind evidently, in some way, imbues its contents with meaning, even when these meanings are abstract concepts. We have a concept of a ‘square root’, which isn’t reducible to the symbols we use to denote it. So, we must be able to connect whatever the brain does to these abstract symbols.
The problem then is that there’s no obvious way in which neuron spiking frequencies connect to square roots. So one proposal is that they realize an appropriate computation: computations, after all, also seem to connect to things like square roots. We don’t take ourselves to be saying anything extraordinary at all when we say, I’ve computed the square root of 313 using this calculator. Then, you’d have a chain that connects neuron spiking frequencies with square roots: the spiking performs a computation, and the square roots are what’s being computed. That, if it’s possible, would be significant progress.
But a notion of computation that just reduces things again to neuron spiking frequencies—or lamps lighting up, or robots showing a certain behavior—gives away that hope of progress.
As for behaviorism in general, the locus classicus of its critique is generally considered to be Noam Chomsky’s “A Review of B. F. Skinner’s Verbal Behavior”.

Maybe you would call my slightly extended box example physicalism?
The problem with this kind of physicalism is the notion of multiple realizability. If, say, pain is just identical to a certain pattern of neuron spikings, then only creatures with neurons, for instance, could have pain in this sense. So we need a notion that pain must not be identical to them, but in some way be realized by them, in order to be able to have the same pain be realized in a different physical substrate (say, silicon chips). So that’s why it was proposed that pain isn’t identical to the physical behavior, but is, instead, identical to the computation (or more generally, the function) that this behavior realizes. Since the same computation can be realized by different physical substrates, then, the same pain can be felt by organisms with different physical makeup (lucky them, I guess). But then, we need to face the problems I’ve outlined.
HMHW
You said
“A part of this problem is that the mind evidently, in some way, imbues its contents with meaning, even when these meanings are abstract concepts. We have a concept of a ‘square root’, which isn’t reducible to the symbols we use to denote it. So, we must be able to connect whatever the brain does to these abstract symbols.”
That was very clear and helped me to iron out this discussion a bit more in my “head”. I’ve been trying to keep up with the discussion, from the start, but it’s a big climb; I had to start with how gates are built first, and I’m still working through that.
Do you have any thoughts on how the mind might do this (imbue contents with meaning?)
I noticed up thread that you had mentioned a paper you were writing on a more naturalistic explanation of the problem outlined above?

A part of this problem is that the mind evidently, in some way, imbues its contents with meaning, even when these meanings are abstract concepts. We have a concept of a ‘square root’, which isn’t reducible to the symbols we use to denote it. So, we must be able to connect whatever the brain does to these abstract symbols.
The problem then is that there’s no obvious way in which neuron spiking frequencies connect to square roots.
Seems to me that they have not looked for those neurons because total math heads are not commonly available in research, but simplifying things a little:
https://www.nature.com/news/2010/101027/full/news.2010.568.html
People have used mind control to change images on a video screen, a study reports. The volunteers, whose brains were wired up to a computer, enhanced one of two competing images of famous people or objects by changing firing rates in individual brain cells.
The research, by Moran Cerf from the California Institute of Technology in Pasadena and his colleagues, demonstrates how our brains, which are constantly bombarded with images, noise and smells, can, through conscious thought, select what stimuli to notice and what to ignore

So one proposal is that they realize an appropriate computation: computations, after all, also seem to connect to things like square roots. We don’t take ourselves to be saying anything extraordinary at all when we say, I’ve computed the square root of 313 using this calculator. Then, you’d have a chain that connects neuron spiking frequencies with square roots: the spiking performs a computation, and the square roots are what’s being computed. That, if it’s possible, would be significant progress.
One thing that sounds a lot like pseudoscience on your part here is to deny any progress described in research like the one cited here.
https://www.nature.com/news/2010/101027/full/news.2010.568.html
Neuroscientists have collaborated with Fried for many years, exploiting the waiting time of patients to do simple experiments to probe how the human mind works while listening in to the recording from the electrodes. Thanks to improved data analysis, they can now extract from noisy electrical background the firing of single neurons.
In the last six years or so they have shown that single neurons can fire when subjects recognise — or even imagine — just one particular person or object (see ‘Neuroscience: Opening up brain surgery’). They propose that activity in these neurons reflect the choices the brain is making about what sensory information it will consider further and what information it will neglect.

But a notion of computation that just reduces things again to neuron spiking frequencies—or lamps lighting up, or robots showing a certain behavior—gives away that hope of progress.
As mentioned before, you do have a peculiar notion of what is progress and what it is not.

That was very clear and helped me to iron out this discussion a bit more in my “head”. I’ve been trying to keep up with the discussion, from the start, but it’s a big climb; I had to start with how gates are built first, and I’m still working through that.
I’m happy to hear that my efforts have been of some use to somebody. If you have any questions, I’ll try to give an answer, if I can.
Do you have any thoughts on how the mind might do this (imbue contents with meaning?)
I noticed up thread that you had mentioned a paper you were writing on a more naturalistic explanation of the problem outlined above?
I have published a paper on that, yes, but unfortunately, if I share some details, that would be quite easily googleable—and I’m not sure I want to give up my anonymity on here like that.

Seems to me that they have not looked for those neurons because total math heads are not commonly available in research, but simplifying things a little:
I have no idea what you think those links you regurgitate in my general direction have to do with what I’ve been arguing. If you feel you have a point to make, why not just try and make it?
That a certain set of neurons (which may be a single neuron, though the ‘grandmother neuron’ thesis has its detractors) fires when we think of a square root is an obvious triviality; the difficult part is how that connects to the notion of ‘square root’, and the hard part is how we come to have any phenomenal experience of that.
Likewise, it’s no wonder that neuron spikings can be used to control a computer—I’m doing it right now! Of course, I’m doing it via motor neurons, muscle contractions, key presses and the like, but that’s not any different in principle from doing it via a wire.

I have no idea what you think those links you regurgitate in my general direction have to do with what I’ve been arguing. If you feel you have a point to make, why not just try and make it?
That a certain set of neurons (which may be a single neuron, though the ‘grandmother neuron’ thesis has its detractors) fires when we think of a square root is an obvious triviality; the difficult part is how that connects to the notion of ‘square root’, and the hard part is how we come to have any phenomenal experience of that.
This was not addressed to me but I’m going to stick my oar in here and make a brief comment anyway. I note that the second link – how it connects to the notion of a square root – is a reference to intentionality, and the third link is a reference to Chalmers’ “hard problem” claim. As essentially an account of the distinction between mental and physical phenomena, intentionality is in some sense a variant of metaphysical discussions about the nature of mind that have been going on for thousands of years, mostly not very productively.
As your link states right at the beginning, “To say of an individual’s mental states that they have intentionality is to say that they are mental representations or that they have contents”. Indeed, and it strikes me as incredibly ironic that you think this somehow supports your argument against the computational theory of mind – my particular hot button here – when in fact intentionality is precisely what theorists like Fodor and indeed the mainstream of cognitive science attempts to address through computational paradigms. A successful CTM at its core is one which plausibly shows that mental objects are in fact representations with semantic properties. That is precisely what Fodor’s representational theory of mind does, RTM being merely his specific version of CTM.
Thus,
Fodor developed two theories that have been particularly influential across disciplinary boundaries. He defended a “Representational Theory of Mind,” according to which thinking is a computational process defined over mental representations that are physically realized in the brain. On Fodor’s view, these mental representations are internally structured much like sentences in a natural language, in that they have both a syntax and a compositional semantics.
https://www.iep.utm.edu/fodor/
As for the man being confused, or just plain wrong, or whatever it is you think he is in his views that are so diametrically opposed to yours, from the same link, it’s a tall mountain to climb:
Jerry Fodor was one of the most important philosophers of mind of the late twentieth and early twenty-first centuries. In addition to exerting an enormous influence on virtually all parts of the literature in the philosophy of mind since 1960, Fodor’s work had a significant impact on the development of the cognitive sciences.
As for Chalmers’ “hard problem” claim, this is a claim that “the problem of experience will persist even when the performance of all the relevant functions is explained” which is at best controversial, and at worst, as some AI researchers note in their dismissal of a useful notion of consciousness, meaningless.

I have no idea what you think those links you regurgitate in my general direction have to do with what I’ve been arguing. If you feel you have a point to make, why not just try and make it?
Well, this is evidence that you are not paying attention; I have made the point several times already: your argument is mostly irrelevant, and reading about why that is so, I have to agree with most people involved in the matter, who see that the argument is also a flawed appeal to intuition nowadays.
However, that was not the immediate reason for the reply:

That a certain set of neurons (which may be a single neuron, though the ‘grandmother neuron’ thesis has its detractors) fires when we think of a square root is an obvious triviality; the difficult part is how that connects to the notion of ‘square root’, and the hard part is how we come to have any phenomenal experience of that.
Likewise, it’s no wonder that neuron spikings can be used to control a computer—I’m doing it right now! Of course, I’m doing it via motor neurons, muscle contractions, key presses and the like, but that’s not any different in principle from doing it via a wire.
And, missing the point again: the reply that you call regurgitated was only made to point out one of the many premises you are working with that are wrong. In this case, it turns out that there is progress: neuroscientists are beginning to see beyond the noise and to identify what is being stored, and where, in the brain. And there are experiments that show the ways the brain binds stored information and information coming from disparate sensory sources, as Hawkins and others report. It does not seem impossible that we will, in the future, be able to check how neurons connect to things like square roots, or how the brain connects a concept or person like Chaplin to Einstein, but not Marilyn Monroe. **
- Why, do you think I may be a Chinese room?
A few people before accused me of being a bot, and suffice to say, they are even more irrelevant nowadays than your argument.
** Despite rumors, there was no evidence that Marilyn Monroe ever met Einstein, though many think that she did. Many don’t know that Einstein did meet Chaplin.

As your link states right at the beginning, “To say of an individual’s mental states that they have intentionality is to say that they are mental representations or that they have contents”. Indeed, and it strikes me as incredibly ironic that you think this somehow supports your argument against the computational theory of mind – my particular hot button here – when in fact intentionality is precisely what theorists like Fodor and indeed the mainstream of cognitive science attempts to address through computational paradigms.
You misunderstand my intent. I was merely trying to point out to GIGObuster that his links are aimed at a different problem from those that make explaining the mind the unique task that it is—basically, the search for the neural correlates of consciousness. The research he presents constitutes progress on that level, but few (if any) would accept that it makes any appreciable progress on either intentionality or phenomenal experience.
I actually agree with your claim that the computational theory seems to be just the sort of thing that would be needed in order to make sense of intentionality. I explained why in a response to RaftPeople earlier on, but right now, I’m just getting gateway timeouts, and can’t link to it.
As for Chalmers’ “hard problem” claim, this is a claim that “the problem of experience will persist even when the performance of all the relevant functions is explained” which is at best controversial, and at worst, as some AI researchers note in their dismissal of a useful notion of consciousness, meaningless.
Still, it’s clear that there is something there to be explained: either, actual phenomenal experience; or, should that not be a valid notion, how we come to think it is. Many think that the latter should be simpler, but I think that’s actually quite dubious—even the capacity of being deceived about something seems to require just those mental capabilities that eliminativism seeks to get rid of. But this is really a different kind of discussion from the present one.

And, missing the point again: the reply that you call regurgitated was only made to point out one of the many premises you are working with that are wrong. In this case, it turns out that there is progress: neuroscientists are beginning to see beyond the noise and to identify what is being stored, and where, in the brain.
What is being stored in the brain isn’t square roots, though. If mental activity has representational character, then neuron spiking patterns stand to concepts in the same relation as words stand to their meaning, or numerals stand to the numbers they represent. The number five is different from the symbols ‘five’, ‘V’, ‘5’ or ‘101’; otherwise, we could not say that these all mean the same thing. That is, we can’t simply conflate the symbols with their meanings; the studies you cite, however, merely relate to certain symbols being tokened. They don’t explain (and don’t pretend to explain) *how* these symbols—neuron spiking patterns—actually relate to the concepts they represent; they merely present evidence that they do.
But the former is the question relevant to the thread; the latter, as linked to above, ultimately merely concerns the so-called neural correlates of consciousness—which must clearly be there on any naturalistic theory, but whose identification doesn’t directly tell us about their connection to things like meaning or experience.