As I already pointed out, and as any good introduction to the subject will stress, Turing machines are an abstract mathematical model. As such, they manipulate abstract objects, such as Boolean truth values; thus, a TM implementing a mapping from binary strings to binary strings directly implements the function associated with that mapping.
As Wikipedia puts it right in the first sentence:

[quoted definition from the Wikipedia article on Turing machines]

Following the links, we learn that a mathematical model of computation is:

[quoted definition from the article Model of computation - Wikipedia]

And likewise, an abstract machine is:

[quoted definition from the article Abstract machine - Wikipedia]
Consequently, the Turing machines used to define computation are theoretical abstractions that compute mathematical functions. Such an abstraction has to be distinguished from its concrete realization. Actually trying to build a Turing machine will leave you with a physical system that does not directly connect to anything abstract; rather, you will have to interpret it properly. The argument I’ve given can then be exactly repeated on the level of the TM’s machine table to show that the same physical system can be interpreted to implement different abstract Turing machines.
This is no different, for example, from the case of an abstract Boolean gate versus its concrete realization, say by means of an electronic circuit. The abstract Boolean AND-gate is defined by its truth table, which is given in terms of Boolean truth values: if both inputs are ‘1’, or ‘true’, then so is the output. But any concrete physical realization will have physical states, such as voltage values, at its inputs and outputs. Computation, however, is not done over voltage values, but (in this case) over Boolean truth values. Thus, these voltage values have to be interpreted as Boolean truth values. The fact that this interpretation is never unique then means that the same physical system can be considered to implement different abstract gates.
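To make this concrete, here is a minimal sketch (the voltage-level model and all names are my own, purely illustrative): the very same physical input–output behavior reads as an AND-gate under one assignment of bits to voltages, and as an OR-gate under the inverted assignment.

```python
# Toy model of a physical gate: the same behavior, two readings.
def device(v1, v2):
    """Physical behavior: the output is 'high' exactly if both inputs are 'high'."""
    return "high" if (v1 == "high" and v2 == "high") else "low"

# Interpretation 1: 'high' means 1, 'low' means 0.
enc1, dec1 = {1: "high", 0: "low"}, {"high": 1, "low": 0}
# Interpretation 2: 'high' means 0, 'low' means 1.
enc2, dec2 = {0: "high", 1: "low"}, {"high": 0, "low": 1}

def compute(enc, dec, a, b):
    """The Boolean function the device realizes under a given encoding/decoding."""
    return dec[device(enc[a], enc[b])]

for a in (0, 1):
    for b in (0, 1):
        assert compute(enc1, dec1, a, b) == (a & b)  # reads as an AND-gate
        assert compute(enc2, dec2, a, b) == (a | b)  # reads as an OR-gate
```

By De Morgan duality, inverting the reading of every terminal turns conjunction into disjunction; nothing about the circuit itself singles out one reading.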
This doesn’t entail that the Boolean gate, or equivalently, a given TM, does not uniquely implement a computation, but merely that the abstract machine is not uniquely associated with a given physical system.
Note that I also said that one might further relax this. But of course, for my argument, it doesn’t matter whether it’s ten, ten thousand, or infinitely many computations that one could associate with a given physical system—the salient point is still that it’s not unique.
Sure. Equivalently, you could interpret the system as not just computing x[sub]1[/sub] + x[sub]2[/sub], but also x[sub]1[/sub] + x[sub]2[/sub] + 1, x[sub]1[/sub] + x[sub]2[/sub] + 2, and so on. But in my experience, people will resist the notion that these are different computations, since they’re essentially isomorphic, unlike my f and f’. Hence, I prefer to use the more restrictive notion, in order to guard against such rejoinders.
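A minimal sketch of such shifted readings (the helper names here are hypothetical, and I assume the box displays its output as a three-bit binary string):

```python
# The same physical output string, read under a family of shifted codes.
def add_outputs(x1, x2):
    """The output string the box displays, as a 3-bit binary numeral."""
    return format(x1 + x2, '03b')

def decode(s, k=0):
    """Decoder k reads output string s as its binary value plus k."""
    return int(s, 2) + k

s = add_outputs(1, 2)      # one and the same physical output
assert decode(s, 0) == 3   # read as computing x1 + x2
assert decode(s, 1) == 4   # read as computing x1 + x2 + 1
assert decode(s, 2) == 5   # read as computing x1 + x2 + 2
```

Each decoder yields a different function, but the functions differ only by a fixed relabeling of outputs, which is why they invite the ‘essentially the same computation’ rejoinder.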
Yes, but it doesn’t do anything but that. If that’s what’s meant by computation, then it either collapses to identity theory, or at most to logical behaviorism (if you’re happy to apply the same table to qualitatively different systems). Either case would be fatal to the computational theory of mind, and in fact, both approaches are generally thought to be untenable.
So basically, the argument must be wrong because it entails a conclusion you don’t like.
There are multiple tables you could associate with the box—in fact, roughly 2.8 * 10[sup]14[/sup], the number of possible tables from four input bits to three output bits. I can interpret each switch and each lamp separately, so that, for example, S[sub]11[/sub] being ‘up’ means ‘1’ while S[sub]12[/sub] being ‘up’ means ‘0’. There is no reason different switches or lamps need to be interpreted in the same way. If this offends your intuition, simply consider the inputs and outputs to be realized differently—say, the inputs are a switch, a lever, a knob, and a button, while the outputs are a light that’s either green or yellow, a light that’s either blue or red, and a light that’s either orange or purple. That ‘orange’ means ‘1’ does not entail anything about whether ‘blue’ means ‘1’ or ‘0’, and the position of the lever does not fix the meaning of the knob.
The point is now that each such table corresponds to a different computable function from binary strings to binary strings, and each has an equally valid claim to being implemented by the box.
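A minimal sketch, assuming for concreteness that the box is a two-bit adder (four switches setting two 2-bit numbers, three lamps displaying the sum—the wiring is my assumption), shows how flipping the reading of even a single component yields a distinct function, and where the 2.8 * 10[sup]14[/sup] figure comes from:

```python
# Toy box: four switches in, three lamps out (assumed adder wiring).
def box(positions):
    """Physical behavior: with 'up' wired as 1, the lamps display the sum."""
    b = [1 if p == 'up' else 0 for p in positions]
    s = (2 * b[0] + b[1]) + (2 * b[2] + b[3])
    return tuple('on' if (s >> k) & 1 else 'off' for k in (2, 1, 0))

def read_as(flips, bits):
    """The function on bit strings induced by a per-component interpretation:
    flips[i] == 1 flips which physical state of component i counts as '1'."""
    positions = tuple('up' if bit ^ f else 'down'
                      for bit, f in zip(bits, flips[:4]))
    return tuple((1 if lamp == 'on' else 0) ^ f
                 for lamp, f in zip(box(positions), flips[4:]))

std = (0,) * 7                 # every 'up'/'on' read as 1: the box adds
alt = (1, 0, 0, 0, 0, 0, 0)    # reinterpret just one switch: a new function

assert read_as(std, (0, 1, 1, 0)) == (0, 1, 1)             # 1 + 2 = 3
assert read_as(alt, (0, 1, 1, 0)) != read_as(std, (0, 1, 1, 0))

# The full space of tables from four input bits to three output bits:
assert 8 ** 16 == 2 ** 48      # = 281474976710656, roughly 2.8 * 10**14
```

The physical behavior is held completely fixed throughout; only the reading of switch positions and lamp states as bits varies.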
Of course, every table is a different interpretation of the box. But the point is that these tables are the computations the system performs; thus, if the table differs, what’s being computed differs. If there is no uniquely right table, then there’s no uniquely right computation. Else, we’re back at the silliness that just claims that the behavior of the box is the computation—which, as noted, simply trivializes computationalism.
Again, the point is the following. I can clearly use the box to compute f, and f is a bona fide computation. What happens when I use it to compute f? Either, I do something computational to single out that one interpretation: then, a box ought to be possible that only computes f. Or, I don’t: then, computationalism is wrong.
Hence, the challenge is exactly on point: anybody who claims there is a unique fact of the matter regarding which computation a system performs can only substantiate that claim by exhibiting some system that computes f uniquely.
I can’t make heads or tails of this. Either, you’re claiming that f and f’ aren’t really computations; then, you’re just not using ‘computation’ to mean what it does in the context of computer science. Or, you’re claiming that there’s no fact of the matter regarding what my box computes; but then, you’re conceding my point.
The relationship between a piece of paper (or some other set of symbols) and a given object (whether physical machine or abstract concept) is exactly what I mean by ‘interpretation’, so I’m not sure what you’re getting at here.
The point remains that which table you associate with the box is arbitrary; the rules for the mapping aren’t something that’s fixed by the box itself. Furthermore, different tables correspond to different computations. Consequently, which computation we consider the system to perform depends on an arbitrary choice. That’s exactly what I’ve been pointing out, and it suffices to throw computationalism overboard.
Then I guess that must be why the Wikipedia article on Dreyfus’ criticism of AI has an entire section on how much of what he said has later been vindicated by the development of AI.
That’s a nearly totally vacuous statement. Anything reacts to specific inputs with responses that depend on that input—that’s just causality. A pebble, upon being kicked, will react by performing a parabolic arc whose parameters exactly depend on those of the kick. So this does not capture the notion of interpretation in the least.
This is truly disheartening. How does a computer do ‘that thing’? By executing a computation? If so, then, as you seem to agree that interpretation is necessary to perform a computation, that computer needs to first be interpreted in the right way in order to be able to implement the computation that decodes the symbols (which are not themselves abstract, by the way, but have abstract objects as their meaning).
But no, you’re completely oblivious to this, and just continue making this claim without even so much as a token attempt to justify it. This is truly bizarre; on the one hand, you recognize the need for interpretation in order to postulate a computer to do the interpretation, on the other, you are completely oblivious to the fact that if there’s a need for interpretation, then that further computer needs to be interpreted as well. It’s like a Christian explaining the origin of the universe as ‘God did it’, and then just sort of hoping that nobody will notice that this just kicks the question up a rung, to the origin of God.
So if the origin of a computation is a further computation, then what’s the origin of that computation?
If the first one needs to be interpreted, then why not the second one?
Because that’s not possible: no symbol ever just allows for a single interpretation. An unadorned and unlabeled black circle can be interpreted in any way whatsoever—as a zero, a representation of zen awakening, hell, even as the complete works of Shakespeare—all you need is a suitable code, that is, a table taking symbols to their meaning (by which I mean, already understood symbols).
As noted, the fact that brains can interpret things just means they’re not computational. Although, perhaps your brain is, and that’s why all you’re generating is meaningless symbols?
I have been clear from the beginning that a necessary prerequisite for my argument to apply is for there to be a dependence on interpretation in that system to which it applies. In fact, it’s in the very first sentence I ever posted to this thread:
I’ve even bolded the relevant part for your convenience. It’s only because computation involves the interpretation of symbolic vessels, of physical states of a system, as abstract objects, that my argument applies. Nothing which does not involve such interpretation—such as the process of digestion, or, of course, conscious experience—is within the scope of my argument, and your claim otherwise does nothing but demonstrate that you’ve still somehow managed to not grasp the argument’s core point.
This is just false. My argument shows that, in order to implement a computation, a physical system needs to be interpreted in a certain way. That the interpretation can’t be done computationally is then an immediate consequence. For suppose that it could. Then, there exists a computation C such that C yields the interpretation of the former system as performing a certain computation. But C must itself be implemented in some physical system P. However, in order to be implemented in P, P must be interpreted as implementing C. Thus, nothing has been gained—the origin of the computation has just been kicked up the ladder one rung.
So no. I very explicitly do not assume that computers can’t do interpretation; I show that, if they could, we enter a vicious regress.
Again, the very simple way for you to show this to be wrong is just to post a single example of a computation interpreting another. You’ve tried that, and failed; and since then, you’ve given up trying, just repeating the same ill-conceived notions again and again.
I’ve been very explicit about how what you claim I assume is, in fact, derived. So really, this charge just doesn’t stick, no matter how often you repeat it. If you’ve still got questions about the argument, you’re welcome to ask.