Whether you want to call it ‘definition’ or ‘interpretation’ doesn’t matter; what matters is that an arbitrary choice must be made as to which physical signals correspond to which logical states. And that arbitrariness means a different choice can be made with equal justification.
Of course, if you already have intelligent entities to begin with. This is exactly what makes it impossible to have an intelligent entity whose existence depends on interpretation (or definition, if you will): the intelligent entity needs to pre-exist in order to agree on how to interpret certain signs (voltages, letters, you name it). Hence, if we’re trying to create an intelligent entity, we can’t start with something that depends on interpretation, as computation unavoidably does.
That’s right: a painting is a physical thing, and as such, objective. What computation a physical system implements is not objective.
Any computation can be understood as a map from binary strings to binary strings; any such map can be implemented using Boolean logic. But which map is being implemented by a given set of gates depends on how you interpret the signals—the symbols—that are being manipulated: if you interpret low voltage (whatever you choose that to be—again, this is a level of interpretation I’m giving you for free, and still the problem remains unsolvable!) as 1, and high voltage as 0, then my above system is an OR-gate; if you interpret things the other way around, it’s an AND-gate.
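To make this concrete, here’s a minimal sketch in Python. The physical rule I’m using (high output exactly when both inputs are high) is a stand-in of my own choosing, not a claim about any particular device; the point is only that one and the same fixed physics yields different truth tables under different readings:

```python
HIGH, LOW = "high", "low"

def physical_gate(a, b):
    # The physics: fixed, interpretation-free behavior (hypothetical device).
    return HIGH if (a == HIGH and b == HIGH) else LOW

# Two equally arbitrary readings of the same voltage levels:
high_is_1 = {LOW: 0, HIGH: 1}
high_is_0 = {LOW: 1, HIGH: 0}

def logical_table(decode):
    # The computation the gate performs *under a given interpretation*.
    encode = {bit: level for level, bit in decode.items()}
    return {(x, y): decode[physical_gate(encode[x], encode[y])]
            for x in (0, 1) for y in (0, 1)}

print(logical_table(high_is_1))  # {(0,0):0, (0,1):0, (1,0):0, (1,1):1} -- AND
print(logical_table(high_is_0))  # {(0,0):0, (0,1):1, (1,0):1, (1,1):1} -- OR
```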
I readily concede that you have the greater expertise when it comes to things like microchip design and testing, simulation, and programming and computer architecture in general. That’s in fact what makes these conceptual points so hard for you to grasp: if you’ve been a fish all your life, you have a harder time noticing the water than someone who’s just jumped in and felt the splash. You’re used to things wearing their interpretation on their sleeve, because that’s how they’re designed, with the human user in mind; but that interpretation is no less arbitrary for being conventional, and once one abstracts away from the human user, it’s completely clear that a computation is not fixed by the physical system used to perform it.
Exactly. But between whom is this agreement if the computation is supposed to instantiate a mind?
That doesn’t mean it can’t be changed in principle, and it’s this in-principle malleability that the whole argument rests on. There is an intended interpretation, sure; but first, for there to be an intended interpretation, you need pre-existing intelligent beings to intend that interpretation, and second, that doesn’t mean you can’t change the interpretation afterwards.
But it doesn’t interpret anything; and it doesn’t take a physical system to compute, say, the value of pi. Interpretation, however, is exactly what’s needed to make a physical system compute a particular function.
Certainly not, and that is precisely the point I’m trying to make: there is no physical difference between a system implementing one computation and implementing another. You can interpret it as having performed computation A; you can equally well interpret it as having performed computation B. I can use the system I outlined to compute the AND of its inputs; you can use it to implement the OR. Both of us will get the correct result for the function we chose.
That’s nice, but why is 0 volts a logical 0, and 5 volts a logical 1? Only because of an arbitrary choice that was made. Without such a choice, all that’s there is either 0 or 5 volts.
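The same arbitrariness, sketched at the level of stored values (the 0 V/5 V levels are idealized, and both conventions are just labels I’m picking):

```python
signals = [0, 5, 5, 0, 5]  # volts: all the physics gives us

convention_a = {0: "0", 5: "1"}  # 0 V is logical 0, 5 V is logical 1
convention_b = {0: "1", 5: "0"}  # the opposite, equally defensible choice

bits_a = "".join(convention_a[v] for v in signals)  # "01101"
bits_b = "".join(convention_b[v] for v in signals)  # "10010"

print(int(bits_a, 2))  # 13: the register 'holds' thirteen...
print(int(bits_b, 2))  # 18: ...or eighteen, depending on nothing physical
```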
What you transport is akin to a blueprint; what you get out is what happens if something is assembled according to that blueprint. The assembled thing is not identical to the blueprint—you can’t live in an architect’s design.
If the computational theory of mind is right, then what your subconscious is doing is itself just a computation; but that computation itself needs interpretation in order to be any particular computation at all. Hence, it can’t be the interpreting entity, since it would itself need to be interpreted; that is just the homunculus regress.
Also, a feedback loop doesn’t help: computation A can’t interpret computation B as computation B while computation B interprets computation A as computation A. It’s still possible to impose a different interpretation, making computation A into computation C and computation B into computation D; all you need to do is, say, flip your interpretation of high voltage from logical 1 to logical 0.
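To illustrate that last step, here’s a sketch with two interacting stages (both physical rules are hypothetical stand-ins, as before): one flip of the reading convention re-describes both computations at once:

```python
HIGH, LOW = "high", "low"

def stage_a(x, y):  # physically: high exactly when both inputs are high
    return HIGH if (x == HIGH and y == HIGH) else LOW

def stage_b(x, y):  # physically: high when at least one input is high
    return HIGH if (x == HIGH or y == HIGH) else LOW

def describe(stage, decode):
    # Name the Boolean function a stage computes under a given reading.
    encode = {bit: level for level, bit in decode.items()}
    table = tuple(decode[stage(encode[x], encode[y])]
                  for x in (0, 1) for y in (0, 1))
    return {(0, 0, 0, 1): "AND", (0, 1, 1, 1): "OR"}[table]

for decode in ({LOW: 0, HIGH: 1}, {LOW: 1, HIGH: 0}):
    print(describe(stage_a, decode), describe(stage_b, decode))
# prints: AND OR  under one convention,
#         OR AND  under the flipped one: A has become C, B has become D
```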
A gut digests without computation; a stone rolls down a hill without computation; planets orbit the sun without computation. A consciousness is just as much a physical process as any of those; and all computation ever is, is a model of such physical processes, which is as different from the processes themselves as an architect’s design is from a house.
And yet, a Turing machine’s operation, and hence any computation, is fully characterized by syntax: by rules for manipulating uninterpreted symbols. Thus, no semantics can ever come from a computation.
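For illustration, a toy Turing machine in the same sketch style; its entire specification is a lookup table over uninterpreted marks. Nothing below mentions meanings, only symbols and moves; reading the tape as, say, a unary number is our contribution:

```python
# (state, symbol) -> (symbol to write, head move, next state)
RULES = {
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("1", +1, "done"),
}

def run(tape, state="scan", head=0):
    cells = dict(enumerate(tape))
    while state != "done":
        write, move, state = RULES[(state, cells.get(head, "_"))]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

print(run("111_"))  # '1111': the successor function, *if* tapes denote unary numbers
```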
Only if you interpret it the right way, which means that it’s you who imbues the thing with semantics. On a different interpretation (flipping 1s and 0s), the program won’t be doing anything remotely related to ‘understanding a story’.
Yes, that’s exactly right! I’m using the principle of charitable interpretation, which roughly says that you should always interpret somebody’s argument in its strongest form; otherwise you weaken your own position, since you may end up attacking a weaker argument than the one actually being made. You’re doing the opposite of that: constructing a nonsensical straw man to push over with great aplomb.
No. I am saying that every Turing universal system can compute every computable function.
Which is, of course, just another interpretation of the physical processes the real machine performs.
It’s sufficient to illustrate that a Boolean network with n inputs can be interpreted as computing any function of n variables—which is all I said.
I can just as well interpret the system I described as implementing a NAND gate.
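A sketch, using the same stand-in device as above, but now reading the input wires and the output wire under independent conventions, which is itself just another arbitrary choice: all four of AND, NAND, NOR, and OR fall out of one and the same physical behavior:

```python
from itertools import product

HIGH, LOW = "high", "low"
NAMES = {(0, 0, 0, 1): "AND", (1, 1, 1, 0): "NAND",
         (1, 0, 0, 0): "NOR", (0, 1, 1, 1): "OR"}

def physical_gate(a, b):  # hypothetical: high out iff both inputs high
    return HIGH if (a == HIGH and b == HIGH) else LOW

conventions = ({LOW: 0, HIGH: 1}, {LOW: 1, HIGH: 0})
for read_in, read_out in product(conventions, repeat=2):
    encode_in = {bit: level for level, bit in read_in.items()}
    table = tuple(read_out[physical_gate(encode_in[x], encode_in[y])]
                  for x in (0, 1) for y in (0, 1))
    print(NAMES[table])
# prints AND, NAND, NOR, OR: the NAND reading is just as available as any other
```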