When we abstract away from everything except an ordering relation, then yes, all that’s left is an ordering relation. That’s true enough.
I actually wouldn’t say I know how much detail is necessary. As you’re well aware, it’s inherent in the nature of chaotic systems that small differences in initial conditions can have enormous implications.
I’m happy to presume for the sake of argument that the level of detail necessary is not on the quantum field/string level.
Yes, the most primitive structures can be instantiated in many different ways.
Sure, but as soon as we’re talking about “my interpretation”, we’ve stopped talking about the isomorphism in terms of its own workings. I (outside that structure) am using that structure, in whatever manifestation is most convenient, in order to accomplish some task. “My interpretation” of that structure is another layer of complexity on top of it, which does not belong to the structure, and is ultimately irrelevant to it. “My interpretation, plus the structure” is another, higher level of complexity outside the structure.
This is, I believe, the same analogy you used in the previous thread.
I don’t see even the slightest bit of relevance to my “interpretation” of the program.
It does what it does. It is indifferent to how I’m interpreting it. I can use the program in this way, or that way. I can “interpret” the same program in multiple ways, but that doesn’t make it a different program. An alien species can write UNIVERSE.EXE in order to calculate the esoteric problem of flrfnar they’re working on. In that program, I happen to exist, and I play a game of basketball. But it’s useful for their problem. A different species can write esotericPROBLEM.EXE to solve the problem of grgmax they’re working on. Ah. It just happens to be the same program. Then inside it, I happen to exist, and I play a game of basketball. Someone who knows my existence in sufficient detail can see the isomorphism between me here and now, and the computational processes of me inside that program when they run it. They can see “this particular computational process happens to be in that program”. That’s true even if they don’t interpret “me” in the same way I “interpret” myself.
And my interpretation, of myself? Primitive. Insufficient. Inaccurate. An extremely fuzzy map, not a perfect isomorphism.
It’s very true that the two alien species are interpreting their program differently. It just doesn’t matter to me as I play the game of basketball. I am aware that the lights of an algorithmic machine can be interpreted by people outside the machine in different ways, according to different uses.
What I am not aware of is why the lights should care.
“Who’s got it right?”
What does that even mean?
It does what it does. If there’s a machine that puts a rock into a bucket every time a sheep walks through the gate into the meadow, and takes a rock out of the bucket every time the sheep leaves through the gate, and I’m just looking at the machine itself without benefit of the meadow, then I’m probably going to guess wrong about the original use of the machine because I lack the relevant archeological evidence. But still. If I have a use for it in adding and subtracting dollars in my account, then I’d be happy to use it for that purpose. If there’s a whole universe somewhere in the device, where entire civilizations are wiped out every time I make a bank withdrawal, then accountancy would perhaps be a suboptimal use. But to encompass a universe would normally require a mite more computational power than rocks going into a bucket whenever a gate swings.
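To put the point in code (a quick sketch, every name mine, purely for illustration): the rock-and-bucket mechanics stay exactly the same whichever use they’re put to.

```python
class RockBucket:
    """The device itself: a rock goes in when the gate swings one way,
    and comes out when it swings the other. That's all it does."""
    def __init__(self):
        self.rocks = 0

    def gate_in(self):
        self.rocks += 1

    def gate_out(self):
        self.rocks -= 1

bucket = RockBucket()
bucket.gate_in(); bucket.gate_in(); bucket.gate_out()

sheep_in_meadow = bucket.rocks     # the shepherd's use of the very same count
dollars_in_account = bucket.rocks  # my accountant's use; the device doesn't care
print(sheep_in_meadow, dollars_in_account)  # 1 1
```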
That’s not what you’d usually call a computer—a computer is, generally, something formally specified. Rather, you have a control system, which implements a certain physical operation—which you then want to call a computation.
You can say that it ‘computes’ how much money to leave, or something like that, but then, say, a mechanical scale ‘computes’ the weight of something placed upon it—the number it shows is just a mechanical, causal consequence of the pressure exerted.
Generally, when we say that something computes, we have something different in mind. Take a calculator with an old-fashioned seven-segment display. What it shows is particular patterns of light—on a notion of computation such as the one you’re proposing, it would ‘compute’ light-patterns. But that’s not what we take it to do; rather, we take it to, say, add numbers. Implement a certain abstract function. It’s how that abstract function, that formal object, corresponds to the physical system that’s the question.
You’re essentially saying, there’s no formal object at all, stuff just follows its physical evolution, and sometimes we call that computation (as in your device), and sometimes we don’t (as in, say, a ball falling to the ground). I think this misses our usual notion of computation by a fair margin.
That’s an option. You can say, for instance, that the computation my example device carries out isn’t all in the device, but in the device plus its user, which then forms a new, higher-level device. But then you’re also saying that, while the device on its own just implements some kind of ‘precursor’ computation, the device-plus-user system implements some function f exclusively. But then the problem just iterates: I can interpret every device purporting to implement f as implementing any of the other functions I propose.
Likewise, you might want to say that yes, each computation needs to be interpreted to correspond to a definite computable function, but that this interpretation itself occurs computably. But then, you’re left having to specify how the interpretation is itself interpreted—since, as we surmised, every computation needs an interpreter. This leads you into an infinite regress—or to bottoming out at something non-computational.
Then how do we ever compute the sum of two numbers? No environment contains numbers. This sort of view reduces computation to just a different name for the mere physical evolution of a system.
I don’t think it’s even possible in principle, because it doesn’t have any state. It would have to have a series of canned responses from the beginning of its creation until my most recent question, and every possible branch, which may literally be infinite. For example, how would it answer:
“Now what do you have?”
It would depend on whether I previously said:
Me: “You have ten apples.”
Machine: “OK”
Me: “I take away five.”
Or, whether I said:
Me: “Last week, you said you have a cold.”
Machine: “Yes, but my symptoms are different now.”
OK, I’m happy to ignore that part above, but I have no idea what the dictionary example has to do with any of this at all. To answer your question…maybe? Linguists probably know that certain kinds of words will show up more frequently in any human language and could conceivably begin to crack the code that way.
Every structure can be. Again, this isn’t something just I am saying—it’s a famous issue in every version of structural realism.
Yes, and it’s a level that isn’t itself reducible to structure—or otherwise, you’re left with a regress of further levels of structure.
Again, this reduces computation to just a fancy name for ‘physical evolution’, and it leaves completely mysterious how anybody ever computed the sum of two numbers. When we use a calculator, we don’t take it to compute patterns of lights; we use it to compute sums. On your account, that ability is left entirely mysterious.
Furthermore, that there is a you playing basketball inside the computation is, of course, begging the question—as I’m asserting that, without interpretation, there’s no computation at all. So there wouldn’t be a you playing basketball; there’d be, for instance, voltages. You’re not voltages—although voltages may be structurally isomorphic to you; in precisely the same way that your maternal ancestors aren’t the books on your shelf, even though they may be structurally isomorphic. More structure doesn’t help—in fact, the problem becomes compounded with the addition of more structure.
But that’s exactly what I’m saying. It does what it does, and we interpret it as is convenient to us. But if we can do that, then interpretation, itself, can’t be the same kind of thing—because then, it’d itself be open to interpretation, and hence, lead to regress.
Again, this is question begging: you assume there’s a fact of the matter regarding what’s really computed, when that’s the thesis under discussion. But what happens without interpretation is just that—a rock goes into a bucket whenever a gate is swinging; you can use it to count sheep, or dollars, or maybe even simulate civilizations, with a complex enough interpretation; but absent interpretation, none of these things are there.
Most people who have been through some sort of STEM education have built a device of the sort I describe, and called it a ‘binary adder’. That is, they took it to compute a certain function of numbers, namely, addition. But that interpretation is arbitrary—as the example shows, you can supply all manner of different functions that one can claim equally well are implemented by the device.
So, either, addition is in some way preferred: I don’t see how.
Or, the device implements all of these computations: that, transferred to more complex physical objects, entails that almost everything implements almost every computation, and if computation is sufficient for minds, then you’re far more likely to be a puddle in the sun imagining itself having a discussion on the internet than you are to be the sort of thing you imagine yourself to be.
Or, the device implements none of these computations: then it becomes wholly mysterious why we ever called it an ‘adder’, or how anybody ever actually adds anything.
Or, what it computes depends on how it’s interpreted. You can use it to compute one function at the same time as I use it to compute another. We can both do that in the only way that matters: if we don’t know the answer to a certain computation, we can put it to the device and obtain it, in just the way we use computers every day.
I favor this last approach: it lets us keep the notion of computation in the way we typically use it, and does not lead to all manner of ghosts living in every thing around us. It entails that interpretation, itself, can’t be computational, but must be grounded in something that isn’t—but then, so what? There’s absolutely no reason at all to suppose that everything has to be computational; so if giving up on that is the price I pay for being able to say ‘my calculator adds numbers’, and having said something meaningful, then well, so be it.
Plus (although we haven’t gotten to that part yet), there’s all manner of other nice consequences from that point of view—such as, an explanation for the ‘hardness’ of the hard problem without having to resort to dualism, or panpsychism, or any other dubious notions. We just have to realize that our explanations are, ultimately, computational, and hence, what’s not computational—such as our faculty of interpretation—is opaque to this explanatory faculty. That’s a right bargain if you ask me!
For any finite amount of time, it would be finite. And you couldn’t ever catch it in a gotcha, because the input—the left hand side of the lookup table—is always the entire conversation up to this point. Large, OK; but demonstrably finite, and logically possible (for that matter, there wouldn’t be a logical issue with an infinite table).
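To make the shape of the thing explicit (this is obviously not a practical construction, and the entries are invented): the left-hand side of each entry is the entire conversation so far, so questions like “Now what do you have?” pose no problem.

```python
# A conversation machine as a pure lookup table. Keys are entire transcripts,
# so the same question gets different answers after different histories.
# Entries invented for illustration; a real table would be astronomically
# large but, for bounded utterance and conversation length, finite.
lookup = {
    ("You have ten apples.", "OK", "I take away five.",
     "Now what do you have?"):
        "Five apples.",
    ("Last week, you said you have a cold.",
     "Yes, but my symptoms are different now.",
     "Now what do you have?"):
        "Perhaps the flu.",
}

def respond(history):
    # history: every utterance so far, by both speakers, in order
    return lookup.get(tuple(history), "I don't understand.")

print(respond(("You have ten apples.", "OK", "I take away five.",
               "Now what do you have?")))  # Five apples.
```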
Because the dictionary is a set of relations—relations between words—which will not serve to ever fix the meanings of the words (and such a thing would be cryptographically secure—that is, there’d be no way to decipher it). The same is true for the relations between inputs and outputs of a device: they don’t serve to fix the function they implement. So either, inputs and outputs are all there is—which leaves you with the curious consequence that there actually isn’t any computation at all—or there must be something beyond these relations to ground them.
I disagree, because it would have to be “programmed” with an infinite number of possible inputs and outputs before we begin having a conversation with it. I agree that the transcript of inputs and outputs after the fact will be finite, but it has to be ready for an infinite number of possible inputs.
I think I don’t understand your definition of “computation”. If some future device could take a flash image of my brain and duplicate all the neuron connections and sensitivities, so that its responses to its environment are the same as mine, then we’re both conscious. Or, we’re neither conscious. (That machine and I would quickly diverge, but we would both be similarly (non)conscious.) Is there computation in my brain? In that machine?
(Ah, okay, we have a difference in word usage. I’ll be more clear about referring to the formal system or the physical system.)
I agree that formal systems are not real except as mechanisms humans use to evaluate the environment.
So, I think that means I agree with you. The formal system that defines a computation also has to define its inputs and outputs. When a formal system is implemented as a physical system, the mappings between the inputs, the computations, and the outputs of the formal system to the corresponding physical states of the mechanism have to be defined. We call those mappings interpretations.
A formal system cannot define its own mappings to a physical system, because the mappings are choices made by the user of the physical system. (By “user”, I mean whatever is applying the mappings of the formal system to the physical system.) That is, interpretations are not computable within the formal system being interpreted.
Is this just another version of Gödel’s incompleteness theorem? It has been decades since I read Gödel, Escher, Bach. I understood lots of it, and didn’t understand some of it. My main takeaway was that the most frequently used letters in English are ETAOINSHRDLU (unless that’s from another Hofstadter book?).
I don’t think a conversation look-up table needs to have infinite inputs. While we can construct a formal grammar of English that can generate an infinite number of sentences, no natural speaker of English has ever spoken an infinitely long sentence. We can thus exclude sentences longer than some large finite length without losing equivalence to a natural speaker.
I say there is no computation in a human brain (or any other physical object), only physical evolution. But, we can find mappings between formal systems that do computation and the physical evolution of a brain.
It might help if you gave us your complete definition of computation, as it sounds more restrictive than the ones I’m familiar with. For example, a slime mold can solve a complex travelling salesman problem. It starts off with no structure to speak of - just a single-celled blob. And yet, when you put down a map of Japanese cities, with the cities denoted by food, the slime mold will grow a network closely matching the Japanese rail system, near-optimally connecting the cities.
Clearly to me at least, this was an act of computation. Prices in a market are computed through the interactions of participants. The patterns on the shell of a conch are the result of computation. A neural network identifying a pattern called ‘cat’ is an act of computation, even if the people observing it have no idea what a cat is or what the network is doing.
Computation just requires an algorithm to operate on inputs and present outputs. The slime mold computes the travelling salesman problem by glomming over everything, then retracting itself until the minimum amount of itself is connected to all the food sources. It’s all algorithm, no initial structure. If you are a pancomputationalist, you’d say that everything is computed. Evolution is an ongoing process of computation. Emergent properties of complex systems are computed.
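Here’s a toy version of that retraction process (my own simplification, nothing like the real biology, with made-up food coordinates). It amounts to the classic reverse-delete algorithm, which leaves a minimum-length connected network:

```python
from itertools import combinations

def retract(food):
    """Toy 'slime mold': start with every pairwise tendril, then retract
    the longest ones whose removal keeps all the food sources connected."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    tendrils = {frozenset(pair): dist(*pair) for pair in combinations(food, 2)}

    def connected(active):
        seen, todo = set(), [food[0]]
        while todo:
            v = todo.pop()
            if v in seen:
                continue
            seen.add(v)
            todo += [u for edge in active for u in edge if v in edge and u != v]
        return len(seen) == len(food)

    for t in sorted(tendrils, key=tendrils.get, reverse=True):
        if connected(set(tendrils) - {t}):
            del tendrils[t]  # retract it: everything stays connected without it
    return tendrils

food = [(0, 0), (2, 0), (2, 2), (0, 3)]
print([tuple(t) for t in retract(food)])  # the surviving network
```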
It seems hard to get from there to the notion that subjective experiences are not the result of computation. My perspective is that computation is everywhere. Our existence is the result of billions of years of recursive computing. Our bodies work because of billions of acts of computation happening inside us every second. The coronavirus - and our immune response to it - are the result of computation. Computation is all we are.
I haven’t read your paper yet, or even this whole thread, but the ideas remind me of those in this essay by Hans Moravec.
One of the key themes is that interpretation (or selection, or perception) is almost everything. The interpretation maps the external states (electrical signals, chemical gradients, etc.) to internal ones (a simulation of a world, a mind, etc.). There’s one small problem: there’s an infinite supply of these interpretations. I can map those external signals to any sequence of internal events, just as a library that contains every possible sequence of letters (“AAA…A”, “AAA…B”, etc.) contains every book ever written. And so selection plays a part as well–is that somehow part of the nature of consciousness?
Wit - thanks for the article. I read slowly, so it took a while. I lack your level of terminology, but you made it decipherable. I am an engineer with a background in analog computers (F100 flight simulator) and digital computers, from the IBM 704/5 to the present.
The Antikythera device is an example of a computer being sent to an ignorant society.
Coffee cup computer - chemists have been doing it for years with beakers, graduated cylinders and balance scales.
I feel that there are some dimensions missing from the diagrams. All systems are bounded by performance parameters and their architecture. It’s true that a Turing machine, given enough time, can perform any computation; working one bit at a time, it is infinitely flexible. An IBM 705 was a variable-word-length processor that could also have accepted any computation. But the architectures of single-chip calculators were fixed. Sure, there is a class of computational tasks that can be mapped to the calculator architecture, but it is a tiny snippet of the computational universe. So the idea that all computational tasks can be mapped to all computational devices needs to be qualified by size, speed, technology and architecture.
I benefited from your discussion of modeling. I believe we agree that consciousness will not be achieved by adding machines. But before I raise a couple of questions, could you give me some definitions? Is ‘computation’ only numeric processing, or do we include analog and biological systems? What do you mean by ‘substrate’? Is that the architecture and technology of the adding machine?
In popular culture, perhaps. Technically, a computer is anything that computes. A slime mold computes. Honey bees compute. People compute. So if you are defining computer as “an electronic device specifically programmed to output a certain understood value”, then you’ve simply determined that understanding the output is required for computation by definition. The problem is that not everyone agrees with that definition.
How is a control system not a computer? The only thing unique to a control system is that the output of its computation is fed back into the inputs as negative feedback. A control system literally implements an algorithm that takes inputs and creates outputs. I don’t know how you can have a control system without computing.
Of course. A mechanical scale is an analog computer. It uses springs and weights and lever arms and such to create an output from the input. That it is deterministic and simple hardly matters. A digital adder is, to people who understand electronics, just as simple.
Or look at it another way: Let’s say I make a scale that takes two different weights, and returns the sum or difference of them. I can do that too with springs and weights. Give me enough such materials, and I can build you a universal computer based on nothing but such hardware combined together. At what point in the complexity does it stop being a noncomputing device and turn into a computing one?
Simple example: I need an algorithm that allows me to feed two numbers into a computer. The first number is the comparator, and the second is a different value. The output will be a 1 if the second number is larger than the first, or a zero if it is smaller. Would you agree that this is a computer? We have two inputs, an algorithm, and an output.
Now, I choose to build the computer with a balance scale. The two numbers are represented by small weights. I pile weight equivalent to the comparator on one side, and weight equivalent to the test value on the other. The indicator on the balance beam then points to a ‘1’ if it tips one way, and a ‘0’ if it tips the other.
If the first example implemented electronically is a computer, how is the second one not? Hell, if we want to add programmability to the mix I can allow the spring rates and order to be adjusted to give you all kinds of interesting outputs.
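For concreteness, here’s the electronic version in a few lines (a sketch; the balance scale realizes the same input-output behavior with weights instead of voltages):

```python
def comparator(reference, test):
    """Return 1 if the test value exceeds the reference, else 0,
    the same input-output behavior the balance scale exhibits."""
    return 1 if test > reference else 0

print(comparator(3, 5))  # 1: the test pan outweighs the reference pan
print(comparator(5, 3))  # 0: it doesn't
```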
This is a very narrow definition of ‘computing’, which seems to have been selected to require interpretation. It becomes a tautology: Computing is something we intentionally do to determine a certain kind of result, and therefore if we can’t interpret what a computer is doing, or there are multiple mappings of its outputs to plausible functions, it can’t be a computer because we don’t know which function it’s computing. Have I got that right?
So take your alien computer example, where we can determine inputs and outputs but have no idea what they mean. You say this means no computation is going on. But computation is a physical process. It happens whether we understand it or not. If we can’t make sense of the inputs and outputs, it just means we lack the knowledge to understand them, not that the computation isn’t happening.
Let’s say you built a computer that will take two numbers, multiply the first one by, say, Avogadro’s number, then multiply the result by a relativistic adjustment based on the second number, and give you a result. Would you say that is a computation? I will assume so. Now we send the box to a land of people who have not discovered relativity or Avogadro’s number. They have no chance of understanding what the numbers mean. Did the box stop computing? It’s still doing what it always did. It’s still behaving by the rules of its algorithm. It’s still consuming energy as information theory predicts. What has changed?
No: again, the conversation is finite, so every possible input string will be finite (bounded by a certain length), and the set of all bounded input strings is finite.
The problem is that computation isn’t a ‘duplication’ of something, but merely a model, and hence, only has the structural properties in common with the object. But those alone don’t suffice.
The relation between the computer and the computation is the same as between the books on my shelf and the set of my maternal ancestors (see my answer above): one forms a model of the other, because it supports the same structure (a linear ordering relation). But still, these things obviously fail to be duplicates of one another. In order to use the books on my shelf to draw conclusions about my ancestors, I have to interpret them appropriately.
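To spell the analogy out (a toy sketch, names invented):

```python
# Two collections sharing nothing but a linear ordering: position on the
# shelf, and order of descent. An order-preserving map is all the 'model'
# amounts to; nothing about a book duplicates an ancestor.
books = ["leftmost book", "second book", "third book"]       # shelf order
ancestors = ["mother", "grandmother", "great-grandmother"]   # descent order

model = dict(zip(books, ancestors))  # the interpretation: book -> ancestor

# The mapping preserves the ordering relation...
print(model["leftmost book"], "comes before", model["third book"])
# ...but only under this interpretation do books say anything about ancestors.
```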
Likewise, a computation, in the most common realization, is just a pattern of voltages. These are then interpreted as whatever it is that’s being computed—an interpretation the device’s interface is designed to invite, but an interpretation nonetheless. That this is possible is due to the equivalence of structure; but, since structure underdetermines its domain, this also means that multiple other interpretations are possible.
So to make this concrete, I introduce my example with Alice, Bob, and Charlie, and the ways they interpret the box Alice created. Computations, formally, are functions describable via Turing machines, the lambda calculus, recursive functions, and the like; these are all equivalent, and the Church-Turing thesis holds that they exhaust the notion of computation. Moreover, we can, without loss of generality, suppose that a computable function is of the form f(n[sub]1[/sub], n[sub]2[/sub],…) = n, that is, taking tuples of natural numbers to an individual natural number output. Everything a computer ever does can be brought into this form.
Contrariwise, every object instantiating any of these functions is a computer. So take the function f[sub]A[/sub](n[sub]1[/sub], n[sub]2[/sub]) = n[sub]1[/sub] + n[sub]2[/sub], colloquially known as ‘addition’ (for simplicity, but again without loss of generality, we restrict the inputs to be between 0 and 3).
The example then consists of a box with four switches and three lamps, which Alice constructs in such a way that each set of two switches yields an input number, and the three lights light up so as to provide an output. To Alice, ‘switch up’ means ‘1’, ‘switch down’ means ‘0’, ‘light on’ means ‘1’, and ‘light off’ means ‘0’. She constructs the box such that, under this interpretation, switches and lamps map to numbers in binary so as to realize f[sub]A[/sub]. This, one generally presumes, means that she’s constructed a binary adder—a simple device which computes a particular function, namely, addition.
Importantly, what Alice does with her box is in no way different from what we do with a calculator when we presume it computes sums of numbers: we take certain switches, labeled in some way, to represent certain numbers, and take certain lamp lighting patterns to likewise represent numbers. That, we feel, is sufficient to decide that the device computes. So, computation in my example is exactly the same as the everyday notion of computation.
But then, in comes Bob. He studies the device, and decides that it implements a certain function taking tuples of natural numbers to individual natural number outputs—in other words, the device computes: it instantiates an element of the set of computable functions. However, where on Alice’s interpretation, f[sub]A[/sub](1, 2) = 3, on Bob’s, f[sub]B[/sub](1, 2) = 4. That is, Bob holds that the device instantiates a very different computation than Alice does. How does he come to that conclusion? Well, Bob interprets ‘switch up/down’ as ‘0/1’, and ‘light on/off’ as ‘0/1’—inverting Alice’s assignment. (If that’s too trivial for you, Charlie holds to yet a different interpretation, changing the order of significance of the bits.)
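Since this example carries most of the weight, here it is in code (a sketch: the function `box` stands in for Alice’s physical circuit, and the names are mine). The physical behavior is fixed once and for all; only the legends differ.

```python
def box(switches):
    """The physical device: four switches in, three lamps out. Alice wired it
    so that, under her labeling (up = 1, on = 1), it adds two 2-bit numbers."""
    a = 2 * (switches[0] == 'u') + (switches[1] == 'u')
    b = 2 * (switches[2] == 'u') + (switches[3] == 'u')
    s = a + b
    return ''.join('o' if (s >> k) & 1 else 'x' for k in (2, 1, 0))

def f_alice(n1, n2):
    enc = {0: 'dd', 1: 'du', 2: 'ud', 3: 'uu'}   # 'switch up' means 1
    lamps = box(enc[n1] + enc[n2])
    return int(lamps.replace('o', '1').replace('x', '0'), 2)  # 'on' means 1

def f_bob(n1, n2):
    enc = {0: 'uu', 1: 'ud', 2: 'du', 3: 'dd'}   # 'switch down' means 1
    lamps = box(enc[n1] + enc[n2])
    return int(lamps.replace('o', '0').replace('x', '1'), 2)  # 'off' means 1

print(f_alice(1, 2))  # 3: under Alice's interpretation, the box adds
print(f_bob(1, 2))    # 4: the same physical events, a different function
```

The same switch-flippings and lamp-lightings occur in both cases; only the reading of them differs.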
But if Alice uses the adder to compute her function in just the same way as we use ‘computation’ in everyday life, then Bob does, too; but then, what a given device computes is not inherent to the device, but depends on our interpretation of it.
Why can we do this? Well, because the different functions share the same structure: in both cases, for instance, f(ud, du) = xoo, where ‘u’ means ‘switch up’, ‘d’ means ‘switch down’, ‘x’ means ‘light off’, and ‘o’ means ‘light on’. But wait, can’t we then just say that that’s the real computation being implemented?
We could—but only at the cost of trivializing the notion of computation entirely. Because that is really just an attachment of labels to the physical states of the system; then, everything would compute, and what it computes would just be its own physical evolution. That is, ‘computation’ would just be a strange name for ‘physical evolution’, and add nothing whatever.
Moreover, with this move, how we actually ever compute something like a sum, or anything else, becomes profoundly mysterious. There are no numbers within any physical system’s evolution; that’s just a category error. And supposing that there’s just some additional part of the computation instantiated within each of Alice’s and Bob’s heads runs into the same problem: we could imagine extracting whatever it is, and augmenting the box with it; but then, the result would still be a ‘box’ of some kind, instantiating a certain structure, to be interpreted in different ways—we wouldn’t have gained anything.
Besides, our own computing systems—our calculators—don’t generally come with anything more than buttons and lights, and we take them to instantiate functions over numbers perfectly well.
But then, how does the business of interpretation work? Well, that’s where the appeal to intrinsic properties comes in. They’re the bearers of structure—the relata grounding the structural relations. That’s schematically represented by figure 1 of my paper: the flower, an element of the external world, impresses its structure—analogous to states like (ud, du, xoo) of the device, and represented as a ‘paint by numbers’ outline—upon our senses (the pattern of neuron excitations caused by the flower will have some correspondence with the flower itself, that is, stand in a structural correspondence to it). This then calls up some association—triggers other neural excitations corresponding, perhaps, to memories or affective states (emotional responses), represented by the face in outline.
Then, the intrinsic properties basically ‘color in’ the resulting picture. In figure 2, then, a different choice of intrinsic properties—a different association of colors—realizes the structure in a different way. This is how Alice’s and Bob’s interpretations differ: they are both presented with the same structural information, but ‘fill in’ this structure in a different way (this ‘filling in’ can be taken quite literally: essentially, they supply different values for x in equation 3). So in one case, we get an internal model, an experience of ‘switch up’ as ‘1’, while in the other, we obtain a different such model, experiencing ‘switch up’ as meaning ‘0’. In this way, different computations are instantiated, using the same physical mechanism.
Yes, that’s the first rung of the ladder. Now, if we suppose that the interpretation were computed within a higher-order formal system (essentially perfectly possible!), then we’d be stuck with having to interpret some physical system as implementing that higher-order formal system, and so on; or, we need something non-computational for this regress to bottom out.
There is, in fact, an analogy; I remarked on this in my response to RaftPeople. The point was noted first, I think, by William Seager in his (quite brilliant) Theories of Consciousness: basically, in mathematical logic, each set of axioms corresponds to what I call ‘structure’, and a ‘model’ then is some concrete instantiation of that structure. Think, as an example, of the natural numbers and the Peano axioms.
Gödel incompleteness then means that the axioms, themselves, don’t suffice to pin down the model—there will be models in which the Gödel sentence comes out true, and models in which it comes out false. Hence, the structure underdetermines its domain, as in the Newman objection. The choice of different intrinsic properties (see above) then essentially corresponds to choosing one model over the other—or one model, rather than the other, being realized in nature, so to speak.
This then allows one to defend materialism against the charges levied against it by thought experiments like Chalmers’ p-zombies, inverted spectra (‘How do I know that my red does not look like your green?’), or the knowledge argument. Because if the structure underdetermines its domain, and all we have access to (in our theories) is that structure, then there is no theory from which we can derive what red ‘looks like’ to you, or even that there are experiential facts at all (zombies). Hence, the ‘gap’ between mind and body is revealed to be merely one of understanding, analogous to the gap between formal system and model, and does not signal any sort of breakdown of physicalism/materialism, or any need for dualist theories.
This, to me, is the most attractive feature of my model, and I’m happy to give up the idea that everything must be computation for it.
I do give it in the paper, but yes, I hadn’t so far given it in this thread. But since I’m lazy, let me just quote myself from above:
This is a completely conventional notion of computation, I would think, but if you have any issue with it, let me know.
Yes, this is a good example of the sort of ambiguity I mean. Of course, you need to interpret the map to decide that it is a map of Japanese cities; every map, as above, needs a legend. Without this interpretation, the slime mold hasn’t computed anything such as an optimal route, or whatever, at all.
You might be familiar with the notion of a reduction in computer science. Many problems can be reduced to the one the slime mold just solved; saying that it has solved one of them to the exclusion of the others is then an act of interpretation—and without that interpretation, it hasn’t performed any computation at all.
In the above formalism, the traveling salesman problem can be coded into the input parameters, with each number representing one of the nodes, and the function then being such that it is equal to 1 if the order is the one corresponding to the shortest route, and 0 otherwise. Is it now obvious that the slime mold has computed anything with any relation to Japan at all? No, of course not; it has done so only under a specific interpretation—other interpretations are always possible.
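To make the shape of that encoding explicit, here is a brute-force sketch (the coordinates are invented, standing in for the cities):

```python
from itertools import permutations

# Invented coordinates; the numbers 0..3 name the nodes (cities).
cities = {0: (0, 0), 1: (5, 1), 2: (4, 6), 3: (1, 5)}

def tour_length(order):
    return sum(((cities[a][0] - cities[b][0]) ** 2 +
                (cities[a][1] - cities[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:] + order[:1]))

def f(*order):
    """The traveling salesman problem as a computable function: equal to 1
    if the input tuple is a permutation of the nodes giving a shortest
    round trip, and 0 otherwise."""
    if sorted(order) != sorted(cities):
        return 0
    best = min(tour_length(p) for p in permutations(cities))
    return 1 if tour_length(order) <= best + 1e-9 else 0

print(f(0, 1, 2, 3), f(0, 2, 1, 3))
```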
But the question which algorithm is implemented is exactly what’s open to interpretation—see my above answer to RitterSport, where I go over my example in somewhat more detail.
[quote=“Sam_Stone, post:34, topic:850402”]
In popular culture, perhaps. Technically, a computer is anything that computes. A slime mold computes. Honey bees compute. People compute. So if you are defining computer as “an electronic device specifically programmed to output a certain understood value”, then you’ve simply determined that understanding the output is required for computation by definition. The problem is that not everyone agrees with that definition.
[/quote]
Again, my definition is a perfectly ordinary one (see above).
Would you say that a ball rolling down a hill carries out a computation? That it ‘computes’ the lowest value of potential energy? Because then, all you’re doing is taking what everybody else calls ‘physics’ and calling it ‘computation’, and the notion becomes trivial.
So, what does it compute? Its needle may point to a number, but that number must be interpreted—there’s nothing about the symbol ‘11’ that makes it indicate the number ‘eleven’ any more than there is about ‘XI’; each could be interpreted totally differently. The numerical value attached to the symbol is mere convention. So if you’re saying that the weight is computed, then you’re opening yourself up to there being infinitely many different weights being computed.
So then you (might) say that well, there’s nothing related to number at all. There’s just the physics, and I want to call that physics ‘computation’. Then fine, you can do so; but then, you’re kinda stuck when it comes to having to explain how we use a calculator to add numbers; on your notion, it would merely compute lighting patterns on a screen.
Never; it becomes a computing device by being interpreted in the right way. Computation is not a notion inherent to any device; it’s something devices are used to do.
How do you input numbers? By pressing labeled buttons? But then, how about somebody who speaks a different language, where these labels have a different meaning? Say, the symbol ‘5’ means the number seven, in their writing system. So where’s the fact of the matter that you have put in numbers of certain relative sizes? It’s entirely in the conventions you use for what the symbols mean. Another observer could come, on exactly the same justification as yours, to the conclusion that the device outputs (the number, not the symbol) ‘one’ if the second number is smaller than the first.
Do you think they’re wrong? Or do you agree that they are just as entitled to their interpretation as you are to yours?
Both are computers in just the same way: if they are interpreted the right way.
This was just an example, not intended as a definition: here is an object we commonly claim computes a specific function; yet, we can just as well—on the same justification—claim it computes an entirely different one.
No. Computation is defined by the interpretation we attach to the physical states a system traverses. This interpretation yields abstract objects—numbers, sets, graphs, what have you—from physical properties. This interpretation is not uniquely defined by the system itself; hence, different interpretations of the same system are possible.
Perhaps take a look at the abstraction/representation account as introduced in section 2.4. Its crucial element is the representation relation R[sub]T[/sub], connecting each physical state to an abstract representation, according to the theory T (which corresponds, roughly, to my notion of structure—in fact, in a Ramsey sentence approach, each theory is given by a certain structure). To instantiate a computation, we start with an abstract object, and encode it into a physical state, which needs the inverse of R[sub]T[/sub] (see the diagram on the following page). Letting the system evolve physically according to its own dynamics then induces a corresponding formal evolution—a computation, an algorithm.
Now, I think it’s plain that different theories yield different computations. I give the concrete example in the two following diagrams—Alice’s and Bob’s theories, and the different representations, hence different computations, they lead to.
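Schematically (everything here is invented for illustration, but the shape is that of the R/A-account): a computation is decode ∘ physical evolution ∘ encode, and changing the theory T changes the computation.

```python
# R_T maps physical states to abstract ones; its inverse encodes. To compute:
# encode the abstract input, let the physics run, decode the result.
def compute(x, physics, R_T, R_T_inv):
    return R_T[physics(R_T_inv[x])]

physics = lambda v: 'high'   # toy dynamics: the line always ends up high

# Alice's theory: low voltage represents 0. Bob's: low voltage represents 1.
R_alice, R_alice_inv = {'low': 0, 'high': 1}, {0: 'low', 1: 'high'}
R_bob,   R_bob_inv   = {'low': 1, 'high': 0}, {0: 'high', 1: 'low'}

print(compute(0, physics, R_alice, R_alice_inv))  # 1: the constant-one function
print(compute(0, physics, R_bob, R_bob_inv))      # 0: same physics, constant zero
```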
Do you agree that if one accepts the R/A-account, then computation becomes a relative notion? In the first place, there needs to be some entity there capable of forming theories, abstract descriptions, of formal systems. Hence, computation isn’t intrinsic to any physical system, but a matter of how that system is regarded. Or do you disagree?
If you agree, then I’d propose, for the time being, to just accept the R/A-account (which is, after all, a perfectly reasonable notion of implementation that’s received discussion in various professional venues), and see where that leads us.
But if computation is a physical process, then there must be some physical difference between two systems according to whether they implement computation A or computation B. Otherwise, this whole thing already collapses to a dualist ontology, since the physical facts wouldn’t suffice to fix all the facts about the world—whether computation A or computation B is implemented would then be an extra-physical notion.
So then, there must be some physical test I could carry out to decide which computation is being implemented; some experiment that tells me what’s actually being computed. In consequence, the problem of deciding what the alien computer computes would be solvable. But what could such an experiment look like? Do you have any suggestion?
Again, the problem is that you already assign an interpretation to the box, and presume that this is somehow the objectively right one. But even a land of people who have discovered relativity and statistical physics would not stand any better chance of discovering which algorithm you arbitrarily designated to be ‘the right one’. Just take the box from my example—Alice uses it as an adder, and both Bob and Charlie, being perfectly well versed in the requisite electronics and binary arithmetic, interpret it as implementing completely different functions. Who’s right?
Thanks for pointing that out. I believe I’ve read the essay, but it was a long time ago, and I’m hazy on whether I agreed with everything in it. Perhaps I’ll have a re-read with the current dearth of alternative recreational offerings…
Indeed, selection does play a role in my model—in fact, in the sense of ‘natural selection’. A state of mind, as I conceive of it, is essentially an evolved thing, selected to ‘fit’ the sensory data coming from the external world. If you’ve read some of the other answers, I’ve written about an analogy to how a set of axioms underdetermines the models that realize them—how there are multiple inequivalent models possible for the Peano axioms. This is sort of where the evolutionary selection comes in.
There’s a larger point about the information content of subsets to be made here, though. A totality—like the library of Babel—can contain a vanishing amount of information, while individual elements can contain a great deal; so in a weird way, if you split something in two, you may end up with parts, each of which contains a greater amount of information than the whole they were part of. This is something I touch upon in the essay on Buddhist philosophy, neural networks, and dual-process psychology I mentioned above.
If I understand you correctly, I think I agree—that’s sort of the point I was trying to make with the coffee cup. To me, it’s structure that limits what computations can be associated to a given system; in a way, universal computers are then a kind of universal modeling clay, or Lego, which can be made to model anything that’s modelable at all. This is why I speak of ‘limited’ pancomputationalism, as opposed to the more drastic forms of Putnam and so on. But you’re also right that this neglects issues of computational complexity.
Computation is wholly substrate-independent (that’s, in fact, its greatest virtue, and in part responsible for its popularity in the philosophy of mind: a straight-up identity theory physicalism, where something like ‘pain’ just is a certain kind of nervous activity, runs into the problem that one would intuitively expect that whether something feels pain should not depend on whether it has the same kind of nerves as we do). As for a definition of computation, I will again be lazy here and quote myself:
This is what I take to be a reasonable summary of the way computation is usually understood. There’s an issue here, I suppose, with analog systems—if we have systems that actually make use of arbitrarily finely specified differences, then in principle, we could build systems exceeding Turing machines in capability (sometimes called ‘real computers’, because of the real number-precision they are able to exploit). But under the plausible assumption that there’s a limit to experimental distinguishability of real numbers, at least in practice, such devices will be equivalent to Turing machines.
Thank you for the long and detailed reply. I’m not sure I follow all of it, but I did follow this part and I think it reveals a hole in your interpretation idea.
Would that kind of dual interpretation work for some non-linear function? I haven’t checked, but intuitively, it seems like it couldn’t work. I don’t see how a computation that outputs X^2 could be confused with one that outputs X^3, for example.
And, in any case, your example doesn’t refute the idea that what’s going on in Alice and Bob’s heads is computation anyway. From what I understand, neurons are, at their core, signal adders – they sum their incoming signals, and when that sum reaches a certain threshold, they start firing.
It works in just the same way as the other example. So, say you have a device that possesses three switches, and six lamps. The input-output mapping then would be:
S | L
-----------------
ddd | xxxxxx
ddu | xxxxxo
dud | xxxoxx
duu | xxoxxo
udd | xoxxxx
udu | xooxxo
uud | oxxoxx
uuu | ooxxxo
Again, ‘u/d’ means ‘switch up/down’, ‘o/x’ means ‘light on/off’. Now, you want to claim that this instantiates the computation f(x) = x[sup]2[/sup].
For this, you stipulate that ‘switch up/down’ means ‘1/0’, and ‘light on/off’ means likewise ‘1/0’. Under this interpretation, the device is a binary squarer.
But it should be clear, by now, that I can just go and use a different interpretation. Say, I interpret ‘switch down’ to mean ‘1’, and ‘switch up’ to mean ‘0’. The above table then realizes a different function: ‘uuu’ now encodes the input 0, and its lamp pattern ‘ooxxxo’ still reads as 49, so the device computes f(x) = (7 - x)[sup]2[/sup].
That’s a perfectly well-defined, distinct computation. Or I could flip the interpretation of the lamps, too, so that an input of 0 would yield an output of 14. Or I could invert the significance of the bits, such that something like ‘oxxoxx’, read from right to left, signifies the number 9. Or I could even consider the first two switches to be a two-bit number, and the third one to signify a single-bit second number, a row like ‘udu | xooxxo’ then describing the operation f(2,1) = 25.
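Spelled out over the table above (a sketch, with the legend as an explicit parameter):

```python
# The fixed physical behavior from the table above: 3 switches -> 6 lamps.
table = {'ddd': 'xxxxxx', 'ddu': 'xxxxxo', 'dud': 'xxxoxx', 'duu': 'xxoxxo',
         'udd': 'xoxxxx', 'udu': 'xooxxo', 'uud': 'oxxoxx', 'uuu': 'ooxxxo'}

def interpret(up_means, on_means):
    """Read the very same table under a choice of legend."""
    f = {}
    for switches, lamps in table.items():
        x = int(''.join('1' if s == 'u' else '0' for s in switches), 2)
        y = int(''.join('1' if l == 'o' else '0' for l in lamps), 2)
        if up_means == 0:
            x = 7 - x     # 'switch down' now means 1
        if on_means == 0:
            y = 63 - y    # 'light off' now means 1
        f[x] = y
    return f

print(interpret(1, 1))  # the binary squarer: f(x) = x**2
print(interpret(0, 1))  # the same table: f(x) = (7 - x)**2
print(interpret(0, 0))  # flip the lamps too: now f(0) = 14
```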
There are, as you see, all manner of interpretations possible—perhaps not a (straightforward) one where the device implements the operation f(x) = x[sup]3[/sup], but it was never my contention that every computation could be implemented by any device, only that every device can be interpreted as implementing multiple computations.
Sure, that’s not what it’s intended to do. That bit comes afterwards: if what a device computes is down to how it is interpreted, then it follows that in order to compute, Alice’s and Bob’s brains must, likewise, be interpreted. If that interpretation is then itself supposed to be due to computation, we are off to infinite regress—see the diagram in section 2.6—since we would need an interpretation to fix that interpretation, another to fix that further one, and so on. Hence, to bottom out at all, interpretation must bottom out in something non-computational.
Sure. But that doesn’t make them computational. Streams of water converging in a basin to eventually let it overflow don’t compute whether the water influx exceeds a certain threshold; that’s just what physically happens. And if you claim that that’s computation, then computation is just what stuff physically does, and the notion becomes trivialized.