You can define sound as either the vibrations in the air, or the “interpretation” of the vibrations by a mind connected to an ear drum.
I have never seen the purpose of the “interpretation” definition here. I’m not saying it’s wrong. People can define words how they wish. But it is absolutely and totally wrong to say that the “interpretation” definition is the only possible one. That’s just flatly unacceptable. To avoid technicalities, I’m going to say here that computation is what computers do. No interpretation necessary.
That could be refined, but I doubt refinement is necessary at this stage. This definition being different from yours does not make it either “wrong” or “reductive” or “fancy” or whatever other label you want to throw on it.
I didn’t write the OP.
If I wanted to convince people of a computational theory of mind – which I’m going to gloss here as “If you copied me with sufficient fidelity into a functioning computer, then it would still be me in the program, regardless of whether anyone was ‘interpreting’ it” – then my posts would be roughly 100 times as long as they are now. I’m not assuming my conclusion in an argument. I’m stating my conclusion, without giving the argument, because it’s not my job to give an argument.
And I’m not trying to convince anyone of anything. I think it’s perfectly reasonable for people to reject this if they can’t see a good argument for it (and I’m not providing any argument at all, let alone a good one).
All I’m doing here is pointing out that your argument has no persuasive power for people like me. And that seems to begin with the half-definition/half-assertion that a running computer program with no one interpreting it is somehow not “computation”.
I didn’t say I was voltages.
I said I’m the computational structure of those voltages, regardless of whether the computation was being “interpreted”. But if you want to assert outright “You’re not the computational structure of those voltages”, then you need an argument for that which starts somewhere other than positing as an axiom that interpretation is necessary for all this.
I believe I’m the computational structure of those voltages, or of anything else that’s chugging along with the same structure.
“No, you aren’t” has no persuasive power as an argument against that idea. “You aren’t voltages” is not going to cut it.
The purpose is to understand how a computer—a physical system—relates to numbers (sets, graphs)—abstract objects. How, in other words, your calculator calculates sums, rather than merely spitting out patterns of lights.
‘Fleurgling is what fleurglers do’. Do you now know what fleurgling is?
I don’t see how it’s a definition at all—you already need to know which systems are computers in order to know what computation is. But that’s just the same problem, stated the other way around. Is a stone rolling down a hill a computer? Is an orrery? Sam Stone’s slime mold? And if any of these is, then what is it that makes them into computers?
This was in response to:
So, do you think there’s a way, physically, to tell whether, for any given device, there’s a you in there playing basketball? If so, then how do you tell, physically, whether my box implements addition, or another program?
I’m neither defining nor asserting; I’ve given an example, in my paper, of how a certain device can, on exactly equivalent grounds, be used to compute different functions. That’s section 2.2 (I can reproduce it here, but I don’t think this would do much more than to blow up the length of my contributions even more).
Again, I’m not posing an axiom; I’m giving an example where exactly that happens—the same structure is interpreted as different computations.
Definitions can be practical rather than formal. I don’t necessarily have to define green as a certain wavelength of light. I can point at a green thing and say “That’s green” when there is common perceptual apparatus at work. I’m pointing at a machine – which, in point of fact, you absolutely do know the basic workings of – and I’m saying that it does computations. That’s not a formal definition, no. Rather it’s a way of establishing that there is an area of common knowledge here, and then skipping that part because it’s tedious and unnecessary.
You’re saying that doesn’t work for you. Well.
I’m not going to get into the theoretical idea of Turing machines, then the physical apparatus of different types of computers or potential computers (vacuum tube, transistor, neuron, slime mold, Chinese Room), and then, after all that is hashed out, suddenly think that sound must necessarily be defined to have an ear drum, rather than be defined as vibrations in the air. I think there is ~0% chance that we would end up in a place different from where we are standing right now if I went down that path. Not worth it. It’s basically the same reason I don’t want to start an OP myself on the computational theory of mind. Too much effort. (Well, that and also that I’m not totally convinced of it. I just lean that way.)
I don’t blame you if you don’t want to put in the effort, either. Some chasms aren’t worth the effort in crossing.
You look at the network of neurons and their configurations, their firings, at whatever necessary level of physical detail (which I don’t know but more on this below), then you look at the network of voltage changes and see whether there’s an exact correspondence in one physical pattern to the other physical pattern, including especially the evolution of that pattern.
Do you have some physical idea in mind for “implements addition”? Is there some Platonic physical form for “implements addition”?
Or is that an interpretation of what a physical machine is doing?
I have some physical idea in mind for how this body works. Not with any degree of precision, but pretty sure this thing is physical. And I think if a machine in this world mimics those processes with sufficient precision, then that machine has me “inside it”, so to speak. The machine is, or contains, another version of me. But in that case, you’re looking at one physical pattern and seeing a perfectly matching physical pattern someplace else. I say this is “computational” because a computer seems like the easiest way to implement another machine that plays out my physical pattern. By computer, I mean the sort of device you’re using right now. I don’t, at present, see the purpose of any more rigorous definition. You know what one of these things is.
What’s the original physical pattern of “implements addition” that we’re trying to emulate here? What would that even mean?
I don’t know why you repeat this. I’ve never denied this.
I even gave my own hypothetical example of the same thing, with different alien worlds writing the same program and interpreting it different ways. I have never once denied this can happen, so why the repeated insistence that this can happen? What I don’t bother with is calling it different “programs” just because there are multiple functions the same box can be used to calculate. If I use a hammer to build a chair, or instead build a table, I don’t use a different name for it based on what I’m using it to build. If a box can be profitably used for multiple functions, then why call each different function a different “program” when it’s the same box undergoing the same physical processes? I continue to see no purpose in that.
You’re posing as an axiom that the interpretation matters.
You seem confused by what computation could possibly be, if it’s not the interpretation. This strikes me as being confused by what sound could possibly be, if a tree falls in an empty forest with no one to hear. The vibrations in the air are still there, even if no ear drums are around. Sound can be defined in one way, or the other. Now, I’m not interested in working out the exact definitions of sound waves in the air, nor the physical components of a computer. But I can point to history. At a certain point, historically, there were no devices that could perform computations at anywhere near the speed that might make them suitable for creating an environment for advanced cognition. Then suddenly, devices of appropriate speed came into existence.
Maybe if I worked more directly in computation, I’d see the advantage of your definitions. But I don’t, and I don’t. And so this supposed necessity of interpretation continues to strike me as strictly superfluous.
One reason the topic of this thread is important is if we want to try to create consciousness, especially with computers.
To solve that problem, you need to know what the critical elements are that would get us there. Some think that if we run a very specific computation, then we can create consciousness; we just need to figure out what that very specific computation is.
The counter to that is the topic of this thread, that any mapping from input to output can have multiple interpretations. For example, let’s say I work for 20 years on something that I think is the consciousness program and proclaim I’ve succeeded. I send my program out to others to have them confirm it.
Tester #1 fires up the program and links the output to a speaker as I’ve instructed, and he confirms my belief that I’ve succeeded.
Tester #2 didn’t read the manual and links the output to his monitor and sends me the following email:
“I’ve started your program which appears to be an exact copy of Windows 10, fully functional but with the Live Tiles disabled (thankfully)…but I don’t see where your consciousness program is???”
If we can’t “force” our running program to be the consciousness program (vs the Windows 10 program) then either consciousness is a trivial state found in many many programs (that we would never consider to be conscious), or it relies on something other than the program alone.
But what if there’s not an area of common knowledge? You point at something and say, that’s a computer. I point at it and say, no, it’s a fleurgler. Do we just agree to disagree?
You claim the brain computes the mind. But if all you can say about computation is that ‘you know it when you see it’, you haven’t really made any contentful claim at all; you’re saying there’s this thing, called computation, which you just sorta know what it is, but can’t tell me how you know, and that’s what provides the grounding for consciousness.
I did put in that effort; I wrote a rather lengthy paper about it.
Except that the example shows that this isn’t going to suffice. Alice, Bob, and Charlie look at the box in whatever level of physical detail, and find a correspondence to a certain computation—but they disagree what that computation is. So, spinning that further, Alice might consider the device to include you playing basketball (with ‘you playing basketball’ being analogous to ‘computing the sum of two inputs’), while Bob doesn’t—perhaps instead concluding the device to be structurally isomorphic to you playing chess. So either, the device implements both these computations at the same time—in which case, we’re faced with rampant pancomputationalism, and all of our beliefs about what sort of creatures we are are going to come out false with overwhelming likelihood—or, there’s no fact of the matter regarding what computation is implemented absent an interpretation being imposed.
No. Addition is a function on natural numbers; these need not have any independent reality (at least, not if you believe computation is just an interpretational matter; things actually get more complicated if you want to claim that physical stuff has some inherent connection to abstract quantities, as anybody who wishes to argue that my calculator calculates, period, will have to).
Yes. Sure. Lots of problems go away if we just don’t consider them in depth. The world is entirely Newtonian, provided we don’t insist on looking at it at high enough magnification. Newtonian mechanics works well enough for all practical purposes, why would we need any of that quantum stuff? You know what a rock is, it has a perfectly defined trajectory, and that’s that.
The thing is that we routinely believe we’re computing the sum of two numbers. Do you, or don’t you think that when you push the button labeled ‘5’, followed by ‘+’, followed by ‘6’, followed by ‘=’ on your calculator, to get out a pattern of lights forming ‘11’, you’ve calculated that the sum of five and six is eleven? If you don’t, do you at least agree that pretty much everybody else does?
The problem is that you’re claiming that there’s a definite fact of the matter regarding whether that computation has you playing basketball inside of it. But that’s exactly the sort of question that will come out relative to interpretation, if you take the lesson from my example seriously. Under one interpretation, the device I propose computes the sum of two numbers; under another, it computes Bob’s function. Under one interpretation, a device might compute you playing basketball; under another, it may compute you skipping across a lush, green meadow.
Perhaps we need to come at this from the other end. Take the abstraction/representation account of computation (section 2.4). What a system computes, there, depends on the theory T used to furnish the representation relation R[sub]T[/sub]. Different theories will yield different computations. Do you agree that, if one accepts the A/R account, that’s what it leads to?
Well, because it’s the way we use ‘program’ or ‘computation’ in everyday life. We say that a calculator computes sums; then, in the same sense of ‘computes’, the box computes sums, but also, Bob’s or Charlie’s function.
Again, no, I’m not. I’m saying that (a) a computation is given by a function from n-tuples of natural numbers to individual natural numbers that’s representable via a Turing machine (or lambda calculus, or recursive functions…), and that (b) different users can use the same device to implement different members of the set of such functions, to conclude that (c) different users can use the same physical system to perform different computations, which hence means that (d) what computation a system performs is not an objective property of that system, but is in some way due to how the user uses it. That way is merely what I’m calling ‘interpretation’; if you somehow take offence with the wording, feel free to suggest an alternative.
Now, (a) is just a standard way of elaborating the notion of ‘computation’ I take as a given, (b) is what’s borne out by my example; (c) and (d) then follow immediately. Hence, I don’t assume anything about interpretation; interpretation is just what I’m calling whatever process happens within a device’s user to realize those different functions. I use the name because of the analogy (equivalence, actually) of interpreting different signs as having different meanings—a German will interpret ‘gift’ as something dangerous, an American as something welcome, for example.
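To make (b) concrete, here is a minimal toy rendering of the idea in code. The particular encodings and bit-widths are my own illustrative choices, not the paper’s exact construction: one fixed physical device (a lookup from switch states to lamp states), two users with complementary conventions, two different functions.

```python
# One immutable physical device: 4 switches in, 3 lamps out.
# (Built, for convenience, so that under Alice's encoding it adds.)

def lamps(switches):
    """The physics: maps a 4-tuple of switch states (True = up) to 3 lamp states."""
    # Alice's encoding: up = 1, leftmost bit most significant.
    x = (switches[0] << 1) | switches[1]
    y = (switches[2] << 1) | switches[3]
    s = x + y  # 0..6, fits in 3 lamps
    return (bool((s >> 2) & 1), bool((s >> 1) & 1), bool(s & 1))

def alice(x, y):
    """Alice's convention: switch up = 1, lamp lit = 1."""
    sw = (bool((x >> 1) & 1), bool(x & 1), bool((y >> 1) & 1), bool(y & 1))
    out = lamps(sw)
    return (out[0] << 2) | (out[1] << 1) | out[2]

def bob(x, y):
    """Bob's convention: switch up = 0, lamp lit = 0 -- the same device, bit-flipped."""
    cx, cy = 3 - x, 3 - y  # complement the 2-bit input patterns
    sw = (bool((cx >> 1) & 1), bool(cx & 1), bool((cy >> 1) & 1), bool(cy & 1))
    out = lamps(sw)
    lit = (out[0] << 2) | (out[1] << 1) | out[2]
    return 7 - lit  # complement the 3-bit lamp pattern
```

Both users flip the same switches and watch the same lamps, yet Alice computes x + y while Bob, on exactly the same grounds, computes x + y + 1.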
I don’t see where. I’ve given a concrete, and quite standard, definition of what computation is; if you take issue with that, I’d appreciate you clarifying where.
For that example to work, you need to spin it further. Say, something makes a certain noise—as in, emits a certain frequency. When that frequency impinges on my eardrum, I hear it as middle C; you, however, hear an F sharp. Which one of us is right? There’s the same physical substrate, the same vibration, but different tones to be heard. I claim there’s no fact of the matter regarding who is right: what tone is heard, if this were the setup, is then simply not a property of the vibrational frequency alone.
You, however, claim that (in some way) there are definite things to be said about the tone. No matter how we hear it, you say, there must be some objective matter of fact to it—somebody either plays basketball inside of the noise, or not, so to speak. But there’s simply no reason to believe this. The tone is, in this setup, just a subjective interpretation of a certain vibrational frequency. And, of course, it won’t do to just say, well, then that frequency is just all there’s to it, then; because in that case, how we hear any tone at all is just left open to mystery (how our calculators calculate any sums at all, is left a mystery).
Now, of course, you could just say that well, some wiring of our brains must be different to make us both hear different tones. That’s where the analogy breaks down, as nobody is claiming that our minds are ‘vibrational’, or made of noises. If there were a claim that we have to hear our thoughts in order to have them, the analogy would work—analogous to the claim that whatever our minds do, must be computational. But as what minds do isn’t made of noises, there’s no problem in you and me hearing the same noise differently. Similarly, if what minds do isn’t made of computations, there also is no problem in you and me considering the same device to implement different computations.
Then tell me, how does your calculator compute the sum of two numbers? How is what it displays on its screen related to a number, rather than just being a pattern of lights? This is where the interpretation comes in—in order for your calculator to yield the value ‘eleven’, you must interpret the pattern ‘11’ as signifying, standing for, that number. Without that interpretation, there simply is no computation—in the sense of, as above, function from natural number-tuples to natural numbers—whatever being implemented by the calculator. That solves the problem of computation—but only by effectively declaring that there ain’t no such thing.
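A minimal illustration of that interpretive step, one level down: one and the same bit pattern, read under two equally standard conventions, stands for two different numbers. (The conventions here, unsigned versus two’s complement, are just illustrative.)

```python
# One fixed physical pattern of eight 'high'/'low' states:
raw = bytes([0b11111011])

# The same pattern, under two standard reading conventions:
as_unsigned = int.from_bytes(raw, 'big', signed=False)  # 251
as_signed   = int.from_bytes(raw, 'big', signed=True)   # -5
```

Nothing about the byte itself picks out 251 rather than -5; that choice lives entirely in the reading convention.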
(An unimportant tangent (because it doesn’t really change the essence of what you just said), but fun and interesting stuff)
It’s a lot more complex than that, and new discoveries pretty regularly, some examples:
Dendrites perform nonlinear filtering on the incoming signals, with local spikes forward and backward before forwarding the signal to the soma. Some axons also have localized spiking, sometimes with backwards signals.
Glial cells surround and manage the synapse, detecting and transmitting both neural and glial transmitters.
Glial cells themselves have intracellular (localized/compartmentalized) and intercellular calcium wave spikes, and there is some research looking into the possibility that glial cells are using reservoir computing/liquid state machine techniques for feature extraction from the signals of the thousands of neurons each cell covers/monitors/surrounds/connects to.
Reservoir computing is an interesting technique that has significant efficiency advantages over something like a deep neural network, because the training only happens on a single layer (the output).
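As a rough sketch of what “training only the output layer” means, here is a toy echo-state-style network in plain Python. The sizes, weight scales, and task are arbitrary choices for illustration; the point is that the recurrent and input weights are fixed at random and never touched, and only the linear readout is adjusted.

```python
import math
import random

random.seed(0)
N = 20  # reservoir size (arbitrary)

# Fixed random input and recurrent weights -- these are never trained.
W_in = [random.uniform(-1.0, 1.0) for _ in range(N)]
W = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]

def step(state, u):
    """One reservoir update: a nonlinear mix of the previous state and the input."""
    return [math.tanh(sum(W[i][j] * state[j] for j in range(N)) + W_in[i] * u)
            for i in range(N)]

# Drive the reservoir with a signal; the task is one-step-ahead prediction.
inputs = [math.sin(0.3 * t) for t in range(200)]
targets = inputs[1:]
states, s = [], [0.0] * N
for u in inputs[:-1]:
    s = step(s, u)
    states.append(s)

def readout(state, w):
    return sum(wi * xi for wi, xi in zip(w, state))

def loss(w):
    return sum((readout(st, w) - y) ** 2
               for st, y in zip(states, targets)) / len(states)

# Train ONLY the linear readout weights (simple LMS updates here;
# in practice one would typically use ridge regression).
w_out = [0.0] * N
before = loss(w_out)
for _ in range(100):
    for st, y in zip(states, targets):
        err = readout(st, w_out) - y
        for i in range(N):
            w_out[i] -= 0.01 * err * st[i]
after = loss(w_out)  # should end up well below `before`
```

All of the “depth” lives in the fixed random dynamics; learning is reduced to fitting one linear map, which is where the efficiency advantage comes from.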
I rather think that if we could create a non-human consciousness, a large number of human beings would deny it regardless of the strength of the arguments for its consciousness.
And you’re putting in yet more effort here, with this thread.
I don’t know what other people believe on this topic.
If I had a computer program that I fired up for Bob’s function, and my neighbor double-clicked the exact same app on the desktop to use it for Charlie’s function, do you believe my neighbor would say he was using a different program? Seriously? Do you believe anybody would say that? We say a calculator calculates because there’s overwhelming contextual information for what we use it for. If I used it to hammer a nail, I’d probably still call it a calculator. You’re pointing out that that desktop app (or any program) can be interpreted for different purposes, but your linguistic claim here is that it becomes a different “program” the moment my neighbor clicks on it for a different purpose.
As a matter of plain language, people do not speak that way.
I’ll need more careful reading to continue any further. That won’t be soon.
The problem of Alice et al. was addressed by De Morgan. It is common for engineers to use AND and OR circuits interchangeably (a negative-logic OR is a positive-logic AND).
It saves on inverters.
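For anyone following along, the identity being traded on is just De Morgan’s law, easily checked by brute force:

```python
# De Morgan: an AND gate with inverted inputs/output acts as an OR gate,
# and vice versa -- which is why the same hardware can wear either label.
for a in (False, True):
    for b in (False, True):
        # NAND read as a negative-logic OR:
        assert (not (a and b)) == ((not a) or (not b))
        # NOR read as a negative-logic AND:
        assert (not (a or b)) == ((not a) and (not b))
```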
The distinctions you have made regarding computation are interesting. I resisted the idea at first but came to agree that the slime mold and analog computer are not involved in computation. Consider the slime mold and the analog flight simulator:
The simulation takes off and flies out a few miles. At that point the illusion of flight is created by the phase relationships of a gillion 400 cps sine waves. The instantaneous state of the physical model is reached experientially. There was no program. The pilot is just another servo. The position of the servos and resolvers create a map of an airplane and its environment, just as the slime mold maps the cities of Japan. Both processes are experiential and in both cases a complete physical model exists. Even though the flight simulator is an analog computer, no numerical calculations have been performed.
Computer programming is an exercise in logistics not engineering. The programmer organizes a list of tasks and the computer will tirelessly plod through them. The computer is a zombie that has borrowed a brain from the programmer. So, when a digital flight simulator takes off and flies a few miles it creates the same illusion of flight but no physical model. The servos and slime are replaced by bits in the zombie brain. Only computation has occurred. No maps exists.
So, as a beginning concept, consciousness could only occur in the analog computer or the slime mold but not in the adding machine because the adding machine does not create a map that could be interpreted by a consciousness component.
I say this and then maybe I immediately figure out what you’re driving at.
Let’s say I get a super-duper scan machine, and I scan myself down to sufficiently precise level using my own personal patented coding mechanism, and throw the information into the old x86. And I look at the result, and it’s me playing basketball. Or at least, I think so.
But my neighbor actually took a scan of me yesterday, using his own machine and his own coding mechanism, and he runs the program and it was me playing soccer yesterday. Yet in a remarkable coincidence, the data file that he created, from his own proprietary coding procedure, and its subsequent evolution of 1’s and 0’s in the computer system is exactly the same as mine. Yet when he runs it, BOOM, soccer yesterday on his machine.
And hey wait, a cousin of mine took a scan the day before yesterday (badminton), using her own proprietary coding mechanism with her scan machine, and when she “interprets” what she’s looking at, it’s me flailing wildly and missing the birdie repeatedly. Yet in a coincidence that this universe has never seen before, and will never see again, the data file her scan created is the same as the other two, as is the evolution of that system as the program runs, but she’s looking at me playing badminton day before yesterday.
Is there a normal direct link to this paper or a copy on the arxiv? That academia.edu link for some reason asks me to log into Google.
So I don’t yet have much to contribute, except to state that when I want to talk about a formal abstract computation, a typical thing I might do is to start out by defining a set of elementary operations, which will typically already include addition and multiplication. (The numbers at this stage may be genuine real numbers with infinite precision and everything; how one deals with that is another topic.) So there is no philosophical issue, since you by definition know when a particular result appears. You can then prove theorems about the power of your computational techniques. If you do not do this, the whole exercise becomes mathematically trivial, since from that point of view there is nothing to do except merely “interpret” the inputs when you want to know what the answer is. (ETA: of course, for physical systems we need to get into how organized or disorganized they are and related physics, which quickly forms a separate discipline.)

Consciousness is extra problematic with this mathematical view, because we cannot (at least definitely not straightforwardly) single out a function that needs to be formally computed in order for consciousness to occur, though it is pretty clear what people mean when they try to reverse-engineer the human brain and come up with formal properties that neurons etc. should satisfy. But about that, I would not 100% vouch for any human being truly conscious, in a sufficiently strict sense of the word. Which explains why some computerized neural-network voodoo is very slowly but increasingly able to create “realistic”-appearing text, music, etc.; in a certain sense the bar is pretty low when all you need to do is imitate what some extant neural networks are already doing.
I want to cut this down to the basics, and see if I’m understanding the argument.
You are saying that a computer is something that runs an algorithm for a known reason. That without knowing the purpose of a computation, it’s not a computation? Or perhaps you don’t need to know the output, but you need to understand what the algorithm is and what it’s trying to do and the context it’s doing it in before you can say that ‘computing’ is happening?
The example of an alien box that was clearly designed to do something, but since we don’t understand its purpose and try as we might we can’t understand how the inputs relate to the output, then it’s fair to say that, at least for us, the device is not ‘computing’, but if the aliens came back then for them it would be computing, because whether something is computing or not depends on the context? In other words, the process of ‘computing’ exists only in the context of understanding what it is computing? Is that a fair statement?
If so, I completely disagree. Computing is a physical act. It exists or not, independent of who is looking at it. ‘Computing’ is a fundamental aspect of information theory and complexity theory. Complex systems can be thought of as computers. In fact, one of the solutions to Maxwell’s Demon can be found by calculating how many bits of information the demon would require to know when to open its little door, and how much energy that would take. If you do that, you find that the demon can’t violate the laws of thermodynamics after all. But this formulation treats ‘computing’ as a very basic thing - almost a fundamental property. Decision-making by anything processing data against a procedure is ‘computing’, whether anyone understands it or even observes it or not.
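The thermodynamic bookkeeping mentioned here is usually cashed out via Landauer’s principle: erasing one bit of information costs at least k_B·T·ln 2 of energy, which is what ultimately stops the demon. A quick back-of-the-envelope check at room temperature:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)
T = 300.0           # room temperature, in kelvin

# Landauer bound: minimum energy dissipated to erase one bit.
e_min = k_B * T * math.log(2)
print(f"{e_min:.3e} J")  # roughly 2.9e-21 J per bit
```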
In the extreme information theory model, everything is information, and all structures in the universe are the result of computation, from the quantum level on up. Complex systems made up of complex systems, ad infinitum, starting with calculations going on in the quantum regime that produce particles.
Then you have the problem of ‘emergence’. We see emergent properties all around us. They must have been created through a process of computation at some level. Ants maintain a fairly exact, fixed temperature in their breeding rooms through an emergent process. So do Honey Bees. A bird flock is a complex structure with emergent patterns created through the iteration of simple rules - an algorithm. The shape of the bird flock can be said to be computed, and we are good enough at understanding the algorithms that we can re-create them easily in computer simulations.
Maybe I’m just thinking too concretely, or maybe it’s a definition problem. But I just can’t see how something can be said to be not computing because I don’t understand what it’s doing, but the minute I do suddenly the thing I was looking at IS computing, even though nothing about it changed. It still ran its algorithms. It still consumes energy as information theory predicts. So that makes no sense to me. But then, I think ‘computing’ is a physical process just like entropy or things falling in a gravity field, and an observer is not needed at all for any of it.
It becomes a problem when you try to figure out how to create consciousness with a computer, because the first assumption is typically that a specific set of transformations to internal state due to input would create consciousness, and we just need to figure out what those transformations are. But because the same transformations can represent different things to different observers, we are required to also have a specific interpreter for our system to be considered conscious; and that interpreter is external to the system, so the system itself isn’t conscious.
You are assuming that consciousness must be designed, or that we must know how to create it or what steps are required to create it.
But if consciousness is an emergent property of a sufficiently complex system of neurons and other structures, why does that matter? It’s still a result of computation, just as a neural network matching letters or faces is an act of computation.
My personal opinion is that if we ever get a ‘conscious’ computer, the consciousness will emerge through no design of our own. And we probably wouldn’t even recognize it as such. It will be an iterative, evolutionary process that we don’t understand.
Backing up a little bit:
This issue stems from the debate about whether consciousness can be created on a computer or not (i.e. does it depend only on computation). Can we duplicate the conscious experience we have, where we are aware of our own existence etc., on a computer?
To answer that question forces an analysis of what is really happening with computation, which exposes this issue that transformations of numbers can be interpreted in many different ways; the transformations themselves aren’t enough to identify which abstract function is being performed (because it can be one of many).
Which leads to the question:
how do we link the function of consciousness (if it is just a function) to a specific set of transformations?
HMHW’s answer: we can’t because it’s not just a function (i.e. not just computation), it requires more than just the transformation of numbers
Regardless of how you arrive at it, you run into the same problem, that any program running on a computer can be said to be computing any one of a number of different functions (just like HMHW’s simple example box/circuit).
You are left with two options:
1 - Every function that the particular program can be interpreted as computing is equally conscious
2 - None of them are conscious
Consciousness is required to assign meaning to computation. Computations don’t have any intrinsic meaning apart from what is assigned to them by consciousness.
I wouldn’t be among those, by the way; a key element of the theory is to present a mechanism (the von Neumann process) that’s both necessary and sufficient for conscious experience.
But in practice, I’m completely happy with purely behavioral evidence.
Yes; because I happen to think it’s an important, or at the very least interesting, question—in part exactly because of the ethical implications of whether we judge something to be conscious.
Well, do you know what you believe yourself? As a reminder, the question was:
Of course, because what we take a program to be is defined by what we use it to compute. Think about it in this way: you have a certain question that you want to know the answer to. You fire up a program, enter the question, and out pops the answer. That program, then, will be a program capable of answering that sort of question.
Alice and Bob pose different kinds of question to the device, and get different answers out of it. Alice asks questions about sums, Bob asks questions about the value of his function for certain inputs. Both use exactly the same interface, in exactly the same way—i. e. there’s no ‘compute Alice’s/Bob’s function’ switch.
Hence, Alice claims the device to implement the program for addition—in exactly the way we usually take a calculator to implement a program for addition—while Bob, on exactly the same justification, claims that it executes an entirely different program.
So do you, or don’t you, believe that a calculator computes sums? Do you think that’s just a conventional assignment, merely a name? If so, how did that name come about?
Of course. Every time somebody has built a device like Alice’s, which is a common high-school exercise, and called it an ‘adder’, they spoke exactly that way. Hell, any time somebody calls something an ‘and’- rather than an ‘or’-gate, they speak that way.
It depends on what you mean by ‘interpreting’ here. If the three of you each look at the computer at the same time while it’s running the program, perhaps directly at the voltage patterns being manipulated, and each consider it to be you engaging in one of these three activities, then yes, that’s essentially the same as my example, only with ‘you playing basketball’, ‘you playing football’, and ‘you playing badminton’ substituted for Alice’s, Bob’s, and Charlie’s functions.
The reason I’m being cagey here is that you speak of 1’s and 0’s; but of course, there are no 1’s and 0’s anywhere in a computer—that is itself already a layer of abstraction, interpreting, perhaps, ‘high’ and ‘low’ voltages in an appropriate way. There’s nothing about, say, a 5V voltage level that makes it mean ‘1’ any more than a 3V level does, so one could validly assign different 1’s and 0’s to the same voltage patterns.
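To make the point about voltages concrete, here is a small sketch (the trace and threshold are made up for illustration): the same sequence of ‘high’/‘low’ levels decodes to different numbers depending on which level you call ‘1’.

```python
# A hypothetical voltage trace, recorded as 'h' (high) and 'l' (low).
trace = ['h', 'l', 'h', 'h']

conv_a = {'h': 1, 'l': 0}   # convention A: high means 1
conv_b = {'h': 0, 'l': 1}   # convention B: high means 0

bits_a = [conv_a[v] for v in trace]   # [1, 0, 1, 1]
bits_b = [conv_b[v] for v in trace]   # [0, 1, 0, 0]

num = lambda bits: int(''.join(map(str, bits)), 2)
print(num(bits_a), num(bits_b))   # 11 4
```

Same physical trace, two different numbers; the ‘1’s and ‘0’s live in the convention, not in the volts.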
(I’m collecting my replies to some posts to try and strike a balance between length and number of posts I make; I hope that doesn’t cause confusion.)
Well, that’s only the distinction between Alice and Bob—there, a simple bit flip is used to create the different computations. But Charlie’s interpretation (which is there precisely because some commentators have complained that the bit flip seems too trivial an operation to get genuinely distinct computations, with bit-flipping merely exposing a duality between certain functions) is different: he changes not just the values assigned to the switches and lamps, but also, how these values are further translated into numbers—by reading them from right to left. Many other such distinct interpretations are possible.
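The three readings can be sketched side by side (an illustrative lamp pattern of my own choosing; Alice reads the lamps directly, Bob flips the values, Charlie flips the values and reads right to left):

```python
lamps = ['o', 'o', 'x']          # one lamp pattern from the box

val     = {'o': 1, 'x': 0}       # Alice's assignment
flipped = {'o': 0, 'x': 1}       # Bob's and Charlie's assignment

to_num = lambda seq, m: int(''.join(str(m[l]) for l in seq), 2)

alice   = to_num(lamps, val)               # reads 110 -> 6
bob     = to_num(lamps, flipped)           # reads 001 -> 1
charlie = to_num(reversed(lamps), flipped) # reads 100 -> 4

print(alice, bob, charlie)   # 6 1 4
```

Three distinct numbers from one and the same pattern of lit lamps, and nothing in the box privileges any one of the readings.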
I wouldn’t really say they don’t. Like anything, they compute if they’re interpreted as computing; so it’s perfectly fine to say that the slime mold computes the shortest path between two cities, but doing so requires interpreting, say, nodes on a graph as standing for cities.
Consciousness can also occur, in principle, in a digital machine; but not because of the computation that machine implements (but, if you want to follow my idea, because the von Neumann process must be implemented within it in some way).
You would know that, but could—in the general case—anybody who finds your device ever figure it out? If not, then it seems you’re claiming that what a machine computes depends on the intentions of the designer, which would be an uncomfortable starting point for a (naturalistic) theory of consciousness.
Of course, that there is such a function is exactly what I argue against, on the basis that absent a proper interpretation, there’s no objective fact of the matter regarding what a system computes at all, and hence, computation can’t ground our mental abilities—as it depends on one of them, namely, interpretation.
I would not quite put it that way. Consciousness—or rather, interpretation—does not assign meaning to computation; the meaning, in an appropriate sense, assigned to a given physical evolution (traversing a certain succession of states) is the computation. So Alice and Bob, looking at the same box doing the same stuff, will assign different computations to it, by essentially considering ‘switch up’ to mean ‘1’ rather than ‘0’, for instance.
No, that’s not it, I’m afraid. I don’t need to know any reason, specific context, or whatever, to decide whether something computes. Perhaps we should go through the example in detail.
See, Alice has built this box. It has, on its front, four switches (S[sub]11[/sub], S[sub]12[/sub], S[sub]21[/sub] and S[sub]22[/sub]) and three lamps (L1, L2 and L3). Imagine something like this:
Because sometimes, people think it matters what the adder looks like on the inside, here’s the wiring diagram (the A-gates are XOR, the B-gates AND, and the C-gate OR):
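For anyone who wants to play along without building the circuit, here is a sketch of that wiring in code. The gate-to-lamp assignments are my reconstruction from the annotations in the truth table further down (A-gates XOR, B-gates AND, C-gate OR), so treat the exact wiring as an assumption:

```python
def box(s11, s12, s21, s22):
    """Return (L1, L2, L3) lamp states ('o'/'x') for switch states ('u'/'d')."""
    hi = lambda s: s == 'u'
    a1 = hi(s12) != hi(s22)      # A1: XOR of the low-order switches -> L3
    b1 = hi(s12) and hi(s22)     # B1: AND of the low-order switches (carry)
    a2 = hi(s11) != hi(s21)      # A2: XOR of the high-order switches
    a3 = a2 != b1                # A3: XOR with the carry -> L2
    b2 = hi(s11) and hi(s21)     # B2: AND of the high-order switches
    b3 = a2 and b1               # B3: carry propagation
    c1 = b2 or b3                # C1: OR -> L1
    lamp = lambda v: 'o' if v else 'x'
    return lamp(c1), lamp(a3), lamp(a1)

# Under Alice's reading (u=1, d=0, lamps as a 3-bit binary number),
# the box adds its two switch pairs:
val = {'o': 1, 'x': 0}
pair = lambda n: ('u' if n & 2 else 'd', 'u' if n & 1 else 'd')
for a in range(4):
    for b in range(4):
        l1, l2, l3 = box(*pair(a), *pair(b))
        assert 4 * val[l1] + 2 * val[l2] + val[l3] == a + b
```

Of course, that the loop checks out only shows the box is *readable* as an adder, which is exactly the point at issue.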
To compute, you set up a certain pattern of switches, and observe the resulting pattern of lights. For simplicity, we may assume that there’s a time lag between flipping the switches, and the lights coming on, or that there’s another ‘go’ button that has to be pressed, so that the device always starts with all lights off, and settles into its output state at a later stage.
So say we start with this initial state, where, as before, ‘u’ means ‘switch up’, ‘d’ means ‘switch down’, ‘x’ means ‘light off’, and ‘o’ means ‘light on’:
 ____________________________
|               |            |
|   u     d     |            |
|               |  x  x  x   |
|   d     u     |            |
|_______________|____________|
After performing the computation, the final state will be:
 ____________________________
|               |            |
|   u     d     |            |
|               |  x  o  o   |
|   d     u     |            |
|_______________|____________|
Now imagine we cycle through all possible input combinations, and note down all the outputs (that’s what you’d do if you were to investigate the alien box). If you care about that sort of thing, you’re even allowed to open up the box, and measure the voltages that are being applied at each juncture in the diagram above; I’ve noted them alongside each row.
This’ll yield the following table (‘h’ meaning ‘high’, ‘l’ meaning ‘low voltage’):
S11 S12 | S21 S22 || L1 L2 L3
-----------------------------
d d | d d || x x x
d d | d u || x x o (A1 yields h --> L3 o)
d d | u d || x o x (A2 yields h, A3 yields h --> L2 o)
d d | u u || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
d u | d d || x x o (A1 yields h --> L3 o)
d u | d u || x o x (B1 yields h, A3 yields h --> L2 o)
d u | u d || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
d u | u u || o x x (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
u d | d d || x o x (A2 yields h, A3 yields h --> L2 o)
u d | d u || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
u d | u d || o x x (B2 yields h, C1 yields h --> L1 o)
u d | u u || o x o (B2 yields h, C1 yields h --> L1 o, A1 yields h --> L3 o)
u u | d d || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
u u | d u || o x x (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
u u | u d || o x o (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)
u u | u u || o o x (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
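One way to see that Bob’s alternative is not a gimmick: his flipped reading of this very table is itself a perfectly well-defined arithmetic function. A sketch (I generate the physical table from Alice’s reading, since under u=1, o=1 the lamps show the binary sum; under the flip u↔d, o↔x, the same table comes out as f(x, y) = x + y + 1):

```python
# Build the physical table from Alice's reading (u=1, d=0, o=1, x=0:
# lamps show the binary sum of the two switch pairs).
def sw(n):      # 2-bit number -> switch pair, Alice's convention
    return ('u' if n & 2 else 'd', 'u' if n & 1 else 'd')

def lamps(n):   # 3-bit number -> lamp triple, Alice's convention
    return tuple('o' if n & b else 'x' for b in (4, 2, 1))

table = {sw(a) + sw(b): lamps(a + b) for a in range(4) for b in range(4)}

# Re-read the same table under Bob's flipped convention:
flip_s = {'u': 0, 'd': 1}
flip_l = {'o': 0, 'x': 1}
for (s1, s2, s3, s4), ls in table.items():
    x = 2 * flip_s[s1] + flip_s[s2]
    y = 2 * flip_s[s3] + flip_s[s4]
    out = sum(v * flip_l[l] for v, l in zip((4, 2, 1), ls))
    assert out == x + y + 1    # Bob's function: f(x, y) = x + y + 1
```

So both readings fit the box’s physical behaviour exhaustively; the table alone doesn’t settle which function is being computed.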
Now, here you come in (or anybody else who wants to play). You hold that computing is a physical act; the above exhaustively describes what the box does, physically. So, tell me: what does the box compute? If you can’t tell, can you tell me how you’d find out?
It’s a little different from that. In fact, the conclusion here is that all information must be physically embodied—the demon needs to set some physical system into a certain state to designate some information; the re-setting of these systems is what costs energy. So it’s really the fact that the demon does not have access to an infinite reservoir of physical systems to use as memory that saves the second law.
The ‘memory’ here is a set of correlated physical systems; it’s undoing that correlation, effectively, that leads to energy dissipation, because you remove a certain constraint from the total system (every correlation amounts to the fact that, out of all the possible states of the total system, only a subset can be realized due to certain constraints).
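The quantitative version of this is Landauer’s bound: erasing (resetting) one bit of such correlated memory dissipates at least k_B·T·ln 2 of heat. A quick back-of-the-envelope at room temperature (my own illustration, not anything specific to the demon setup above):

```python
import math

# Landauer's bound: erasing one bit costs at least k_B * T * ln 2.
k_B = 1.380649e-23        # J/K, Boltzmann constant (exact, 2019 SI)
T = 300.0                 # K, roughly room temperature
E_bit = k_B * T * math.log(2)

print(f"{E_bit:.3e} J per erased bit")   # ~2.871e-21 J
```

Tiny per bit, but it is strictly greater than zero, which is what blocks the demon from resetting its memory for free.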
This sort of thing, taken to the extreme, is called ‘ontic structural realism’—essentially, the view that what I’ve been calling ‘structure’ is really all that exists, with no need for any intrinsic properties that bear this structure.
On the face of it, it’s a silly claim: what does it mean to say that, say, the relation ‘taller than’ exists when there are no concrete particulars such that one actually is taller than another? But there are some serious arguments behind it, and lots of interesting discussion. However, I don’t think that this view—attractive though it may be—will ever succeed, exactly because the challenge raised by Newman’s problem—and everything I’ve done in this thread so far is essentially just rephrasing that problem in various ways—hasn’t been met, and indeed, can’t be met. Structure simply underdetermines content.
Why? The wetness of water is emergent; no single water molecule is wet. But to say that water ‘computes’ its wetness seems frankly somewhat odd.
You can describe bird flock patterns by means of a simple algorithm; that doesn’t mean that’s what they are—that’s just the old confusion of map and territory.
If I have a three-liter bucket, and two buckets of unknown size, and I empty out those two buckets in my three-liter bucket, and find that it overflows, then that means that the sum of both the contents of my buckets exceeded three liters; but it doesn’t mean that nature carried out a computation to see whether it has to let the bucket overflow. I can use the system to perform this computation, but the system itself isn’t doing any computing, it’s just following the laws of physics.
And that’s exactly what I claim—or aim to demonstrate, by example—isn’t the case: it doesn’t run any algorithms, without anybody interpreting it as doing so. It just follows the laws of physics; everything else is an interpretational gloss layered on top of it.