Re-posting because I’m trying to understand the boundaries of computation from your perspective, wolfpup.
I believe that from HMHW’s perspective none of those are computations by themselves until there is an interpretation of the symbols and the transformation. I’m just trying to get a handle on whether you agree or disagree with that, and whether there is a difference between those scenarios.
Before I retired I put together a system that gathered manufacturing data from halfway around the world, downloaded it, processed it, and built web pages based on it. Since the pages were rebuilt whenever we got new data, three times a day in some cases, often no one ever looked at them.
I wonder if HMHW agrees that my program did computations, despite no one being there to interpret the results.
Ok, I’ve now caught up on this thread. It took a while to read everything.
Is there anyone here who actually understands the argument from HMHW well enough to re-explain it to me? Is there anyone who agrees?
I’m honestly not sure what it has to do with a brain either. A few lights and switches? At best that sounds like a small computational unit. The question of what it computes is going to depend on its input: the computer program. We’ve talked a lot about what the hardware setup might be, but what’s the actual information flowing into the brain and out of the brain? The brain has physical inputs and physical outputs. It’s programmed with a complex set of software that’s built into the system and that constantly evolves itself based on new inputs.
So we have a bunch of sensors converting external signals into inputs for the brain. Some kind of computational loop. Consciousness is just the top-level control of that loop, the program that decides the importance of the results from the rest of the units in the brain. That program itself can be more or less complex (compare a human with a cat), but it’s the same thing.
I don’t see any reason why the computation part in between the inputs and the outputs can’t be a computer program that simulates the brain’s computation. Use a video camera to gather information and feed it into a simulated cortex. It does its computations, simulated or real, and then sends signals to the output (e.g. muscles).
Why are we positing the need for some external observer? It seems superfluous to the whole thing. Layers of neural nets with inputs/outputs and some great software that has evolved over many generations is what makes a brain.
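A minimal sketch of the loop being described, with made-up component names, just to pin down the proposed architecture: sensors feed a processing stage (which could equally be a simulated cortex), and a top-level controller weighs the results and drives the outputs.

```python
# Toy sense -> compute -> act loop (all components here are stand-ins).

def read_sensors():
    # e.g. camera frames, microphone samples; here just a placeholder value
    return {"light_level": 0.8}

def process(inputs, state):
    # stands in for the lower-level computational units (real or simulated)
    state["seen_bright"] = inputs["light_level"] > 0.5
    return state

def top_level_control(state):
    # the "importance-weighing" layer the post describes
    return "look_away" if state.get("seen_bright") else "keep_looking"

state = {}
for _ in range(3):                      # a few iterations of the loop
    state = process(read_sensors(), state)
    action = top_level_control(state)   # would drive motors / muscles
```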
Rice’s theorem applies to non-trivial properties of computable functions. Non-trivial properties are those that hold for some, but not all, functions in the class, i.e. those properties that can be used to single out a given (set of) functions. The non-trivial property in my example would be something like ‘computes the sum of two input numbers between 0 and 3’.
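For concreteness, here is a minimal sketch of the standard Rice-style argument for why such a property is undecidable. Everything in it is hypothetical: `decides_sum_property` is the decider that Rice’s theorem rules out, and `run` stands for some interpreter; the only point is that if the decider existed, it would let us solve the halting problem.

```python
# Hypothetical sketch: suppose decides_sum_property(src) could tell us whether
# the program in `src` computes the sum of two inputs between 0 and 3.
# Then we could decide the halting problem, which is impossible, so no such
# decider exists. (decides_sum_property and run are made-up names; neither
# can actually be implemented.)

def build_probe(machine_src: str, machine_input: str) -> str:
    # Source of a program that first runs the given machine on its input
    # (possibly forever), and only then behaves like the 0..3 adder.
    return (
        "def probe(x, y):\n"
        f"    run({machine_src!r}, {machine_input!r})  # may never return\n"
        "    return x + y\n"
    )

def halts(machine_src: str, machine_input: str) -> bool:
    # If the machine halts, probe computes the adder (property holds);
    # if it loops forever, probe computes the empty function (property fails).
    return decides_sum_property(build_probe(machine_src, machine_input))
```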
This is true, but I don’t see that my argument depends on it. As long as you’ve got some fixed, algorithmic way of trying to decide whether a given piece of hardware implements a terminating procedure (or any particular procedure, in fact), there will be some pieces of hardware it doesn’t work on. And if you’ve got something else, then computationalism is already wrong.
Sure, but the point was that comparing the input/output behavior suffices.
Well, in principle, only after interpretation is there a well-defined output domain. I could just as easily interpret my box’s lamps to represent bits of different value, or not bits at all. Then, the codomain of the function being computed varies.
But really, the important point here is that the codomain isn’t lamp states; the output of the function isn’t something like ‘on, off, on’—that’s its physical state, and again, conflating physical and computational entities just ends up trivializing computationalism. Rather, the codomain is given by what those lamp states represent.
Of course a functional transformation leads to a different function. Provided a suitable cardinality of the domains, for any two functions f and f’ there always exists a function f’’ such that f’ = f’’ ∘ f, where ‘∘’ denotes function composition. That doesn’t make f and f’ equal; if it did, then again, all functions with the same domain and codomain would be equal. My adding 1 is just such a transformation.
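As a minimal sketch of that composition point, taking the 0-to-3 adder as f and ‘add 1 to the output’ as the transformation f’’ (the names are purely illustrative):

```python
def f(x, y):
    # the original function: sum of two inputs between 0 and 3
    return x + y

def f_double_prime(z):
    # the transformation: add 1 to the output
    return z + 1

def f_prime(x, y):
    # f' = f'' o f : related to f by composition, yet a different function
    return f_double_prime(f(x, y))

assert f(2, 3) == 5 and f_prime(2, 3) == 6   # same inputs, different outputs
```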
I appreciate that it’s a long thread, but I’ve pointed out the issue before, and we’re not gonna make any progress here if I just keep repeating myself over and over.
If we thus identify all the computations a system performs with its physical evolution, which is what all this considering different functions to be really ‘the same’ inevitably leads to, then computationalism just loses everything that makes it a distinct theory of the mind, and collapses to identity theory (not to mention that by then, we’ve long since abandoned any resemblance between the notion of ‘computation’ used in that context and the usual notion of computation).
I have no idea what ‘the lamp is the interpretation’ is supposed to mean. The lamp is just a convenient visualization of the output voltage level, because we can’t see that with the unaided eye. If you ask your research assistant for the outcome of a given computation, would you be happy with a report of the state of the output register?
The lights are just intended as a convenient visualization of the internal state. We can’t directly read ALU outputs. But for the purposes of the argument, it’s entirely irrelevant if we assume that we can. So now the output is a pattern of high and low voltages. What has been computed? Voltages? Or do you hold that arithmetic has been done? But then, how do the voltages connect with arithmetic?
I seriously don’t get what your issue is, here. I agree that the CTM holds literally that the mind is a computer. Incidentally, if there is room for non-computational aspects of the mind, then that means you’re not a proponent of CTM:
But no matter. My point is that CTM, as such, is wrong, but that the use of computational modeling in cognitive science is independent of its truth. (If, on the other hand, you’re merely saying that CTM is the currently accepted, dominant paradigm in cognitive science, then I readily agree—I already acknowledged that in the post where I claimed Putnam dismantled it, it’s just that the rest of the world is slow on the uptake; but lots of people believing something doesn’t make it right.) Just like the notions of entropy etc. didn’t need to be revised once thermodynamics was lifted from its dependence on caloric and put upon a solid foundation in statistical mechanics, computational models of cognition can still yield valuable insights even if the mind as a whole isn’t literally a computer.
Right. So suppose that somebody had proposed the CTW, the theory that the weather literally is the computation performed by the atmosphere. We can consistently reject that theory and believe that computational modeling of the weather is useful. That’s the same thing I’m doing with the CTM, plain and simple. There’s simply no reason to suppose that anything of the successes of cognitive science needs to be thrown out upon repudiating CTM; a computational model of vision won’t cease working when the researcher stops believing that the mind literally is a computation.
You’re conflating evidence and argument. The CTM is a philosophical stance; it’s a metaphysical hypothesis, and as such, not (directly) empirical (although of course, necessarily subject to revision once empirical discoveries make a metaphysics in contradiction with it more attractive). It must stand or fall on its own internal logic; and one sound argument against it is all it takes. There’s no ‘weight of arguments’ that needs to be considered, so even if the argument is ‘silly’, if it’s right, the CTM is wrong, and that’s that.
That isn’t the impression I got, say, from quotes such as this one:
There, you seem to be claiming that all the box does, by way of computation, is to produce lamp patterns. But no matter, I’ll take you at your word that what you meant is ‘it computes all the possible functions you can get via interpreting its inputs’.
But then, there remains the matter that no system ever just computes, say, the sum of two inputs. In CTM, as usually understood, the mind is a program in analogy to one that computes the sum of two numbers—in analogy to one (but not both) of my functions f and f’. Again, otherwise, the mind would just be a given neuron firing pattern—but that’s just the identity theory.
This view is also somewhat in tension with the view you articulated earlier:
On that view, my f and f’ are different computations, since there are different TMs (over the alphabet of decimal numbers) performing them.
So before I respond to it in detail, let me try and see if I get the gist of your objection. The idea, it seems to me, is that the box basically just provides a kind of engine, which does the computational work; different users can come to that engine, and use the work it performs to solve distinct problems. So you propose to identify ‘computation’ with the work the engine does, and, by extension, ‘mind’ with the same sort of computational work a brain does. Is that an accurate summary?
This is actually possible, as the brain uses electrical impulses to stimulate certain areas of the brain where different types of memory and emotional states are stored.
Emotional states are nothing more than reactions to stimuli, chemically induced by your brain functions…which are turned into electrical impulses that cause reactions in your mood and body.
The stated electrical impulses have nothing to do with man-made electricity; these are naturally occurring stimuli from your brain…for those of you who never paid attention in biology class.
That being said…
Computers and their parts are nothing more than electronic gizmos for creating and storing electrical impulses.
So the problem with this is—
It should be easy enough to store these electrical impulses, but the methods used to extract them are not sufficient at this point in time.
Extracting these impulses while a body is going through the dying stage is not something that would translate into a proper download; the data would be corrupted by the changes the body is going through while dying. Therefore any outcome from storing said impulses for later use would be catastrophic and simply unusable.
And even though science is advanced enough to store these brain impulses, it is not sufficiently advanced to understand how to create a program to differentiate the billions of different impulses the brain gives, and separate them into memories, feelings, emotions, etc. At this point it would all just be a jumble of electronic impulses.
And even at that, human science still doesn’t understand completely how the brain works.
If we look at the question of downloading or simulating consciousness, we must be able to understand how to create consciousness in the first place to be able to download or simulate specific instances.
One argument that has been put forth in the past is that consciousness/mind can be modeled/created with computation.
I believe the definition of computation includes the requirement that we understand or interpret the symbols involved. In other words, if you just take a physical system that is performing transformations, that by itself is not really what they mean by computation. Computation is the next level of abstraction on top of that where we have assigned meaning and value to the symbols and transformations involved.
So computation, I believe, is independent of the physical implementation, as long as the physical implementation can be mapped correctly to the set of symbols and transformations.
So HMHW pointed out that there is a problem:
1 - Computation really does require interpretation, because the same setup of inputs + transformations + outputs can map to multiple different valid sets of computations (it could be doing a math problem, translating Chinese, or modeling traffic flow; see the sketch after this list).
2 - If computation requires interpretation to determine which computation is actually being performed, then how can we say that our computational model running on the computer will really create consciousness? Interpreted one way it could map to something like a tornado simulation, and interpreted the right way we get consciousness. But how does the running program know which way it’s supposed to be interpreted? How does the running program know that it’s not running a tornado simulation and that we want it to be conscious?
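As a toy illustration of point 1 above (the bit encodings and the signal-plan names are all made up for the example): one fixed mapping from input bit patterns to output bit patterns can be read either as arithmetic or as a traffic-control table, depending on the interpretation brought to it.

```python
# Toy illustration: one fixed mapping from input bits to output bits,
# read in two different ways.

def box(bits):
    # the "physical" mapping: 4 input bits to 3 output bits
    a = bits[0] * 2 + bits[1]
    b = bits[2] * 2 + bits[3]
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)

def as_arithmetic(bits):
    # reading 1: a math problem, the output bits encode a sum
    out = box(bits)
    return out[0] * 4 + out[1] * 2 + out[2]

SIGNAL_PLANS = ["all red", "north-south green", "east-west green",
                "left-turn phase", "pedestrian scramble", "night flash",
                "event mode", "maintenance"]

def as_traffic_plan(bits):
    # reading 2: the same output bits now pick one of eight signal plans
    out = box(bits)
    return SIGNAL_PLANS[out[0] * 4 + out[1] * 2 + out[2]]

print(as_arithmetic((1, 0, 1, 1)))    # 5
print(as_traffic_plan((1, 0, 1, 1)))  # 'night flash': same box, same bits
```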
I don’t personally have a strong belief about any of these arguments and positions because it’s a difficult problem and there are valid pros and cons all over the place. I’m trying to learn and understand.
Having said that, I don’t see how to get around HMHW’s argument. I used to gloss over these problems and kind of blindly assume a computer could create consciousness no problem, because the brain is just transforming stuff, but the more I’m exposed to these types of debates, the more it becomes clear that it’s a difficult problem with no obvious answer.
The impulses represent the current processing; they don’t hold all of the stored information (memories etc.). You would need to extract the physical structure (synapses), the epigenetic alterations that happen in cells due to learning in order to properly model the maintenance of the synapse over time, the learned temporal sequences from within Purkinje cells, and many, many more details that would influence the simulation.
No, it couldn’t. Or, to put it more precisely, if the same mapping of inputs to outputs solved all three problem classes simultaneously, then they are all computationally equivalent by definition. None of the problems is harder than any other, or takes longer to solve, or is different in any other discernible way, because they are (computationally) all exactly the same problem. This is not, however, the kind of fortuitous coincidence one finds in real-world systems of non-trivial complexity.
This is not correct, at least in terms of what we understand about reality and how information processing works.
Imagine for a moment that you have written a very large, elaborate program that, on a fast enough computer, can create a precise real-time simulation of the Battle of Agincourt, down to the last rat digging in the French army’s supply wagon. You have tens of thousands of processes covering the actions of each of the participants, the weather, the field conditions, the horses, the flights of arrows, all in such excruciating detail that you can watch it transpire and make changes that might affect results, and even change the overall outcome.
Now go inspect the object code of the program. Over here is the central process that manages all those thousands of other processes. What, exactly, is it doing?
It is taking symbolic data and handling it in a prescribed manner, as directed by the user. It has no special “knowledge” of what those data symbols mean, and really, is only the “top-level” process in the source code hierarchy. In fact, many of the lower-level processes are significantly more sophisticated than the central process.
And when you look at the object code, its composition is uniform. All of the processes are doing fundamentally the same thing: moving data around and adjusting it in a prescribed and directed manner. There is no way to say that any one process is logically superior to another.
In other words, in terms of the computational theory of the mind, the locus of self-awareness cannot be identified, because the brain, like our computer, has an underlying uniformity to its composition. There does not appear to be a single process, or even a small cluster of core processes, to which what we know as consciousness can be ascribed.
That makes no sense. If it were true, virtually all proponents of CTM in cognitive science could be dismissed as not really proponents of CTM. To quote Fodor more fully (from the introduction to The Mind Doesn’t Work That Way):
[The computational theory of mind] is, in my view, far the best theory of cognition that we’ve got; indeed, the only one we’ve got that’s worth the bother of a serious discussion. There are facts about the mind that it accounts for and that we would be utterly at a loss to explain without it; and its central idea – that intentional processes are syntactic operations defined on mental representations – is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.
But it hadn’t occurred to me that anyone could suppose that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works.
Again, it has nothing to do with “modeling” metaphors. It should be clear enough from its definition as syntactic operations on symbolic representations that CTM refers to a literal computational paradigm as an explanatory theory of mental processes. Even in fields like computational neuroscience, where computational modeling is used extensively, the models are only useful to the extent that they can be empirically validated through psychological or biological experiments.
That’s flat-out wrong, once again. I’m conflating nothing. CTM is not some vague “metaphysical hypothesis”, it’s an explanatory theory grounded in experimental evidence. For example, evidence for the syntactic-representational view of mental imagery as opposed to the spatially displayed or depictive models.
Geez, you don’t have to take my word for it, I explicitly stated it earlier, right here: “… the ‘computation’ it’s doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent.”
I’m not sure what new sleight-of-hand you’re trying out, but I don’t understand what you mean by “alphabet of decimal numbers”. If what you’re trying to imply is that the computations are different because they produce different numerical results, this is a circular argument referring back to the very issue we’re debating about assigning semantics to the symbols, wherein I’ve already addressed multiple times the question of why the computations are self-evidently exactly the same.
The first part isn’t wrong, it’s just a strangely bizarre way of looking at it, since we rarely think of computations as general-purpose “engines” applicable to multiple classes of problem according to the semantics we assign to the symbolic outputs. We don’t think of it that way because, aside from your trivially contrived example, it doesn’t actually happen in the real world in non-trivial systems.
And I don’t really see how that connects with the second part, expressed in the last two sentences. My contention is that CTM is a valuable explicatory theory for many cognitive processes, and that the homunculus argument is a ridiculously frivolous metaphysical objection to one of the most foundational and empirically grounded theories of cognition. Quite frankly, I feel the same way here as I might when trying to discuss the details of a paper on climate change projections with someone who turns out to be a hardline denialist and holds the belief that there’s nothing to project because CO2 has absolutely no effect on climate. In fact Fodor put it extremely well, taking it for granted after a career dedicated to the study of cognition that without CTM there is no theory of cognition “that’s worth the bother of a serious discussion”.
In a circuit with state, comparing I/O behavior is impractical if not impossible. You can prove equivalence, if you can see inside, by partitioning the circuit into memory and non-memory parts.
There are two output domains - that of the ALU itself and that of the lamps.
Do you have a different calculator if the lamps are programmed to display roman numerals? Greek or Hebrew letters? You are combining two different functions here - the mapping from ALU inputs to outputs and the mapping from outputs to lamp states.
IIRC, transformations are special cases of composition and are not the same as composition in general.
Hell yes. People spend millions of bucks on machines which do the (trivial, to you) task of taking the output of a computation and making it accessible. What voltage level represents a 1 and what a 0? When can you decide what the output voltage is? Even in a chip, the output of a computation must go through a buffer to make it visible to the outside world. The mapping from output voltage to the convenient visualization is much more complex than you seem to think.
Let’s say I answer arithmetic. I run an addition on my calculator, and see an answer. Then I unplug the lamps, and enter the same inputs and function. I see nothing. Are you saying that arithmetic is not being done any more?
This is a non-trivial issue. Is a brain that has been disconnected from being able to output anything still thinking? If Hawking could no longer even blink, would he still be thinking?
Now, I might not answer arithmetic, since that is presupposing an interpretation, whereas it would be more accurate to just give the mappings of binary numbers, and even more accurate to give voltage values and a timing diagram. The interpretation of voltage levels as just 1s and 0s becomes a bit dangerous when you are talking about high speeds, low voltages and very small feature sizes.
Could you provide the definition you’re referring to here? The only definition of computational equivalence I know is the presence of a polynomially-complex reduction between two problems, but that’s not something that’s gonna help here. It’s also at variance with your earlier stance that two computations differ if the TMs that perform them differ (i. e. if their machine tables differ).
You should maybe expand your reading beyond Fodor. The simple issue is, if the mind includes aspects that aren’t computational, then it can’t be true that the mind literally is a computing system. Yet, the latter is a claim that’s accepted by most (practically all) proponents of computationalism.
I agree that that’s the claim CTM makes, but CTM is false, so why should I be beholden to that claim? You can’t seriously hold that CTM must be right, since it’s the only game in town; and if there’s thus room for CTM to be false, then it must be the case that a future theory replacing it will still have to account for CTM’s successes, or more accurately, for the successes of cognitive science achieved while holding CTM to be the dominant paradigm, just like how any science succeeding caloric theory still had to account for caloric theory’s successes. And on that future paradigm, it won’t, of course, be true that the mind literally is a computing system, and hence, it’ll have to explain these successes by the fact that non-computational systems can still be computationally modeled. Unless you want to argue that because it’s the current dogma, it must be right, you must allow for the possibility that its claim of literal identification of the mind with a computer could come out wrong.
Besides, it’s not nearly so clear-cut a case as you (by proxy of Fodor) make it out to be that CTM is ‘the only game in town’. I’ve already pointed to IIT as an example of a ‘scientifically respectable’ theory on which CTM is straightforwardly false (another one of these points you keep ‘missing’), and there are many other approaches, some of which may be compatible with CTM, but none of which are wedded to the claim that the mind is a computer—Friston’s free energy minimization and other Bayesian/predictive coding approaches, Edelman’s neural Darwinism, Baars’ global workspace, higher-order thought theory, and so on are all approaches that may be compatible with a computational brain, but that don’t really depend on it; models by Penrose/Hameroff, and Bringsjord/Zenzen, explicitly deny the possibility of a computational mind.
CTM is a hypothesis on the nature of mental states and properties—namely, that they are functional, more accurately, computationally functional, in nature. As such, it stands in conflict with theories on which mental states/properties are non-physical, or physical, but non-functional, or intrinsic, or neutral, and so on—thus, as a claim on the ontology of mental states, it’s explicitly metaphysical.
I’m merely striving to reduce the wriggle room. You did earlier on claim that a different Turing machine means it’s a different computation, so I point out that my two functions are realized by different Turing machines (which take numbers in the decimal system as input on their tape, and output numbers in the decimal system—hence, ‘over the alphabet of decimal numbers’). You now seem to posit some sort of equivalence between different Turing machines which so far seems to boil down to ‘whatever HMHW says is different, in fact isn’t’; hence, I’m trying to tease out your actual meaning there.
The thing is that I’ve provided two functions which are manifestly different computations on any formalization of computation I’m familiar with, which, however, are ‘self-evidently exactly the same’ to you. So all I’m doing is trying to get a grip on what, exactly, the word ‘computation’ means to you. So maybe start there: what, exactly, are computations, and how are they individuated? When do I know that one computation is different from another?
I won’t respond to your attempt to slander me as equivalent to a climate change denialist just because I have the gall to disagree with your favorite cognitive science paradigm.
It’s indeed impossible: Moore’s theorem implies that you can never obtain its exact functioning by mere experimentation on the box.
You have a page of English text. Is a page of Hebrew text the same, or different? Of course, as such, the question isn’t answerable: one could be a translation of the other. But in general, a difference in the alphabet makes a difference in the machine function.
My point is a different one, though. Say you have a page in an unknown alphabet, written in an unknown language: is there a single thing it can mean? That is, is there an effective procedure for deriving its meaning?
Of course, there can’t be. You can interpret it various ways, using for instance a one-time pad key to translate it into a language you’re familiar with. But which is the right meaning? There simply is no fact of the matter.
It’s exactly the same with computations. Sure, there may be an intended meaning, and likewise, an intended computation; but that can’t well be a criterion for what computation a system performs (and much less for whether a system instantiates a mind). So I propose that a system computes whatever you can use it to compute; that’s reasonably simple, and covers every case of actual computation that is performed. It’s just that it doesn’t accord with our intuition that there ought to be one thing, and one thing only, that really is computed by a system. But that intuition is the same as that there’s one thing, and one thing only, that the word ‘gift’ really means.
As a child, I always used to wonder why other people bother with foreign languages. I mean, wouldn’t they have to translate it into German in their heads to actually understand it, anyway? (Fodor seems to have had a similar intuition, hence, his invention of ‘mentalese’, a language that brains just understand.)
But of course, that’s nonsense. And it’s the same nonsense as saying that there must be one language that a device speaks, when it computes. There’s only symbols interpreted a certain way.
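To make the one-time pad analogy concrete, here is a small sketch (the messages and keys are made up): the very same ciphertext decodes to two entirely different plaintexts under two different keys, so nothing in the ciphertext alone fixes ‘the’ meaning.

```python
# Toy one-time-pad illustration: the same ciphertext "says" different things
# under different keys.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message1 = b"ATTACK AT DAWN"
key1 = bytes([7, 13, 42, 99, 5, 1, 88, 23, 54, 11, 2, 77, 30, 9])
ciphertext = xor_bytes(message1, key1)

# Choose a second key so that the very same ciphertext decodes to another message.
message2 = b"RETREAT AT TEN"
key2 = xor_bytes(ciphertext, message2)

assert xor_bytes(ciphertext, key1) == message1
assert xor_bytes(ciphertext, key2) == message2
```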
OK, so can you give me any hard-and-fast criterion on when two computations are the same, and when they differ?
I’ve been trying to say this all along. If you take my laptop example, I type FISH and FISH is displayed on the screen. This is all the system does. The very same computations that happen inside the laptop might be capable of being used to translate Chinese, or control traffic; but this does not happen. Only the word FISH appears, because of the way the laptop is designed. The other interpretations are irrelevant, because they do not get displayed.
(Unless there is by chance a word ‘FISH’ in Chinese which also coincidentally means FISH in English, or the word FISH can also be a traffic control strategy that traffic controllers instantly understand. Flow Implementation in School Holidays, perhaps?)
But this does pose a problem for the ‘simulation of consciousness’ concept. If all the neurons in the brain are busily performing calculations that are ambiguous, we can’t tell what they actually represent until we simulate the entire brain/body system and hook up all the inputs and outputs as well. If a hypothetical simulation of a brain is connected to an artificial voicebox and starts talking about FISH as expected, that suggests the simulation is working correctly; if it starts talking about traffic control or declensions in Chinese, something’s wrong.
I would note however that the human brain is very plastic and does seem to be capable of self-correction to a certain extent, so it is robust enough to accommodate a wide range of failure modes. Otherwise electroshock therapy, lobotomy or taking psychotomimetic drugs would simply randomise the data, and cause the whole system to stop working.
Simple definitions and examples with agreement between parties sets the foundation for the next level up of discussion to see where there is validity and where there are problems.
For example, when asked about gravity in the simulation, begbert2 clearly defined his position. I haven’t gone back to that point yet, but at least myself or anyone participating in the thread knows exactly where he stands and can respond accordingly.
We should have a clear definition of computation that should allow any of us to identify whether any specific example provided in this debate is considered a computation or not.
If true, then how could researchers accurately (better than chance) identify what the person was imagining just by monitoring the V1 through V3 visual processing areas and comparing to test symbol activation?
In the V1-V3 areas:
1 - Similar neural patterns for similar images, whether in perception, working memory or imagery (grating images)
2 - The image could be predicted from neural activation better than chance
I challenge you to provide a cite of any research more recent than the year 2005 that shows that the V1-V3 areas are NOT activated during mental imagery. Scientists have learned a lot since the 70’s.
Note:
From my perspective, I think it’s naive to think the brain only has one way to solve problems. I think that evolution would have naturally made efficient use of existing machinery (visual working area, auditory working area, etc.) to solve some aspects of problems while also having other approaches to solve other aspects or types of problems (e.g. symbolic, logical, hard coded circuits and pretty much every other computing/calculating mechanism that it might stumble upon).
Computation was defined by Alan Turing in the specification of his eponymous machine, which can be simply restated here, for purposes of this discussion, as a series of discrete operations on symbols, defined by a set of rules, that have the effect of deterministically transforming a set of input symbols into a set of output symbols. By extension, computation can be deemed to be performed by a black box whose internal mechanism of operation is unknown, but which is observed to perform that same deterministic mapping for all possible inputs.
Thus, a Turing machine, or an implementation of one using logic gates, which takes as input any two digits, say in the range 0 to 9, and whose output is their product, is obviously performing a computation; but a program which knows nothing about arithmetic and which implements what back in my grade-school days was called a “multiplication table”, and generates the answer by table lookup, is also doing computation. Not only is it doing computation, but according to my criterion, it is doing a computation exactly equivalent to the former, because it produces exactly the same mapping for all possible inputs.
I trust that the above clarifies what I mean by computational equivalence. I’m unaware of anything I said previously that this is at variance with.
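To make the table-lookup point concrete, here is a minimal sketch (the names and the single-digit range are just illustrative): one multiplier uses the arithmetic operator, the other only consults a precomputed table, yet they produce the same mapping for all possible inputs and so, by the criterion above, perform the same computation.

```python
# Two implementations of single-digit multiplication (0..9): one uses the
# arithmetic operator, the other only does table lookup.

def multiply_arithmetic(a, b):
    return a * b

# The "grade-school" table; building it with a * b is just a shortcut for
# writing the 100 entries out by hand.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_lookup(a, b):
    # knows nothing about arithmetic; just reads the table
    return TABLE[(a, b)]

# Same mapping for all possible inputs: computationally equivalent by the
# criterion above.
assert all(multiply_arithmetic(a, b) == multiply_lookup(a, b)
           for a in range(10) for b in range(10))
```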
This is a controversial topic and it doesn’t have to be proven incontrovertibly true for my assertion that it’s based on empirical science to be accurate. This paper lays out the case for the computational theory of mental imagery, and that link also includes responses offering counterarguments.
OK, but then, how are my functions f and f’ not distinct computations? There are two TMs, one of which takes as input a tuple of two numbers (between 0 and 3) and returns their sum, while the other takes the same input, but returns the value of f’ as given in my table.
Indeed, the fact that both are given by different lookup tables—as explicitly provided—would, on the above definition, suffice to make them different computations. So what am I missing?
How on earth can you imagine that the lookup tables would be different?
The output of the box is a set of lights. If the transformation was being accomplished through a lookup table, the tables defining the f and f’ mappings would be exactly the same.
There are in fact an infinite number of possible interpretations of the box’s output, but this is entirely irrelevant to the nature of the computation.
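To pin down where the disagreement in this exchange sits, here is a purely illustrative sketch that writes out all three tables for a 2-bit adder box: the single table from switch patterns to lamp patterns (the ‘physical’ table), and the two decimal tables f and f’ obtained by decoding the lamp patterns in two different ways. The encodings are made up for the example.

```python
# Illustrative tables for a 2-bit adder box (encodings and names are made up).

def lamps_for(a, b):
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)   # three lamps, 1 = on

inputs = [(a, b) for a in range(4) for b in range(4)]

# One "physical" table: switch settings to lamp pattern.
physical_table = {(a, b): lamps_for(a, b) for a, b in inputs}

def decode_f(lamps):
    # read the lamps as a plain binary number
    return lamps[0] * 4 + lamps[1] * 2 + lamps[2]

def decode_f_prime(lamps):
    # read the lamps with 'off' counting as 1
    return (1 - lamps[0]) * 4 + (1 - lamps[1]) * 2 + (1 - lamps[2])

table_f = {k: decode_f(v) for k, v in physical_table.items()}
table_f_prime = {k: decode_f_prime(v) for k, v in physical_table.items()}

print(table_f == table_f_prime)   # False: the decimal tables differ, even
                                  # though both come from one physical table
```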