Downloading Your Consciousness Just Before Death.

I don’t want to get sidetracked into a discussion of mental imagery here and, furthermore, I’m pretty sure we’ve been through it before and I’m not interested in doing it again.

CTM is firmly grounded in reality, backed by extensive experimental results. You can’t make blanket statements like “understanding how the machine works” when there is a very deep hierarchy of different levels of implementation.

Regarding CTM and mental imagery, CTM makes a whole host of predictions that have turned out to be accurate, such as about the perception of optical illusions or the mental rotation of imagined objects, all of which provide evidence that the image is being reconstructed in the manner of a computer rather than “projected” and viewed by an internal mind’s eye.

I gave you a good definition of what is meant by a “symbol” in the context of computational theory and even a quote from Fodor’s book on the subject.

I didn’t see this before, but I agree with it. Basically you’ve come to the same conclusion that I expressed in #98, that both interpretations of what the box is doing are exactly equivalent. The computation that the box is doing is objectively defined by the logic and wiring inside it. I like the perspective that the only thing the external observer is doing is attributing a name to it.

I forgot to mention in the first sentence of that post, for the benefit of other readers, that HMHW was basing his no-CTM argument on the homunculus fallacy, the position being that if a computational system requires an interpreting observer to be meaningful, then if the mind were computational it would need such an observer, too, and so would that observer, and so on in an infinite regress. But this is not an issue since the premise is false, which is a good thing, because otherwise it would overturn much of cognitive science.

In the sense that any function is just a mapping from inputs to outputs (domain to codomain), including computable functions. So if you change the mapping, you change the function; if you change the function, you change the computation. Anything else (see below) just collapses to calling the sequence of states a system traverses a ‘computation’, but that’s really not computationalism, that’s just type-identity physicalism (the claim that a given neural firing pattern just is identical to a certain mental state).

Not quite. It’s not about the fact that different computations can be implemented by the same system, it’s that having a system implement any computation needs an external agent to interpret the syntactic vehicles the system manipulates, and that thus, any attempt to base mind on computation lapses into circularity.

That won’t work, though. It’s true that you can take the inputs and outputs of the computation for binary addition, and apply some pre- and post-computation to them to obtain the value table for the function f’ (in complexity science terms, you can perform a reduction of one function to the other), but neither does that make them the same function, nor does this actually solve the problem. Because you have to appeal to further computation to perform this reduction, and of course, that computation faces the same problem.

So, symbolically, if f is binary addition, and f’ is the other function defined above, we can define two further computations I and O such that:

f’ = O * f * I, that is, f’(x) = O(f(I(x)))

Where ‘*’ denotes the composition operation. That is, you take your input vector x, translate it to an input vector Ix for f, apply f, and then translate the resulting output vector into the output f’ would’ve given if applied to x.
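
To make the composition concrete, here’s a quick Python sketch. The particular f’ used below, which reads every switch and light with 0 and 1 swapped, is just an illustrative stand-in for the f’ defined earlier in the thread, not necessarily the same function:

```python
# Illustrative sketch only: this concrete f' (every bit read with 0 and 1
# swapped) is a stand-in for the f' defined earlier in the thread.

def f(x):
    # binary addition: x is a 4-tuple of bits, two 2-bit addends
    a = 2 * x[0] + x[1]
    b = 2 * x[2] + x[3]
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)   # 3-bit sum

def I(x):
    # pre-computation: reinterpret each input bit (swap 0 and 1)
    return tuple(1 - bit for bit in x)

def O(y):
    # post-computation: reinterpret each output bit (swap 0 and 1)
    return tuple(1 - bit for bit in y)

def f_prime(x):
    # f' = O * f * I, i.e. f'(x) = O(f(I(x)))
    return O(f(I(x)))

x = (0, 1, 1, 0)       # under the 'addition' reading: 1 + 2
print(f(x))            # -> (0, 1, 1), i.e. 3
print(f_prime(x))      # -> (1, 0, 0): same inputs, different value table
```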

You see, you haven’t made any headway on the problem of implementation—much the opposite: where before, you couldn’t decide whether f or f’ is the computation performed by D, now, you also have to decide whether a suitable system implements O and I!

But of course, for any system you claim does that, I can cook up an interpretation such that it computes some other function.

This strategy won’t help, either, because it trivializes the notion of computation, such that it just becomes co-extensive with the notion of physical evolution of a system (and thus, computationalism just collapses onto identity physicalism).

For what could the ‘computation’ be, such that it can equally well be regarded as f and f’? After all, both are completely different, if considered as (partial recursive) functions. Writing down algorithms computing either, they, likewise, would come out totally different. They’re implemented by different Turing machines, and so on. On any of the usual notions of computation, thus, they’d come out squarely different, and similar only in as much as they have the same domain and codomain.

In fact, we’re seeing an echo of Newman’s famous objection, here: if we’re willing to consider these two functions to be the same ‘computation’, then a ‘computation’ is just a specification of a domain and a codomain, as we can transform each of the functions defined over them into one another by means of an example such as the one I gave above. So ‘computation’ would simply be a specification of possible inputs and outputs, without any further notion of which inputs get mapped to what outputs—which of course goes contrary to every notion of what a computation is, as it’s exactly which inputs map to what outputs that usually interests us.

But that’s really getting ahead of ourselves a bit. To satisfy your contention, we’d have to find what’s left over once we remove any mapping to inputs and outputs. What remains of the computation once we stipulate that f and f’ should be ‘the same’.

The answer is, of course, not much: just flipped switches and blinking lights. Because if we strip away what individuates the two computations, all that we’re left with—all that we can be left with—is just the physical state of the system. But if that’s the case, then what we call ‘computation’ is just the same as the system’s physical evolution, i. e. the set of states it traverses.

Then, of course, nothing remains of computationalism (that distinguishes it from identity physicalism). Then, you’d have to say that a particular pattern of switches and lights is identical to a ‘computation’, and, by extension, a mental state.

So, if you want f and f’ to just be ‘different names for the same computation’, computationalism loses everything that makes it a distinct theory of the mind, and collapses onto identity physicalism, whose central (and, IMO, untenable) claim is just this: that a given pattern of switches and lights just is a mental experience.

We can imagine enhancing D with whatever it is that enables an agent to individuate the computation it performs to either f or f’. Say, we just tack on the relevant part of brain tissue (perhaps grown in the lab, to avoid problems with the ethics committee). Would we then have a device that implements a unique computation?

And of course, the answer is no: whatever that extra bit of brain tissue does, all I can know of it is some activation pattern; and all I can do with that is, again, interpret it in some way. And different interpretations will give rise to different computations.

It wouldn’t even help to involve the whole agent in the computation. Because even if that were to give a definite computation as a result, the original conclusion would still hold: we need to appeal to an external agent to fix a computation, and hence, can’t use computation to explain the agent’s capabilities. But moreover, what would happen, in such a case? I give the agent inputs, perhaps printed on a card, and receive outputs likewise. The agent consults the device, interpreting its inputs and outputs for me. So now the thing just implements whatever function the agent takes it to implement, right?

But that’s only true for the agent. Me, I send in symbols, and receive symbols; but it’s not a given that I interpret them the same way the agent does. I give them a card on which a circle is printed, which they interpret as ‘0’, but by which I, using a different language or alphabet, meant ‘1’. So this strategy is twice hopeless.

I still don’t really get what your example has to do with mine. Do you want to say that what conscious state is created isn’t relevant, as long as the behavior of the system fits? I. e. that no matter if I see a tiger planning to jump, or hallucinate a bowl of ice cream, I’ll be fine as long as I duck?

If so, then again, what you’re proposing isn’t computationalism, but perhaps some variant of behaviorism or, again, identity physicalism; or maybe an epiphenomenalist notion, where consciousness isn’t causally relevant to our behavior, but is just ‘along for the ride’. None of these sits well with computationalist ideas: either we again have a collapse of the notion of computation onto the mere behavior of a system, or what’s being computed simply doesn’t matter.

I don’t assume the interpreter, I pointed out that without one, there’s just no fact of the matter regarding what computation a system implements. The argument I gave, if correct, derives the necessity of interpretation in order to associate any given computation to a physical system.

Also, none of this threatens the possibility or utility of computational modeling. This is again just confusing the map for the territory. That you can use an orrery to model the solar system doesn’t in any way either imply or require that the solar system is made of wires and gears, and likewise, that you can model (aspects of) the brain computationally doesn’t imply that the brain is a computer.

I’ve pointed out above why this reply can’t work. Not only does it do violence to any notion of computation currently in use, trivializing it to merely stating input- and output-sets, but moreover, the ‘computationalism’ arrived at in this fashion is just identity physicalism.

So if you’re willing to go that far to ‘defend’ computationalism, you end up losing what makes it distinct as a theory of the mind.

To the contrary—a richer set of behavior makes these games even easier, and the resulting multiplicity of computations associated to a system (combinatorially) larger. The appeal to semantics here is, by the way, fallacious, since what I’m pointing out is exactly that there is no unique semantics attached to any set of symbols and their relations.

Emergence is only a contentful notion if you have some candidate properties capable of supporting the emergent properties. Otherwise, you’re appealing to magic: something unknown will do we-don’t-know-what, and poof, consciousness. That’s not a theory, that’s a statement of faith. But we’ve been down this road before, I think.

Nope. There may be an infinite number of potential computations going on in my laptop, but only one is ontologically privileged: the one which produces the pattern of pixels on the screen. All the others are just garbage. You don’t need an ‘interpreter’ to find out what the outputs of my laptop are - a good camera will do.

All the other hypothetical computations are just noise, nonsense. Hidden in the digits of pi are digits that describe exactly the content of next week’s *Game of Thrones*; that doesn’t mean I am going to stare at a circle to try to find this data. (Perhaps I might be better off, the way things are going). Nor am I going to stare at Searle’s Wall to find the computation going on inside my own head, or yours.

They all produce the same pattern of pixels. That’s the point.

nitpick: It is not actually known whether the expansion of pi contains all finite sequences of digits.

It’s actually a bit of a hole in mathematics: the only numbers we know to be normal are ones we’ve specifically constructed to be normal. We don’t have a good way to tell whether a given irrational number is normal.

Sigh! I think we’re now talking past each other, but still, I forge ahead … :slight_smile:

ETA: I’ve included my original replies for the sake of clarity.

If a mere description of mapping inputs to outputs seems to trivialize the notion of what computing is, the fault is not in my explanation but in the triviality of your example. Computing can be more generally defined, for the purposes of this discussion, as the operation of a set of rules, embodied in the states of the system doing the computing, which transforms a set of input symbols into a set of output symbols. There is nothing “trivial” about this as that’s exactly what a Turing machine does. And again, if I want to infer what these rules or states are from the behavior of the system, any inference that correctly predicts the system’s behavior is exactly equivalent to any other. It just so happens that in real computing systems of realistic complexity, the existence of such multiple arbitrary interpretations becomes vanishingly improbable.
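
To make that definition concrete, here’s a tiny rule-table sketch in Python (a toy of my own, not tied to the box example):

```python
# A purely illustrative sketch of "a set of rules, embodied in the states of
# the system, which transforms input symbols into output symbols": a tiny
# Turing machine that flips every bit on its tape.

# (state, symbol read) -> (symbol written, head movement, next state)
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_",  0, "halt"),   # blank marks the end of the input
}

def run(tape):
    cells = list(tape) + ["_"]          # append a blank end marker
    head, state = 0, "scan"
    while state != "halt":
        written, move, state = RULES[(state, cells[head])]
        cells[head] = written
        head += move
    return "".join(cells).rstrip("_")

print(run("0110"))   # -> "1001"
```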

As for “identity physicalism”, if your intent is to disparage the mind-brain identity proposition, you may as well remove the “just” in front of it. This is, in my view, one of the central tenets of the computational theory of mind. It’s a feature, not a bug!

Let me explain what I meant by “richer semantics”. Instead of a box with switches as inputs and lights as outputs, you have a speech-to-text system. Its input is speech, in which it is able to distinguish individual words, understand their grammatical context, and resolve ambiguities due to homophones and the like based on understanding context, and it produces flawless English text as output, complete with correct punctuation. Internally, this is just a rules-based system operating on a large array of input symbols to produce a large array of output symbols. But good luck examining its symbol-transforming behavior and coming up with any interpretation other than English speech-to-text processing. And if you did, congratulations, but you’ve just described a system that is computationally exactly equivalent.

The utility of such a system, of course, relies on a user (an observer), and moreover, an observer who speaks English. But that in no way changes the objective nature of the computation itself, nor is it in any way relevant to its computational specification.

“Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.”

If we ignore consciousness for a minute and just think in terms of brain states, the point is that, just like with your box but on a larger scale, there could be multiple environments in which the brain states evolve in exactly the same manner and successfully provide the correct responses for survival.

Let’s pretend there is an alternate world with two differences:
1 - Light from the sun has a different mix of intensities at different wavelengths, so to us things look different
2 - The rods, cones, and RGCs in the alien’s retina all have shifted sensitivities, so that their activation under various conditions matches that of our cells under comparable conditions (sunny day, cloudy day, etc.)

If we assume everything else about the environment and our alien is the same, then, despite differences in the external environment, the internal brain states would be the same.

Assuming you agree with the hypothetical, would there be any difference in the conscious experience? It seems like the answer must be no, because there is no other signal available to create a different conscious experience.

If you assume all elements of a hypothetical are identical then the elements are hypothetically identical. Or, at least they will be until you assume them to be otherwise.

It’s possible that identity physicalism or an epiphenomenalist notion describes the model I was picturing in my head. I was trying to start with the basics (e.g. functional mappings without names attached) and build from there.

Correct me if I’m wrong, but I don’t believe anyone on the planet has actually achieved anything with respect to consciousness other than exploring the pros and cons of lots of different angles (which is not to minimize the effort or intellect involved).

When you state it would overturn much of cognitive science, that sounds like something has actually been conclusively figured out.

That’s a misreading of what I was saying. What I’m saying is that the blanket rejection of the computation theory of mind being advanced by HMHW would overturn many widely accepted principles of cognition that form a major underpinning of cognitive science. As Fodor has said, CTM is a powerful explanatory device that should not, however, be taken as a complete explanation for all of cognition. Nowhere in this is there any implication that we have a functional understanding of the mechanisms of consciousness. That’s not what I was implying at all.

Maybe there is a terminology issue:
I believe HMHW was stating objections to the idea that computation can give rise to consciousness. You seem to object to that, but at the same time you agree that CTM hasn’t made any concrete progress in describing consciousness (nor has any other theory).

I don’t want to put words in HMHW’s mouth, but I don’t believe he was rejecting the idea that our brains probably perform symbolic processing in some cases; rather, he was saying that it’s problematic to try to describe how those same processes can give rise to consciousness.

This is just waffling on the notion of computation. The point still remains: my f and f’ are different computations (as again, otherwise, computation collapses to physical evolution, removing everything that makes computationalism a distinct philosophical position), and whether the system implements one or the other depends on how it is interpreted.

After all, the important question is merely: are you able to use my device to compute the sum of two inputs? I hope you’ll agree that the answer is yes. And furthermore, are you able to use my device to compute f’? And again, the answer is yes. So, simply in terms of actually computing stuff, it’s perfectly clear that the device can be interpreted as computing those functions. Any notion of computation that, for instance, claims that one doesn’t compute the sum of two numbers with it, is just a bad one, and really, can only be arrived at by highly motivated reasoning. The device computes the sum (and f’) in the same way your pocket calculator does, in the same way your PC does; and that way is all there is to computation.

You’re trying to throw more computation at this problem to make it go away, but this can only compound it. I’ve given the demonstration above: no matter what further computations you add, you run into the same problem, multiplied. The only thing you lose is clarity, which allows you to imagine that maybe, once you can’t really quite clearly conceive of everything the system does anymore, something’s just gonna happen to fix everything. But the above constitutes proof that this isn’t so. If you just pipe the output of one computation into another, that doesn’t do anything to fix its interpretation. Adding another won’t help, either. And so on. While it’s easy to eventually get to systems too unwieldy to clearly imagine, it follows from the simple example by induction that no further complications are going to help, at all—indeed, they can only make matters worse, by introducing more degrees of interpretational freedom.

You consider this to be a trivial example, but that’s its main virtue: it shows the problem clearly, unlike the case where you just end up appealing to complexity, claiming that something we can’t think of will come along to fix everything.

But it’s completely clear from the example that any system that can be considered to implement one computation can, on equal grounds, be considered to implement many others. This is fatal to computationalism, and your refusal to engage the actual argument isn’t going to change that.

Anyway, there’s one way to demonstrate that you’re right: show which computation the device D implements without any need for interpretation.

Again, the exact opposite is the case. We can consider everything that a modern computer does as a chaining of sub-parts of the form of my device D, or even more simple elements, like individual NAND-gates. The number of distinct computations that the total system can be interpreted to perform is the product of the number of computations each of its sub-parts can be interpreted to perform. The more complex you make the system, the worse the problem gets, as there are more and more functions that the system can be taken to implement.
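
Here’s a rough Python sketch of that counting. Restricting each gate to two admissible readings of its voltage levels, and treating the readings as independent, are simplifying assumptions for the illustration:

```python
# Rough sketch of the counting argument: the device's physical behaviour is a
# fixed table of voltage levels; which Boolean function it "computes" depends
# on how those levels are read.

PHYSICAL = {("H", "H"): "L", ("H", "L"): "H",
            ("L", "H"): "H", ("L", "L"): "H"}   # behaves like NAND if H = 1

def interpret(label):
    """Read the same physical table under a given voltage -> bit labelling."""
    return tuple(sorted(
        ((label[a], label[b]), label[out]) for (a, b), out in PHYSICAL.items()
    ))

readings = [{"H": 1, "L": 0}, {"H": 0, "L": 1}]
functions = {interpret(r) for r in readings}
print(len(functions))        # -> 2: NAND under one reading, NOR under the other

# Chain n such gates, each read independently: the number of candidate
# interpretations of the whole circuit is the product over the parts.
n = 10
print(len(functions) ** n)   # -> 1024
```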

Then you either misunderstand computationalism or identity theory. Computationalism was developed as an elaboration on functionalism, which was proposed to counter an attack that (many think) dooms identity physicalism, namely, multiple realizability. A state of mind can’t be identical to a neuron firing pattern if the same mental state can be realized in a silicon brain, for example, since a silicon brain’s activation pattern and a neuron firing pattern are distinct objects. So you have a contradiction of the form A = B, A = C, but B != C.

To answer this objection, the idea was developed that mental states are identical not to physical, but to functional properties, and, on computationalism, particularly computational functional properties. If computationalism thus collapses onto identity physicalism—which it does, if you strip away the distinction between f and f’—computationalism fails to save physicalism from the threat of multiple realizability.

In my experience, people who appeal to this quote typically just confuse the limits of their imagination with the limits of what’s possible.

Agreed. Likewise, one could consider my device to react to stimuli via light signals; the switches are a stand-in for a sensory apparatus, and what flips the switches is irrelevant.

But that’s not what I’m getting at. Rather, I want to point out that what computation we consider the system to perform is something over and above its stimulus-response behavior; it’s purely an interpretational gloss over its physical evolution. And that’s all that computation comes down to. As such, it can never provide the footing for mental capacities, as it is itself dependent on them.

What I’m objecting to is the notion that brains give rise to minds through implementing the right sort of computation—because without mind (or at least, interpretation), there is no fact of the matter regarding which computation any given physical system (including brains) implements.

HMHW, I’m curious about your position on this, same or different conscious experience?

Stepping back and looking at the problem in general, I have two conflicting thoughts:
1 - Every single theory proposed so far seems to have fatal flaws and that is after significant effort by really smart people. This kind of points towards the answer not being based on logic and math.

2 - But, there have been math problems that took centuries to solve, maybe this is one of them.

Before I respond more comprehensively to some of the other points you raise, I’m curious as to why you didn’t respond directly to my speech-to-text system example, as I think it directly contradicts this claim. Contrary to what you state, more complex systems that have purposeful computations are more and more constrained to producing those outputs – and only those outputs – that serve the intended purpose. Let me reiterate that here.

That it’s possible to have multiple interpretations of the results of your box with switches and lights, and apparently impossible to have such multiple interpretations of the computations of an advanced speech-to-text system, is absolutely NOT a matter of obfuscation or difficulty; it reflects a qualitative change where the identity of the computation itself has become intrinsically fixed. And when I refer to the “system”, this must be taken to mean the system in its holistic entirety. It is absolutely irrelevant that you can play this game within individual low-level sub-components like logic gates or even small program modules, and then declare the entire thing to be therefore the product of a large number of arbitrary interpretations! As the complexity of a computing system grows, its qualitative attributes change in fundamental ways, and they can’t necessarily be simplistically inferred from its smaller components. This critical principle is embodied in concepts like synergy and emergent properties.

Incidentally, my interest in this matter is not abstract philosophical debate but support for CTM and its implication of multiple realizability and thus for the premise that most of (at this point I’ll settle for “most of” rather than “all of”) the functions of the human brain can be and will be implemented in digital computers. There are many theorists who not only claim that “all of” is appropriate, but that intelligent machines will exceed human capabilities in a general-purpose fashion. I see no reason to doubt them.

Two brains that don’t differ physically, don’t differ regarding the experience they produce (or at least, I have no reason to think they should, and lots of reasons to think the notion would be incoherent).

It introduces nothing new, quite simply. It’s possible to map each grammatically correct sentence in English to a grammatically correct sentence in another language having a different meaning, while keeping the relations between sentences intact (such as, which sentence would be a reasonable answer to what question). So somebody speaking that other language would converse with the English text-producing system in their language about something entirely different than an English-speaking person, while exchanging the same symbolic vehicles with it.

The reason that this is possible is that a (single-language) dictionary is just a set of relations; so ‘dog’ might get explained with terms like ‘furry’, ‘four-legged’, ‘animal’, and so on. So you merely need to map ‘dog’ to some other term, as well as the explanative terms, and the ones explaining those, and so on; this will keep the web of relations the same, while changing the referents of the individual words.

You can explicitly write this down for simple enough languages—indeed, that’s exactly what my example above amounts to. But moving to a more complicated language doesn’t add anything essentially new; the web of relations gets more complex, a larger graph, but what the nodes stand for still isn’t determined by the connections.
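
As a toy illustration (the three-word ‘dictionary’ and the particular swap are made up for the example), here it is in a few lines of Python: relabelling the nodes leaves the web of relations untouched, so the relations alone can’t pin down the referents:

```python
# Toy illustration: a made-up three-word 'dictionary' as a web of relations,
# and a relabelling that swaps 'dog' and 'cat'. The relabelled web comes out
# identical, so the structure alone can't fix what the labels refer to.

RELATIONS = {                       # word -> terms used to explain it
    "dog":  {"furry", "animal"},
    "cat":  {"furry", "animal"},
    "fish": {"animal"},
}

SWAP = {"dog": "cat", "cat": "dog",
        "fish": "fish", "furry": "furry", "animal": "animal"}

relabelled = {SWAP[w]: {SWAP[t] for t in terms} for w, terms in RELATIONS.items()}

print(relabelled == RELATIONS)      # -> True: same graph, swapped referents
```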

You appear to be coy about directly answering the challenge. Once again you’ve quoted a small part of what I said but not the meat of it.

To repeat and summarize, the challenge is that you claim that your trivially simple box example suffices as proof of the multiple-interpretations thesis, and that much more complex systems are “even worse” from my standpoint because they have, in effect, a very large number of such boxes, each of which is performing computations subject to equally arbitrary interpretations. I’m not saying that I’m right and you’re wrong on what may ultimately be an article of faith, but I am saying that this particular argument is deeply flawed.

Again, you ignore the very important point that great increases in complexity result in qualitative (not just quantitative) changes in the properties of computational systems. We call these qualitative changes things like synergy and emergent properties. It isn’t magic, though. What’s missing from your analysis is any acknowledgment of the tremendous computational implications of the interconnections and data flows between these components, none of which is apparent from any observation of the components themselves, but is only visible when viewing the behavior of the system as a whole. It is here that we observe non-trivial constraints on the range of interpretations of what the computing system is actually doing, to the point that the set of possible interpretations may equal exactly one.

I know I certainly don’t need to lecture you about the broader fallacies that arise from extending an understanding of trivial components to assumptions about much more complex computational systems, but I can’t help but reflect on how this is what led many to proclaim that “computers can’t really think” and “they can only do what they’re programmed to do”, and consequently led Hubert Dreyfus to claim that no computer would ever be able to play better than a child’s level of chess. This claim was put to rest when the PDP-10 MacHack program beat him badly way back in 1967*, and we all know the evolution of chess programs to grandmaster status today. Damn, those programmers must be good chess players! :smiley:


  • I had to look up the year because I’d forgotten. Which was when I discovered that Dreyfus had passed away in 2017. I guess I’ll have to stop saying nasty things about him now. Pity that he’ll never see the amazing things he claimed could never happen.