View Single Post
  #103  
Old 05-19-2019, 04:03 AM
Half Man Half Wit
Guest
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by RaftPeople View Post
Thoughts:
Your box example (and my brain example) are at their cores just input to output mappings.
In the sense that any function is just a mapping from inputs to outputs (domain to codomain), including computable functions. So if you change the mapping, you change the function; if you change the function, you change the computation. Anything else (see below) just collapses to calling the sequence of states a system traverses a 'computation', but that's really not computationalism; it's just type-identity physicalism (the claim that a given neural firing pattern just is identical to a certain mental state).
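Just to make the 'a function is its mapping' point concrete (a trivial sketch, nothing more): two lookup tables over the same domain and codomain that differ in even one entry are different functions, and hence, for the computationalist, different computations.

Code:
# Two functions over the same domain/codomain, written as bare lookup tables.
f1 = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR
f2 = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}  # OR -- same domain, same codomain
print(f1 == f2)  # False: change the mapping and you've changed the function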

Quote:
This is where I'm not sure about your conclusion. I believe your conclusion is that consciousness can't be said to be created by a specific computation because that very computation is also the computation used for function XYZ, just like your box computes multiple functions simultaneously because they share a mapping of input to output (if you map your problem's input to the input and output to the output correctly).
Not quite. It's not about the fact that different computations can be implemented by the same system; it's that for a system to implement any computation at all, an external agent is needed to interpret the syntactic vehicles the system manipulates, and that thus, any attempt to base mind on computation lapses into circularity.

Quote:
I believe this is the same argument you mentioned one other time that if we map inputs and outputs properly then a rock can perform any computation. But in reality the computation just got pushed to the input and output mapping.



So, in summary:
1 - The mapping of function input to the machinery's input and then machinery's output to function output has computation embedded in the mappings to input and output that are external to your machine. You would need to consider the entire system.
That won't work, though. It's true that you can take the inputs and outputs of the computation for binary addition and apply some pre- and post-computation to them to obtain the value table for the function f' (in complexity-theoretic terms, you can perform a reduction of one function to the other), but neither does that make them the same function, nor does this actually solve the problem, because you have to appeal to further computation to perform this reduction, and of course, that computation faces the same problem.

So, symbolically, if f is binary addition, and f' is the other function defined above, we can define two further computations I and O such that:

f'(x) = (O * f * I)(x) = O(f(I(x)))
where '*' denotes the composition operation. That is, you take your input vector x, translate it into an input vector I(x) for f, apply f, and then translate the resulting output vector into the output f' would've given if applied to x.
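As a minimal sketch of that reduction (the actual f' was defined earlier in the thread; here I just use a bit-flipping relabelling as a stand-in), note that I and O are themselves computations that have to be carried out somewhere:

Code:
# Toy sketch: f is addition of two 2-bit numbers given as bit-tuples; this
# 'f_prime' is only a hypothetical stand-in for the f' defined earlier,
# obtained by relabelling every bit (0 <-> 1) on the way in and out.

def f(a, b):
    # binary addition: two 2-bit inputs, one 3-bit output
    to_int = lambda bits: bits[0] * 2 + bits[1]
    s = to_int(a) + to_int(b)
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)

def I(a, b):
    # pre-computation: relabel the input symbols
    flip = lambda bits: tuple(1 - x for x in bits)
    return flip(a), flip(b)

def O(y):
    # post-computation: relabel the output symbols
    return tuple(1 - x for x in y)

def f_prime(a, b):
    # f'(x) = O(f(I(x))) -- but I and O are further computations, facing
    # the very same problem of implementation as f itself
    return O(f(*I(a, b)))

print(f((0, 1), (1, 1)))        # 1 + 3 = 4 -> (1, 0, 0)
print(f_prime((0, 1), (1, 1)))  # a different function over the same inputs: (1, 0, 1)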

You see, you haven't made any headway on the problem of implementation---much the opposite: where before, you couldn't decide whether f or f' is the computation performed by D, now you also have to decide whether a suitable system implements O and I!

But of course, for any system you claim does that, I can cook up an interpretation such that it computes some other function.

Quote:
Or if there were no mappings required, then we just happen to have given multiple names to the same function.
This strategy won't help, either, because it trivializes the notion of computation, such that it just becomes co-extensive with the notion of the physical evolution of a system (and thus, computationalism just collapses onto identity physicalism).

For what could the 'computation' be, such that it can equally well be regarded as f and f'? After all, the two are completely different, considered as (partial recursive) functions. Algorithms computing them would likewise come out totally different. They're implemented by different Turing machines, and so on. On any of the usual notions of computation, then, they'd come out squarely different, and similar only insofar as they have the same domain and codomain.

In fact, we're seeing an echo of Newman's famous objection here: if we're willing to consider these two functions to be the same 'computation', then a 'computation' is just a specification of a domain and a codomain, since we can transform any function defined over them into any other by means of a construction such as the one I gave above. So a 'computation' would simply be a specification of possible inputs and outputs, without any further notion of which inputs get mapped to what outputs---which of course runs contrary to every notion of what a computation is, as it's exactly which inputs map to what outputs that usually interests us.
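To put a rough number on how little that pins down (using made-up sizes purely for illustration): if the domain contains 16 elements (two 2-bit inputs, say) and the codomain 8 (a 3-bit output), then there are 8^16 = 2^48, roughly 2.8 * 10^14, distinct functions between them, every one of which would count as 'the same computation' on that reading.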

But that's really getting ahead of ourselves a bit. To satisfy your contention, we'd have to find what's left over once we remove any mapping to inputs and outputs. What remains of the computation once we stipulate that f and f' should be 'the same'.

The answer is, of course, not much: just flipped switches and blinking lights. Because if we strip away what individuates the two computations, all that we're left with---all that we can be left with---is just the physical state of the system. But if that's the case, then what we call 'computation' is just the same as the system's physical evolution, i. e. the set of states it traverses.
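To see how little the bare switches and lights fix (again just a toy illustration with invented conventions): the physical record is only a pattern of lit and unlit lamps, and it's the reading convention that turns it into a value.

Code:
# The bare physics: switch patterns mapped to light patterns (True = lit).
# Toy example only; the actual box discussed in this thread may differ.
physics = {
    (False, True): (True, False),
    (True, False): (False, True),
}

read_A = lambda lights: tuple(1 if lit else 0 for lit in lights)  # lit lamp means '1'
read_B = lambda lights: tuple(0 if lit else 1 for lit in lights)  # lit lamp means '0'

for switches, lights in physics.items():
    print(switches, '->', read_A(lights), 'under reading A,', read_B(lights), 'under reading B')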

Then, of course, nothing remains of computationalism (that distinguishes it from identity physicalism). Then, you'd have to say that a particular pattern of switches and lights is identical to a 'computation', and, by extension, a mental state.

So, if you want f and f' to just be 'different names for the same computation', computationalism loses everything that makes it a distinct theory of the mind, and collapses onto identity physicalism, whose central (and, IMO, untenable) claim is just this: that a given pattern of switches and lights just is a mental experience.

Quote:
2 - You state that the interpretation requires an external agent, but that is only to provide the additional computations embedded in the mappings into and out of your machine system. If we consider the entire system/function, is there still a need for an external agent?
We can imagine enhancing D with whatever it is that enables an agent to individuate the computation it performs as either f or f'. Say, we just tack on the relevant part of brain tissue (perhaps grown in the lab, to avoid problems with the ethics committee). Would we then have a device that implements a unique computation?

And of course, the answer is no: whatever that extra bit of brain tissue does, all I can know of it is some activation pattern; and all I can do with that is, again, interpret it in some way. And different interpretations will give rise to different computations.

It wouldn't even help to involve the whole agent in the computation. Because even if that were to give a definite computation as a result, the original conclusion would still hold: we need to appeal to an external agent to fix a computation, and hence, can't use computation to explain the agent's capabilities. But moreover, what would happen, in such a case? I give the agent inputs, perhaps printed on a card, and receive outputs likewise. The agent consults the device, interpreting its inputs and outputs for me. So now the thing just implements whatever function the agent takes it to implement, right?

But that's only true for the agent. Me, I send in symbols, and receive symbols; but it's not a given that I interpret them the same way the agent does. I give them a card on which a circle is printed, which they interpret as '0', but by which I, using a different language or alphabet, meant '1'. So this strategy is twice hopeless.

Quote:
3 - Even if we consider the entire system, there are still common computations that can serve many different purposes. In a beetle there could be function X that takes 8 inputs and spits out 3 outputs that serves some larger process, and in a fish that exact same mapping could be applied in a different area of the brain serving a different larger purpose. Is it really a problem if the same conscious state can arise in many different environments (this is my alien purple world example)? The beetle and the fish share some mappings, why is consciousness so special that the mappings can't be shared in different environments?
I still don't really get what your example has to do with mine. Do you want to say that what conscious state is created isn't relevant, as long as the behavior of the system fits? I. e. that no matter whether I see a tiger planning to jump, or hallucinate a bowl of ice cream, I'll be fine as long as I duck?

If so, then again, what you're proposing isn't computationalism, but perhaps some variant of behaviorism or, again, identity physicalism; or maybe an epiphenomenalist notion, where consciousness isn't causally relevant to our behavior, but is just 'along for the ride'. Neither of them sits well with computationalist ideas---either, we again have a collapse of the notion of computation onto the mere behavior of a system, or what's being computed simply doesn't matter.

Quote:
Originally Posted by wolfpup View Post
Well, no, that conclusion is true only if you assume the need for the aforementioned agency, or interpreter, as a prerequisite for computation, a notion that I rejected from the beginning -- a notion that, if true, would undermine pretty much the whole of CTM and most of modern cognitive science along with it.
I don't assume the interpreter; I pointed out that without one, there's just no fact of the matter regarding what computation a system implements. The argument I gave, if correct, derives the necessity of interpretation in order to associate any given computation with a physical system.

Also, none of this threatens the possibility or utility of computational modeling. This is again just mistaking the map for the territory. That you can use an orrery to model the solar system doesn't in any way imply or require that the solar system is made of wires and gears, and likewise, that you can model (aspects of) the brain computationally doesn't imply that the brain is a computer.

Quote:
So the "computation" it's doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent.
I've pointed out above why this reply can't work. Not only does it do violence to any notion of computation currently in use, trivializing it to merely stating input- and output-sets, but moreover, the 'computationalism' arrived at in this fashion is just identity physicalism.

So if you're willing to go that far to 'defend' computationalism, you end up losing what makes it distinct as a theory of the mind.

Quote:
The fact remains that the fundamental thing that the box is doing doesn't require an observer to interpret, and neither does any computational system. The difference with real computational systems, including the brain, is that there is a very rich set of semantics associated with their inputs and outputs which makes it essentially impossible to play the little game of switcheroo that you were engaging in.
To the contrary---a richer set of behavior makes these games even easier, and the resulting multiplicity of computations associated with a system (combinatorially) larger. The appeal to semantics here is, by the way, fallacious, since what I'm pointing out is exactly that there is no unique semantics attached to any set of symbols and their relations.
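Just to indicate the scale (with invented numbers): if a system has k distinguishable output states, then relabellings of those states alone already yield up to k! candidate functions it could be taken to compute; for a modest display of 8 lamps (k = 256), that's 256!, on the order of 10^507, reinterpretations, before we've even touched the inputs.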

Quote:
FTR, I don't claim to have solved the problem of consciousness. However, as you well know, emergent properties are a real thing, and if one is hesitant to say "that's why we have consciousness", we can at least say that emergent properties are a very good candidate explanation of attributes like that which appear to exist on a continuum in different intelligent species to an extent that empirically appears related to the level of intelligence. They are a particularly good candidate in view of the fact that there is not even remotely any other plausible explanation, other than "mystical soul" or "magic".
Emergence is only a contentful notion if you have some candidate base properties capable of supporting the emergent ones. Otherwise, you're appealing to magic---something unknown will do we-don't-know-what, and poof, consciousness. That's not a theory, that's a statement of faith. But we've been down this road before, I think.

Last edited by Half Man Half Wit; 05-19-2019 at 04:04 AM.