  #101  
Old 05-18-2019, 04:08 PM
wolfpup
Quote:
Originally Posted by RaftPeople View Post
Then how did you link your functional description of cognition to what is NOT going on inside the biology previously (not visual area, no analog signals)? You seem to want to make statements sometimes about what is going on in the biology, and then at other times you want to state that it's irrelevant.

Help me understand your position, when is the biology relevant for understanding human cognition and when is it not relevant?

Isn't it putting the cart before the horse to decide in advance that the machinery is doing X without understanding how the machine works?

How can CTM ever make a prediction about the system if you can't ground it in reality?

What predictions does CTM make that can be used to determine if mental imagery is symbolic or not?

Again, how can you have a theory that doesn't even have a concrete understanding of what is a symbol and what isn't a symbol?
I don't want to get sidetracked into a discussion of mental imagery here and, furthermore, I'm pretty sure we've been through it before and I'm not interested in doing it again.

CTM is absolutely grounded in reality based on extensive experimental results. You can't make blanket statements like "understanding how the machine works" when there is a very deep hierarchy of different levels of implementation.

Regarding CTM and mental imagery, CTM makes a whole host of predictions that have turned out to be accurate, such as those concerning the perception of optical illusions or the rotation of objects in mental images, all of which provide evidence that the image is being reconstructed in the manner of a computer rather than "projected" and viewed by an internal mind's eye.

I gave you a good definition of what is meant by a "symbol" in the context of computational theory and even a quote from Fodor's book on the subject.
  #102  
Old 05-18-2019, 05:22 PM
wolfpup
Quote:
Originally Posted by RaftPeople View Post
Your box example (and my brain example) are at their cores just input to output mappings. We want to attach names to the set of mappings (e.g. binary addition) which introduces the issue of an external agent being required to choose the specific name for what is being computed.
I didn't see this before, but I agree with it. Basically you've come to the same conclusion that I expressed in #98, that both interpretations of what the box is doing are exactly equivalent. The computation that the box is doing is objectively defined by the logic and wiring inside it. I like the perspective that the only thing the external observer is doing is attributing a name to it.

I forgot to mention in the first sentence of that post, for the benefit of other readers, that HMHW was basing his no-CTM argument on the homunculus fallacy: the position that if a computational system requires an interpreting observer to be meaningful, then if the mind were computational it would need such an observer too, and so would that observer, and so on in an infinite regress. But this is not an issue since the premise is false, which is a good thing, because otherwise it would overturn much of cognitive science.
  #103  
Old 05-19-2019, 05:03 AM
Half Man Half Wit
Quote:
Originally Posted by RaftPeople View Post
Thoughts:
Your box example (and my brain example) are at their cores just input to output mappings.
In the sense that any function is just a mapping from inputs to outputs (domain to codomain), including computable functions. So if you change the mapping, you change the function; if you change the function, you change the computation. Anything else (see below) just collapses to calling the sequence of states a system traverses a 'computation', but that's really not computationalism, that's just type-identity physicalism (the claim that a given neural firing pattern just is identical to a certain mental state).

Quote:
This is where I'm not sure about your conclusion. I believe your conclusion is that consciousness can't be said to be created by a specific computation because that very computation is also the computation used for function XYZ, just like your box computes multiple functions simultaneously because they share a mapping of input to output (if you map your problem's input to the input and output to the output correctly).
Not quite. It's not about the fact that different computations can be implemented by the same system, it's that having a system implement any computation needs an external agent to interpret the syntactic vehicles the system manipulates, and that thus, any attempt to base mind on computation lapses into circularity.

Quote:
I believe this is the same argument you mentioned one other time that if we map inputs and outputs properly then a rock can perform any computation. But in reality the computation just got pushed to the input and output mapping.



So, in summary:
1 - The mapping of function input to the machinery's input and then machinery's output to function output has computation embedded in the mappings to input and output that are external to your machine. You would need to consider the entire system.
That won't work, though. It's true that you can take the inputs and outputs of the computation for binary addition, and apply some pre- and post-computation to them to obtain the value table for the function f' (in complexity science terms, you can perform a reduction of one function to the other), but neither does that make them the same function, nor does this actually solve the problem. Because you have to appeal to further computation to perform this reduction, and of course, that computation faces the same problem.

So, symbolically, if f is binary addition, and f' is the other function defined above, we can define two further computations I and O such that:

f'x = O*f*Ix
Where '*' denotes the composition operation. That is, you take your input vector x, translate it to an input vector Ix for f, apply f, and then translate the resulting output vector into the output f' would've given if applied to x.

You see, you haven't made any headway on the problem of implementation---much the opposite: where before, you couldn't decide whether f or f' is the computation performed by D, now you also have to decide whether a suitable system implements O and I!

But of course, for any system you claim does that, I can cook up an interpretation such that it computes some other function.
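To make this concrete, here is a rough Python sketch (the box's table, the labelings, and the second reading are illustrative stand-ins of my own; they are not the exact device D and function f' defined earlier in the thread). The physical facts are one fixed table from switch positions to light states; which arithmetic function gets 'computed' depends entirely on how an observer labels those states:

Code:
# Physical behavior of the box: a fixed, observer-independent table from
# switch positions to light states.
PHYSICAL = {
    ('down', 'down'): ('off', 'off'),
    ('down', 'up'):   ('off', 'on'),
    ('up',   'down'): ('off', 'on'),
    ('up',   'up'):   ('on',  'off'),
}

def read_as_function(switch_one, light_one):
    # Turn the physical table into a numerical function under a labeling:
    # switch_one is the switch position read as 1, light_one the light state read as 1.
    table = {}
    for (s1, s2), (l1, l2) in PHYSICAL.items():
        a = 1 if s1 == switch_one else 0
        b = 1 if s2 == switch_one else 0
        value = 2 * (1 if l1 == light_one else 0) + (1 if l2 == light_one else 0)
        table[(a, b)] = value
    return table

f = read_as_function(switch_one='up', light_one='on')     # reads as a + b
g = read_as_function(switch_one='down', light_one='off')  # reads as a + b + 1

print(f)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}
print(g)  # {(1, 1): 3, (1, 0): 2, (0, 1): 2, (0, 0): 1}

The same move extends to the translator computations I and O above: any machinery you add to fix the reading is itself just another table whose labeling is equally up for grabs.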

Quote:
Or if there were no mappings required, then we just happen to have given multiple names to the same function.
This strategy won't help, either, on the account that it trivializes the notion of computation, such that it just becomes co-extensive with the notion of physical evolution of a system (and thus, computationalism just collapses onto identity physicalism).

For what could the 'computation' be, such that it can equally well be regarded as f and f'? After all, both are completely different, if considered as (partial recursive) functions. Writing down algorithms computing either, they, likewise, would come out totally different. They're implemented by different Turing machines, and so on. On any of the usual notions of computation, thus, they'd come out squarely different, and similar only in as much as they have the same domain and codomain.

In fact, we're seeing an echo of Newman's famous objection, here: if we're willing to consider these two functions to be the same 'computation', then a 'computation' is just a specification of a domain and a codomain, as we can transform each of the functions defined over them into one another by means of an example such as the one I gave above. So 'computation' would simply be a specification of possible inputs and outputs, without any further notion of which inputs get mapped to what outputs---which of course goes contrary to every notion of what a computation is, as it's exactly which inputs map to what outputs that usually interests us.

But that's really getting ahead of ourselves a bit. To satisfy your contention, we'd have to find what's left over once we remove any mapping to inputs and outputs. What remains of the computation once we stipulate that f and f' should be 'the same'.

The answer is, of course, not much: just flipped switches and blinking lights. Because if we strip away what individuates the two computations, all that we're left with---all that we can be left with---is just the physical state of the system. But if that's the case, then what we call 'computation' is just the same as the system's physical evolution, i. e. the set of states it traverses.

Then, of course, nothing remains of computationalism (that distinguishes it from identity physicalism). Then, you'd have to say that a particular pattern of switches and lights is identical to a 'computation', and, by extension, a mental state.

So, if you want f and f' to just be 'different names for the same computation', computationalism loses everything that makes it a distinct theory of the mind, and collapses onto identity physicalism, whose central (and, IMO, untenable) claim is just this: that a given pattern of switches and lights just is a mental experience.

Quote:
2 - You state that the interpretation requires an external agent, but that is only to provide the additional computations embedded in the mappings into and out of your machine system. If we consider the entire system/function, is there still a need for an external agent?
We can imagine enhancing D with whatever it is that enables an agent to individuate the computation it performs to either f or f'. Say, we just tack on the relevant part of brain tissue (perhaps grown in the lab, to avoid problems with the ethics committee). Would we then have a device that implements a unique computation?

And of course, the answer is no: whatever that extra bit of brain tissue does, all I can know of it is some activation pattern; and all I can do with that is, again, interpret it in some way. And different interpretations will give rise to different computations.

It wouldn't even help to involve the whole agent in the computation. Because even if that were to give a definite computation as a result, the original conclusion would still hold: we need to appeal to an external agent to fix a computation, and hence, can't use computation to explain the agent's capabilities. But moreover, what would happen, in such a case? I give the agent inputs, perhaps printed on a card, and receive outputs likewise. The agent consults the device, interpreting its inputs and outputs for me. So now the thing just implements whatever function the agent takes it to implement, right?

But that's only true for the agent. Me, I send in symbols, and receive symbols; but it's not a given that I interpret them the same way the agent does. I give them a card onto which a circle is printed, which they interpret as '0', but by which I, using a different language or alphabet, meant '1'. So this strategy is twice hopeless.

Quote:
3 - Even if we consider the entire system, there are still common computations that can serve many different purposes. In a beetle there could be function X that takes 8 inputs and spits out 3 outputs that serves some larger process, and in a fish that exact same mapping could be applied in a different area of the brain serving a different larger purpose. Is it really a problem if the same conscious state can arise in many different environments (this is my alien purple world example)? The beetle and the fish share some mappings; why is consciousness so special that the mappings can't be shared in different environments?
I still don't really get what your example has to do with mine. Do you want to say that what conscious state is created isn't relevant, as long as the behavior of the system fits? I.e., that no matter whether I see a tiger planning to jump or hallucinate a bowl of ice cream, I'll be fine as long as I duck?

If so, then again, what you're proposing isn't computationalism, but perhaps some variant of behaviorism or, again, identity physicalism; or maybe an epiphenomenalist notion, where consciousness isn't causally relevant to our behavior, but is just 'along for the ride'. Neither of them sits well with computationalist ideas---either, we again have a collapse of the notion of computation onto the mere behavior of a system, or what's being computed simply doesn't matter.

Quote:
Originally Posted by wolfpup View Post
Well, no, that conclusion is true only if you assume the need for the aforementioned agency, or interpreter, as a prerequisite for computation, a notion that I rejected from the beginning -- a notion that, if true, would undermine pretty much the whole of CTM and most of modern cognitive science along with it.
I don't assume the interpreter, I pointed out that without one, there's just no fact of the matter regarding what computation a system implements. The argument I gave, if correct, derives the necessity of interpretation in order to associate any given computation to a physical system.

Also, none of this threatens the possibility or utility of computational modeling. This is again just confusing the map for the territory. That you can use an orrery to model the solar system doesn't in any way either imply or require that the solar system is made of wires and gears, and likewise, that you can model (aspects of) the brain computationally doesn't imply that the brain is a computer.

Quote:
So the "computation" it's doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent.
I've pointed out above why this reply can't work. Not only does it do violence to any notion of computation currently in use, trivializing it to merely stating input- and output-sets, but moreover, the 'computationalism' arrived at in this fashion is just identity physicalism.

So if you're willing to go that far to 'defend' computationalism, you end up losing what makes it distinct as a theory of the mind.

Quote:
The fact remains that the fundamental thing that the box is doing doesn't require an observer to interpret, and neither does any computational system. The difference with real computational systems, including the brain, is that there is a very rich set of semantics associated with their inputs and outputs which makes it essentially impossible to play the little game of switcheroo that you were engaging in.
To the contrary---a richer set of behavior makes these games even easier, and the resulting multiplicity of computations associated to a system (combinatorially) larger. The appeal to semantics here is, by the way, fallacious, since what I'm pointing out is exactly that there is no unique semantics attached to any set of symbols and their relations.

Quote:
FTR, I don't claim to have solved the problem of consciousness. However, as you well know, emergent properties are a real thing, and if one is hesitant to say "that's why we have consciousness", we can at least say that emergent properties are a very good candidate explanation of attributes like that which appear to exist on a continuum in different intelligent species to an extent that empirically appears related to the level of intelligence. They are a particularly good candidate in view of the fact that there is not even remotely any other plausible explanation, other than "mystical soul" or "magic".
Emergence is only a contentful notion if you have some candidate properties capable of supporting the emergent properties. Otherwise, you're appealing to magic---something unknown will do we don't know what, and poof, consciousness. That's not a theory, that's a statement of faith. But we've been down this road before, I think.

Last edited by Half Man Half Wit; 05-19-2019 at 05:04 AM.
  #104  
Old 05-19-2019, 05:49 AM
eburacum45
Quote:
I don't assume the interpreter, I pointed out that without one, there's just no fact of the matter regarding what computation a system implements.
Nope. There may be an infinite number of potential computations going on in my laptop, but only one is ontologically privileged: the one which produces the pattern of pixels on the screen. All the others are just garbage. You don't need an 'interpreter' to find out what the outputs of my laptop are - a good camera will do.

All the other hypothetical computations are just noise, nonsense. Hidden in the digits of pi are digits that describe exactly the content of next week's Game of Thrones; that doesn't mean I am going to stare at a circle to try to find this data. (Perhaps I might be better off, the way things are going). Nor am I going to stare at Searle's Wall to find the computation going on inside my own head, or yours.
  #105  
Old 05-19-2019, 05:59 AM
Half Man Half Wit
Quote:
Originally Posted by eburacum45 View Post
Nope. There may be an infinite number of potential computations going on in my laptop, but only one is ontologically privileged: the one which produces the pattern of pixels on the screen.
They all produce the same pattern of pixels. That's the point.
  #106  
Old 05-19-2019, 06:36 AM
Mijin
Quote:
Originally Posted by eburacum45 View Post
Hidden in the digits of pi are digits that describe exactly the content of next week's Game of Thrones; that doesn't mean I am going to stare at a circle to try to find this data.
nitpick: It is not actually known whether the expansion of pi contains all finite sequences of digits.

It's actually a bit of a hole in mathematics: the only numbers we know to be normal are ones that were specifically constructed to be normal. We don't have a general method for deciding whether a given irrational number is normal.
  #107  
Old 05-19-2019, 10:01 AM
wolfpup
Sigh! I think we're now talking past each other, but still, I forge ahead ...

ETA: I've included my original replies for the sake of clarity.

Quote:
Originally Posted by Half Man Half Wit View Post
Quote:
So the "computation" it's doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent.
I've pointed out above why this reply can't work. Not only does it do violence to any notion of computation currently in use, trivializing it to merely stating input- and output-sets, but moreover, the 'computationalism' arrived at in this fashion is just identity physicalism.

So if you're willing to go that far to 'defend' computationalism, you end up losing what makes it distinct as a theory of the mind.
If a mere description of mapping inputs to outputs seems to trivialize the notion of what computing is, the fault is not in my explanation but in the triviality of your example. Computing can be more generally defined, for the purposes of this discussion, as the operation of a set of rules, embodied in the states of the system doing the computing, which transforms a set of input symbols into a set of output symbols. There is nothing "trivial" about this as that's exactly what a Turing machine does. And again, if I want to infer what these rules or states are from the behavior of the system, any inference that correctly predicts the system's behavior is exactly equivalent to any other. It just so happens that in real computing systems of realistic complexity, the existence of such multiple arbitrary interpretations becomes vanishingly improbable.
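To keep that definition concrete, here is a minimal sketch (the rule table, the state names, and the increment task are illustrative choices of mine, not anything from the thread): a fixed set of rules, indexed by internal state and scanned symbol, rewriting an input string of symbols into an output string, which is essentially what a Turing machine does.

Code:
# A minimal Turing-machine-style rule table (states, symbols, and the
# increment task are illustrative assumptions).
RULES = {
    # (state, scanned symbol) -> (symbol to write, head move, next state)
    ('inc', '1'): ('0', -1, 'inc'),   # carry: turn 1 into 0, keep moving left
    ('inc', '0'): ('1',  0, 'halt'),  # absorb the carry and stop
    ('inc', '_'): ('1',  0, 'halt'),  # ran off the left edge: new leading 1
}

def run(tape, head, state='inc'):
    cells = dict(enumerate(tape))            # sparse tape; '_' marks a blank
    while state != 'halt':
        write, move, state = RULES[(state, cells.get(head, '_'))]
        cells[head] = write
        head += move
    return ''.join(cells.get(i, '_') for i in range(min(cells), max(cells) + 1))

print(run('1011', head=3))  # '1100' -- the binary number 11, incremented to 12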

As for "identity physicalism", if your intent is to disparage the mind-brain identity proposition, you may as well remove the "just" in front of it. This is, in my view, one of the central tenets of the computational theory of mind. It's a feature, not a bug!

Quote:
Originally Posted by Half Man Half Wit View Post
Quote:
The fact remains that the fundamental thing that the box is doing doesn't require an observer to interpret, and neither does any computational system. The difference with real computational systems, including the brain, is that there is a very rich set of semantics associated with their inputs and outputs which makes it essentially impossible to play the little game of switcheroo that you were engaging in.
To the contrary---a richer set of behavior makes these games even easier, and the resulting multiplicity of computations associated to a system (combinatorially) larger. The appeal to semantics here is, by the way, fallacious, since what I'm pointing out is exactly that there is no unique semantics attached to any set of symbols and their relations.
Let me explain what I meant by "richer semantics". Instead of a box with switches as inputs and lights as outputs, you have a speech-to-text system. Its input is speech: it is able to distinguish individual words, understand their grammatical context, and resolve ambiguities due to homophones and the like based on context, and it produces flawless English text as output, complete with correct punctuation. Internally, this is just a rules-based system operating on a large array of input symbols to produce a large array of output symbols. But good luck examining its symbol-transforming behavior and coming up with any interpretation other than English speech-to-text processing. And if you did, congratulations, but you've just described a system that is computationally exactly equivalent.

The utility of such a system, of course, relies on a user (an observer), and moreover, an observer who speaks English. But that in no way changes the objective nature of the computation itself, nor is it in any way relevant to its computational specification.

Quote:
Originally Posted by Half Man Half Wit View Post
Quote:
FTR, I don't claim to have solved the problem of consciousness. However, as you well know, emergent properties are a real thing, and if one is hesitant to say "that's why we have consciousness", we can at least say that emergent properties are a very good candidate explanation of attributes like that which appear to exist on a continuum in different intelligent species to an extent that empirically appears related to the level of intelligence. They are a particularly good candidate in view of the fact that there is not even remotely any other plausible explanation, other than "mystical soul" or "magic".
Emergence is only a contentful notion if you have some candidate properties capable of supporting the emergent properties. Otherwise, you're appealing to magic---something unknown will do we don't know what, and poof, consciousness. That's not a theory, that's a statement of faith. But we've been down this road before, I think.
"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth."

Last edited by wolfpup; 05-19-2019 at 10:04 AM.
  #108  
Old 05-19-2019, 11:29 AM
RaftPeople
Quote:
Originally Posted by Half Man Half Wit View Post
I still don't really get what your example has to do with mine. Do you want to say that what conscious state is created isn't relevant, as long as the behavior of the system fits? I.e., that no matter whether I see a tiger planning to jump or hallucinate a bowl of ice cream, I'll be fine as long as I duck?

If so, then again, what you're proposing isn't computationalism, but perhaps some variant of behaviorism or, again, identity physicalism; or maybe an epiphenomenalist notion, where consciousness isn't causally relevant to our behavior, but is just 'along for the ride'. Neither of them sits well with computationalist ideas---either, we again have a collapse of the notion of computation onto the mere behavior of a system, or what's being computed simply doesn't matter.
If we ignore consciousness for a minute and just think in terms of brain states, the point is that just like your box but on a larger scale there could be multiple environments in which the brain states evolve in exactly the same manner and successfully provide the correct responses for survival.

Let's pretend there is an alternate world with two differences:
1 - Light from the sun has a different mix of intensities at different wavelengths so to us things look different
2 - The rods, cones, and RGCs (retinal ganglion cells) in the alien all have shifted sensitivities so that the activation under various conditions matches our cells under comparable conditions (sunny day, cloudy day, etc.)

If we assume everything else about the environment and our alien is the same, then, despite differences in the external environment, the internal brain states would be the same.

Assuming you agree with the hypothetical, would there be any difference in the conscious experience? It seems like the answer must be no because there is no other signal available to create different conscious experience.
  #109  
Old 05-19-2019, 11:36 AM
Triskadecamus
If you assume all elements of a hypothetical are identical then the elements are hypothetically identical. Or, at least they will be until you assume them to be otherwise.
  #110  
Old 05-19-2019, 11:40 AM
RaftPeople
Quote:
Originally Posted by Half Man Half Wit View Post
If so, then again, what you're proposing isn't computationalism, but perhaps some variant of behaviorism or, again, identity physicalism; or maybe an epiphenomenalist notion, where consciousness isn't causally relevant to our behavior, but is just 'along for the ride'. Neither of them sits well with computationalist ideas---either, we again have a collapse of the notion of computation onto the mere behavior of a system, or what's being computed simply doesn't matter.
It's possible that identity physicalism or an epiphenomenalist notion describes the model I was picturing in my head. I was trying to start with the basics (e.g. functional mappings without names attached) and build from there.
  #111  
Old 05-19-2019, 11:50 AM
RaftPeople
Quote:
Originally Posted by wolfpup View Post
But this is not an issue since the premise is false, which is a good thing, because otherwise it would overturn much of cognitive science.
Correct me if I'm wrong, but I don't believe anyone on the planet has actually achieved anything with respect to consciousness other than exploring the pros and cons of lots of different angles (which is not to minimize the effort or intellect involved).

When you state it would overturn much of cognitive science, that sounds like something has actually been conclusively figured out.

Last edited by RaftPeople; 05-19-2019 at 11:51 AM.
  #112  
Old 05-19-2019, 12:03 PM
wolfpup
Quote:
Originally Posted by RaftPeople View Post
Correct me if I'm wrong, but I don't believe anyone on the planet has actually achieved anything with respect to consciousness other than exploring the pros and cons of lots of different angles (which is not to minimize the effort or intellect involved).

When you state it would overturn much of cognitive science, that sounds like something has actually been conclusively figured out.
That's a misreading of what I was saying. What I'm saying is that the blanket rejection of the computational theory of mind being advanced by HMHW would overturn many widely accepted principles of cognition that form a major underpinning of cognitive science. As Fodor has said, CTM is a powerful explanatory device that should not, however, be taken as a complete explanation for all of cognition. Nowhere in this is there any implication that we have a functional understanding of the mechanisms of consciousness. That's not what I was implying at all.
  #113  
Old 05-19-2019, 12:26 PM
RaftPeople
Quote:
Originally Posted by wolfpup View Post
That's a misreading of what I was saying. What I'm saying is that the blanket rejection of the computational theory of mind being advanced by HMHW would overturn many widely accepted principles of cognition that form a major underpinning of cognitive science. As Fodor has said, CTM is a powerful explanatory device that should not, however, be taken as a complete explanation for all of cognition. Nowhere in this is there any implication that we have a functional understanding of the mechanisms of consciousness. That's not what I was implying at all.
Maybe there is a terminology issue:
I believe HMHW was stating objections to the idea that computation can give rise to consciousness. You seem to object to that, but at the same time you agree that CTM hasn't made any concrete progress in describing consciousness (nor has any other theory).

I don't want to put words in HMHW's mouth, but I don't believe he was rejecting the idea that our brains probably perform symbolic processing in some cases, but rather that it's problematic to try to describe how those same processes can give rise to consciousness.
  #114  
Old 05-19-2019, 12:28 PM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
If a mere description of mapping inputs to outputs seems to trivialize the notion of what computing is, the fault is not in my explanation but in the triviality of your example. Computing can be more generally defined, for the purposes of this discussion, as the operation of a set of rules, embodied in the states of the system doing the computing, which transforms a set of input symbols into a set of output symbols.
This is just waffling on the notion of computation. The point still remains: my f and f' are different computations (as again, otherwise, computation collapses to physical evolution, removing everything that makes computationalism a distinct philosophical position), and whether the system implements one or the other depends on how it is interpreted.

After all, the important question is merely: are you able to use my device to compute the sum of two inputs? I hope you'll agree that the answer is yes. And furthermore, are you able to use my device to compute f'? And again, the answer is yes. So, just like that, as far as actually computing stuff goes, it's perfectly clear that the device can be interpreted as computing those functions. Any notion of computation that, for instance, claims that one doesn't compute the sum of two numbers with it, is just a bad one, and really, can only be arrived at by highly motivated reasoning. The device computes the sum (and f') in the same way your pocket calculator does, in the same way your PC does; and that way is all there is to computation.

You're trying to throw more computation at this problem to make it go away, but this can only compound it. I've given the demonstration above: no matter what further computations you add, you run into the same problem, multiplied. The only thing you lose is clarity, which allows you to imagine that maybe, once you can't really quite clearly conceive of everything the system does anymore, something's just gonna happen to fix everything. But the above constitutes proof that this isn't so. If you just pipe the output of one computation into another, that doesn't do anything to fix its interpretation. Adding another won't help, either. And so on. While it's easy to eventually get to systems too unwieldy to clearly imagine, it follows from the simple example by induction that no further complications are going to help, at all---indeed, they can only make matters worse, by introducing more degrees of interpretational freedom.

You consider this to be a trivial example, but that's its main virtue: it shows the problem clearly, unlike the case where you just end up appealing to complexity, claiming that something we can't think of will come along to fix everything.

But it's completely clear from the example that any system that can be considered to implement one computation can, on equal grounds, be considered to implement many others. This is fatal to computationalism, and your refusal to engage the actual argument isn't going to change that.

Anyway, there's one way to demonstrate that you're right: show which computation the device D implements without any need for interpretation.

Quote:
It just so happens that in real computing systems of realistic complexity, the existence of such multiple arbitrary interpretations becomes vanishingly improbable.
Again, the exact opposite is the case. We can consider everything that a modern computer does as a chaining of sub-parts of the form of my device D, or even more simple elements, like individual NAND-gates. The number of distinct computations that the total system can be interpreted to perform is the product of the number of computations each of its sub-parts can be interpreted to perform. The more complex you make the system, the worse the problem gets, as there are more and more functions that the system can be taken to implement.
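To see that combinatorics in a small case, here is a sketch under labelings I've chosen for illustration (the counting at the end just restates the product claim above; it is not a general proof):

Code:
# One physical gate: high/low voltages in, high/low voltage out.
GATE = {
    ('hi', 'hi'): 'lo',
    ('hi', 'lo'): 'hi',
    ('lo', 'hi'): 'hi',
    ('lo', 'lo'): 'hi',
}

def read(gate, one):
    # Read the physical voltage table as a Boolean function under a labeling:
    # 'one' is the voltage level interpreted as logical 1.
    bit = lambda v: 1 if v == one else 0
    return {(bit(a), bit(b)): bit(out) for (a, b), out in gate.items()}

print(read(GATE, one='hi'))  # {(1, 1): 0, (1, 0): 1, (0, 1): 1, (0, 0): 1} -> NAND
print(read(GATE, one='lo'))  # {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0} -> NOR

# On the counting used in this post: if each of n such sub-parts can be read
# in (at least) two ways, a circuit built from them admits up to 2**n readings.
print(2 ** 10)  # 1024 candidate readings for a 10-gate circuit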

Quote:
As for "identity physicalism", if your intent is to disparage the mind-brain identity proposition, you may as well remove the "just" in front of it. This is, in my view, one of the central tenets of the computational theory of mind. It's a feature, not a bug!
Then you either misunderstand computationalism or identity theory. Computationalism was developed as an elaboration on functionalism, which was proposed to counter an attack that (many think) dooms identity physicalism, namely, multiple realizability. A state of mind can't be identical to a neuron firing pattern if the same mental state can be realized in a silicon brain, for example, since a silicon brain's activation pattern and a neuron firing pattern are distinct objects. So you have a contradiction of the form A = B, A = C, but B != C.

To answer this objection, the idea was developed that mental states are identical not to physical, but to functional properties, and, on computationalism, particularly computational functional properties. If computationalism thus collapses onto identity physicalism---which it does, if you strip away the distinction between f and f'---computationalism fails to save physicalism from the threat of multiple realizability.

Quote:
"Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth."
In my experience, people who appeal to this quote typically just confuse the limits of their imagination with the limits of what's possible.

Quote:
Originally Posted by RaftPeople View Post
If we ignore consciousness for a minute and just think in terms of brain states, the point is that just like your box but on a larger scale there could be multiple environments in which the brain states evolve in exactly the same manner and successfully provide the correct responses for survival.
Agreed. Likewise, one could consider my device to react to stimuli via light signals; the switches are a stand-in for a sensory apparatus, and what flips the switches is irrelevant.

But that's not what I'm getting at. Rather, I want to point out that what computation we consider the system to perform is something over and above its stimulus-response behavior; it's purely an interpretational gloss over its physical evolution. And that's all that computation comes down to. As such, it can never provide the footing for mental capacities, as it is itself dependent on them.
  #115  
Old 05-19-2019, 12:30 PM
Half Man Half Wit
Quote:
Originally Posted by RaftPeople View Post
I don't want to put words in HMHW's mouth, but I don't believe he was rejecting the idea that our brains probably perform symbolic processing in some cases, but rather that it's problematic to try to describe how those same processes can give rise to consciousness.
What I'm objecting to is the notion that brains give rise to minds through implementing the right sort of computation---because without mind (or at least, interpretation), there is no fact of the matter regarding which computation any given physical system (including brains) implements.
  #116  
Old 05-19-2019, 01:15 PM
RaftPeople
Quote:
Originally Posted by RaftPeople View Post
If we ignore consciousness for a minute and just think in terms of brain states, the point is that just like your box but on a larger scale there could be multiple environments in which the brain states evolve in exactly the same manner and successfully provide the correct responses for survival.

Let's pretend there is an alternate world with two differences:
1 - Light from the sun has a different mix of intensities at different wavelengths so to us things look different
2 - The rods, cones, and RGCs (retinal ganglion cells) in the alien all have shifted sensitivities so that the activation under various conditions matches our cells under comparable conditions (sunny day, cloudy day, etc.)

If we assume everything else about the environment and our alien is the same, then, despite differences in the external environment, the internal brain states would be the same.

Assuming you agree with the hypothetical, would there be any difference in the conscious experience? It seems like the answer must be no because there is no other signal available to create different conscious experience.
HMHW, I'm curious about your position on this, same or different conscious experience?
  #117  
Old 05-19-2019, 01:21 PM
RaftPeople
Stepping back and looking at the problem in general, I have two conflicting thoughts:
1 - Every single theory proposed so far seems to have fatal flaws and that is after significant effort by really smart people. This kind of points towards the answer not being based on logic and math.

2 - But, there have been math problems that took centuries to solve, maybe this is one of them.
  #118  
Old 05-19-2019, 01:28 PM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
... Again, the exact opposite is the case. We can consider everything that a modern computer does as a chaining of sub-parts of the form of my device D, or even more simple elements, like individual NAND-gates. The number of distinct computations that the total system can be interpreted to perform is the product of the number of computations each of its sub-parts can be interpreted to perform. The more complex you make the system, the worse the problem gets, as there are more and more functions that the system can be taken to implement.
Before I respond more comprehensively to some of the other points you raise, I'm curious as to why you didn't respond directly to my speech-to-text system example, as I think it directly contradicts this claim. Contrary to what you state, more complex systems that have purposeful computations are more and more constrained to producing those outputs -- and only those outputs -- that serve the intended purpose. Let me reiterate that here.

That it's possible to have multiple interpretations of the results of your box with switches and lights and apparently impossible to have such multiple interpretations of the computations of an advanced speech-to-text system is absolutely NOT a matter of obfuscation or difficulty; it reflects a qualitative change where the property of the computation itself has become intrinsically fixed. And when I refer to the "system", this must be taken to mean the system in its holistic entirety. It is absolutely irrelevant that you can play this game within individual low-level sub-components like logic gates or even small program modules, and then declare the entire thing to be therefore the product of a large number of arbitrary interpretations! As the complexity of a computing system grows, its qualitative attributes change in fundamental ways, and they can't necessarily be simplistically inferred from its smaller components. This critical principle is embodied in concepts like synergy and emergent properties.

Incidentally, my interest in this matter is not abstract philosophical debate but support for CTM and its implication of multiple realizability and thus for the premise that most of (at this point I'll settle for "most of" rather than "all of") the functions of the human brain can be and will be implemented in digital computers. There are many theorists who not only claim that "all of" is appropriate, but that intelligent machines will exceed human capabilities in a general-purpose fashion. I see no reason to doubt them.
  #119  
Old 05-19-2019, 01:46 PM
Half Man Half Wit
Quote:
Originally Posted by RaftPeople View Post
HMHW, I'm curious about your position on this, same or different conscious experience?
Two brains that don't differ physically don't differ regarding the experience they produce (or at least, I have no reason to think they should, and lots of reasons to think the notion would be incoherent).

Quote:
Originally Posted by wolfpup View Post
Before I respond more comprehensively to some of the other points you raise, I'm curious as to why you didn't respond directly to my speech-to-text system example, as I think it directly contradicts this claim.
It introduces nothing new, quite simply. It's possible to map each grammatically correct sentence in English to a grammatically correct sentence in another language having a different meaning, while keeping the relations between sentences intact (such as, which sentence would be a reasonable answer to what question). So somebody speaking that other language would converse with the English text-producing system in their language about something entirely different than an English-speaking person, while exchanging the same symbolic vehicles with it.

The reason that this is possible is that a (single-language) dictionary is just a set of relations; so 'dog' might get explained with terms like 'furry', 'four-legged', 'animal', and so on. So you merely need to map 'dog' to some other term, as well as the explanative terms, and the ones explaining those, and so on; this will keep the web of relations the same, while changing the referents of the individual words.

You can explicitly write this down for simple enough languages---indeed, that's exactly what my example above amounts to. But moving to a more complicated language doesn't add anything essentially new; the web of relations gets more complex, a larger graph, but what the nodes stand for still isn't determined by the connections.
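A toy version of that relabeling (the mini-dictionary and the substituted words are invented purely for illustration): permute the whole vocabulary consistently and every relational fact survives, even though each word now points at something else.

Code:
# A tiny 'dictionary' as a web of relations: each term maps to the terms
# used to explain it.
DICTIONARY = {
    'dog':         {'furry', 'four-legged', 'animal'},
    'cat':         {'furry', 'four-legged', 'animal'},
    'furry':       {'animal'},
    'four-legged': {'animal'},
    'animal':      set(),
}

# An arbitrary but consistent relabeling of the whole vocabulary.
SWAP = {'dog': 'teapot', 'cat': 'galaxy', 'furry': 'humming',
        'four-legged': 'loud', 'animal': 'purple'}

relabeled = {SWAP[term]: {SWAP[r] for r in related}
             for term, related in DICTIONARY.items()}

print(sorted(DICTIONARY['dog']))    # ['animal', 'four-legged', 'furry']
print(sorted(relabeled['teapot']))  # ['humming', 'loud', 'purple']
# The web of relations is identical in shape; only the labels on the nodes
# changed, so the structure alone never settles what any node refers to.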
  #120  
Old 05-19-2019, 03:19 PM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
It introduces nothing new, quite simply.
You appear to be coy about directly answering the challenge. Once again you've quoted a small part of what I said but not the meat of it.

To repeat and summarize, the challenge is that you claim that your trivially simple box example suffices as proof of the multiple-interpretations thesis, and that much more complex systems are "even worse" from my standpoint because they have, in effect, a very large number of such boxes, each of which is performing computations subject to equally arbitrary interpretations. I'm not saying that I'm right and you're wrong on what may ultimately be an article of faith, but I am saying that this particular argument is deeply flawed.

Again, you ignore the very important point that great increases in complexity result in qualitative (not just quantitative) changes in the properties of computational systems. We call these qualitative changes things like synergy and emergent properties. It isn't magic, though. What's missing from your analysis is any acknowledgment of the tremendous computational implications of the interconnections and data flows between these components, none of which is apparent from any observation of the components themselves, but is only visible when viewing the behavior of the system as a whole. It is here that we observe non-trivial constraints on the range of interpretations of what the computing system is actually doing, to the point that the set of possible interpretations may equal exactly one.

I know I certainly don't need to lecture you about the broader fallacies that arise from extending an understanding of trivial components to assumptions about much more complex computational systems, but I can't help but reflect on how this is what led many to proclaim that "computers can't really think" and "they can only do what they're programmed to do", and consequently led Hubert Dreyfus to claim that no computer would ever be able to play better than a child's level of chess. This claim was put to rest when the PDP-10 MacHack program beat him badly way back in 1967*, and we all know the evolution of chess programs to grandmaster status today. Damn, those programmers must be good chess players!

--------

* I had to look up the year because I'd forgotten. Which was when I discovered that Dreyfus had passed away in 2017. I guess I'll have to stop saying nasty things about him now. Pity that he'll never see the amazing things he claimed could never happen.
  #121  
Old 05-19-2019, 04:19 PM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
You appear to be coy about directly answering the challenge.
Whereas you just flat ignore my arguments.



Quote:
I'm not saying that I'm right and you're wrong on what may ultimately be an article of faith, but I am saying that this particular argument is deeply flawed.
If that's the case, then you should be able to point out its flaws. But instead, what you're doing is basically assuming it's wrong, somehow, because something might occur to make it wrong, somehow.

Your position essentially amounts to an unfalsifiable hope. No matter what I say, how many systems I show where it's patently obvious that their computational interpretation isn't fixed, you can always claim that just beyond that, stuff will start happening.

But without any positive argument to that end, you simply haven't made any contentful claim at all. For any example of emergence, you can always point to the microscopic properties grounding the emergent properties. The hydrogen bonds leading to water's fluidity. The rules any member of a flock follows to generate large-scale flocking behavior.

You offer nothing of the sort; you point to emergence and complexity as essentially magically bringing about what you need. But there's no reason to believe it will, beyond faith.

On the other hand, I have given reason to believe that computation is not an objective notion---by example. It's also straightforward to show how making the system more complex leads to increasing the underdetermination: add a switch, and the number of possible interpretations will grow as the number of possible ways to associate inputs with outputs, with no mitigation in sight.


I acknowledge that emergence is a powerful notion. But it's not a one-size-fits-all magical problem solver. The properties at the bottom level determine those at the higher levels; anything else is just magical thinking. Any claim towards the emergence of mind must thus be bolstered with at least a candidate property that might give rise to the mental. Absent this, there simply is no challenge for me to meet, because you've just failed to make any contentful claim whatever.
  #122  
Old 05-19-2019, 05:13 PM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
If that's the case, then you should be able to point out its flaws ...

... But without any positive argument to that end, you simply haven't made any contentful claim at all. For any example of emergence, you can always point to the microscopic properties grounding the emergent properties. The hydrogen bonds leading to water's fluidity. The rules any member of a flock follows to generate large-scale flocking behavior.
First, a preface. I've had a number of discussions with you and enjoyed all of them, and I've learned a lot, particularly about quantum mechanics. And for that, thank you.

Now on this, I have pointed out its flaws, several times, and the second part of that quote is just flat-out wrong. You're describing the phenomenon generally referred to as weak emergence -- properties that may or may not be inferrible from those of the constituent components. Now, while I may not agree with David Chalmers on various issues, at least he has his definitions right on strong emergence [PDF, emphasis mine]: "We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain."

I would then point out the example of an electronic calculator that is built from logic gates. It performs calculations, but it's not hard to show that there is nothing about this device that is even remotely "intelligent" in any meaningful sense of the word. It does arithmetic. One could even posit creative multiple interpretations of its results that are other than arithmetic. It's a lot like your box with lights and switches.

But in a very true and fundamental sense, systems like Deep Blue and subsequent chess champions were built out of the same kinds of logical devices. So was Watson, the Jeopardy champion. And they are fundamentally and qualitatively different systems from any calculator. Would you like to posit an alternative interpretation of Watson's computational results? You can't, not because it's such a complex system that it's hard to do, but because it's qualitatively a different kind of system entirely.

Last edited by wolfpup; 05-19-2019 at 05:18 PM.
  #123  
Old 05-19-2019, 05:35 PM
Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
First, a preface. I've had a number of discussions with you and enjoyed all of them, and I've learned a lot, particularly about quantum mechanics. And for that, thank you.

Now on this, I have pointed out its flaws, several times, and the second part of that quote is just flat-out wrong. You're describing the phenomenon generally referred to as weak emergence -- properties that may or may not be inferrible from those of the constituent components. Now, while I may not agree with David Chalmers on various issues, at least he has his definitions right on strong emergence [PDF, emphasis mine]: "We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain."

I would then point out the example of an electronic calculator that is built from logic gates. It performs calculations, but it's not hard to show that there is nothing about this device that is even remotely "intelligent" in any meaningful sense of the word. It does arithmetic. One could even posit creative multiple interpretations of its results that are other than arithmetic. It's a lot like your box with lights and switches.

But in a very true and fundamental sense, systems like Deep Blue and subsequent chess champions were built out of the same kinds of logical devices. So was Watson, the Jeopardy champion. And they are fundamentally and qualitatively different systems from any calculator. Would you like to posit an alternative interpretation of Watson's computational results? You can't, not because it's such a complex system that it's hard to do, but because it's qualitatively a different kind of system entirely.
Strong emergence is a very contentious notion, and to be honest, having to appeal to it rather weakens your position. While it's true that a strongly emergent property can't even in principle be inferred from lower-level properties, that also means that knowledge of the lower-level properties can never yield sufficient reason for belief in strongly emergent features---so we're back to faith.

On the whole, the main idea behind computationalism and other physicalist ideas is essentially a rejection of such notions. So until you can point to any example of strong emergence (and no computer ever will yield one, since the breaking down of their large-scale 'intelligent' behavior into elementary logical operations is kind of their point, and their very computational nature entails the deducibility of this behavior from the lower level), the default position ought to be a strong skepticism. I tend to agree with this:

Quote:
Originally Posted by Mark Bedau
Although strong emergence is logically possible, it is uncomfortably like magic. How does an irreducible but supervenient downward causal power arise, since by definition it cannot be due to the aggregation of the micro-level potentialities? Such causal powers would be quite unlike anything within our scientific ken. This not only indicates how they will discomfort reasonable forms of materialism. Their mysteriousness will only heighten the traditional worry that emergence entails illegitimately getting something from nothing.

Last edited by Half Man Half Wit; 05-19-2019 at 05:39 PM.
  #124  
Old 05-19-2019, 05:45 PM
Half Man Half Wit
From the paper of Chalmers you linked to: "Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete."

I tend to want to keep the option of physicalism alive. If you call yourself a computationalist, I would have expected that so do you. So do you believe that Deep Blue falsifies physicalism?

Last edited by Half Man Half Wit; 05-19-2019 at 05:45 PM.
  #125  
Old 05-19-2019, 06:42 PM
wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
From the paper of Chalmers you linked to: "Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete."

I tend to want to keep the option of physicalism alive. If you call yourself a computationalist, I would have expected that so do you. So do you believe that Deep Blue falsifies physicalism?
Of course not. And Chalmers is not saying that strong emergence is a rejection of physicalism, he's saying it undermines it as a complete description, meaning AIUI that there arises the possibility of some system that is identical to another system in all physical respects, yet differs from it in some observable functional/behavioral aspect. Not being a believer in magic or mysticism, I think this is nonsense. Each and every behavioral aspect, whether in a human or a machine, has a corresponding physical mental or computational state.

That state, however, might not be found in any of its discrete components. It might only be found in some vague network of interconnections between distant neurons, or the data paths between software modules, or new data structures that the system itself developed, any of which might have been dynamically established (the latter perhaps in a manner unknown and unpredicted by the designers). Actually, a very simple example of this with Watson was simply the result of its extensive training exercises. In a real sense, no one fully understood what the hell was going on in there in terms of the detailed evolution of its database as it was being trained, but the system was gradually getting smarter.

I note BTW that Chalmers also cites consciousness as the only known example (in his view) of strong emergence. I'll leave you guys to fight that out, since you objected to that idea so strongly!
  #126  
Old 05-20-2019, 12:27 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by wolfpup View Post
Of course not. And Chalmers is not saying that strong emergence is a rejection of physicalism, he's saying it undermines it as a complete description, meaning AIUI that there arises the possibility of some system that is identical to another system in all physical respects, yet differs from it in some observable functional/behavioral aspect. Not being a believer in magic or mysticism, I think this is nonsense.
Well, you can't have your cake and eat it. Either, the physical facts suffice to determine all the facts about a system: then, there's no strong emergence. Or, they don't: then, physicalism is wrong.

Quote:
Actually a very simple example with Watson was simply the result of its extensive training exercises. In a real sense, no one fully understood what the hell was going on in there in terms of the detailed evolution of its database as it was being trained, but the system was gradually getting smarter.
Computers are essentially paradigm examples of weak emergence (so much so that it's often defined in terms of what a computer simulation of a system includes). Witness Bedau's definition of weak emergence in a system S (original emphasis):
Quote:
Originally Posted by Mark Bedau
Macrostate P of S with microdynamic D is weakly emergent iff P can be derived from D and S's external conditions but only by simulation.
All computers ever do is to deduce higher-level facts (their behavior) from lower-level facts (their programming). You could print out Watson's machine code, and everything it does follows from those instructions; and, while no human being is likely smart enough to perform the derivation, a sufficiently advanced intellect (think Laplace's demon) would have no trouble at all to predict how Watson reacts in every situation. The very fact that Watson is a computer ensures it to be so, as it entails that there's another computer capable of simulating Watson.
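
To make Bedau's definition concrete, here's a toy sketch of my own in Python (nothing to do with any actual brain or with Watson, obviously): the microdynamic is an elementary cellular automaton rule, and a 'macrostate' fact such as the state of one cell many steps later is fully fixed by that rule plus the initial conditions, but, so far as anyone knows, you only get at it by actually running the steps:

Code:

# Toy illustration of weak emergence (my own sketch, not from Bedau's paper):
# the microdynamic is elementary cellular automaton rule 110; whether a given
# cell is on at a later step is fixed by the rule plus the initial row, but
# (as far as anyone knows) you only find out by simulating.

RULE = 110        # the update rule, encoded as 8 bits
WIDTH = 81
STEPS = 80

def step(cells):
    """Apply the rule once to a whole row (wrapping at the edges)."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]
        mid = cells[i]
        right = cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (mid << 1) | right
        out.append((RULE >> neighborhood) & 1)
    return out

row = [0] * WIDTH
row[WIDTH // 2] = 1            # a single 'on' cell in the middle

for _ in range(STEPS):
    row = step(row)

# A macro-level fact, derived from the microdynamic only by simulation:
print("cell 40 on at step 80?", bool(row[40]))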

So computationalism can never include strong emergence. That would mean to both believe that a computer could simulate a brain, leading to conscious experience, and that a computer simulation of a brain would lack certain aspects of a real mind (the strongly emergent ones).

Quote:
Originally Posted by wolfpup View Post
I note BTW that Chalmers also cites consciousness as the only known example (in his view) of strong emergence. I'll leave you guys to fight that out, since you objected to that idea so strongly!
I have no qualms with Chalmers; he puts forward a consistent position by acknowledging the rejection of physicalism his belief in strong emergence entails. I still think he's wrong, but there's a substantial question to whether he is.

Last edited by Half Man Half Wit; 05-20-2019 at 12:27 AM.
  #127  
Old 05-20-2019, 01:24 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,742
I was thinking about functions other than consciousness performed by the brain and I'm wondering how they fit in with this argument against computation being the basis of consciousness.

For example, consider circadian rhythm computations:
Relative to its environment (sensory input, other functional components of the brain), it serves a specific purpose. Despite the fact that your box example applies when this function is viewed in isolation, when viewed relative to its surrounding environment, the specific purpose becomes realized.

Why is consciousness different than the circadian rhythm function?

Can't we say that consciousness is just a particular transformation relative to its surrounding environment (the inputs into the consciousness function and the outputs from the function)?
  #128  
Old 05-20-2019, 01:58 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by Half Man Half Wit View Post
Well, that's not quite what I claimed (but I try to make the argument more clearly below). However, you're already conceding the most important element of my stance---that you need an external agency to fix what a system computes. Let's investigate where that leads.
The conversation has ranged pretty far, but suffice to say that my biggest problem is one of terms. Specifically you are using lots of terms that don't mean the same thing to me that they appear to mean to you. It seems entirely possible that these are terms of art that are well understood within the discipline but that I, the layman, have never encountered. Of course the heavy use of such terms without defining or explaining them ensures that I'm staying a layman.

Take "external agency". As far as I know I never said anything about an external agency. I'm saying that the cognitive calculation itself understands the symbols it uses, and that the way it processes those symbols is largely or wholly deterministic based on the state of the cognitive process itself at that moment in time.

And that fact means that you can copy a cognition even if you don't have any idea how it works, simply by ensuring that your copy has the same cognitive calculation operating on comparable hardware with an identical copy of the 'symbols'. You may not have any idea what those symbols you just copied over mean, but the copy of the cognitive process knows what to do with them, because it's an identical copy of the original cognitive process that knows what to do with them.

It's worth noting that it doesn't matter how the cognition works - so long as you accurately copy the cognition process and memory state, the copy will 'work'. You have to copy the whole cognition process and memory state, of course - if the person in question has a soul and you forget to copy the soul then your copy will fail to operate properly to the degree the soul was necessary for operation. But as long as you copy all the parts and memory correctly you're all good.
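
For what it's worth, here's a bare-bones sketch of what I mean in Python (a made-up toy 'cognition', obviously nothing like a real brain): the copier never interprets a single symbol, it just duplicates the process and its memory state, and from then on the copy behaves identically:

Code:

import copy

# A toy 'cognition': opaque memory state plus a deterministic update rule.
# The copier below has no idea what the state means; it just duplicates it.

class ToyCognition:
    def __init__(self, state):
        self.state = state                    # the 'symbols' / memory

    def think(self, stimulus):
        # Deterministic: the next state depends only on the current state
        # and the input; the return value stands in for outward behavior.
        self.state = hash((self.state, stimulus)) & 0xFFFF
        return self.state % 7

original = ToyCognition(state=12345)
original.think("coffee")                      # let it run for a while first

clone = copy.deepcopy(original)               # blind, exact copy

# Identical inputs now produce identical behavior, even though the copier
# never understood any of the 'symbols' it copied.
for stimulus in ["red tennis shoes", "chess", "lunch"]:
    assert original.think(stimulus) == clone.think(stimulus)
print("copy behaves identically")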

You do seem to be very interested in talking about reverse-engineering the cognition and the difficulties in doing so, and while that's certainly an interesting topic, this thread is specifically about copying cognition. And you don't have to understand how something works to copy it, as long as you know enough to not fail to copy over something important.
  #129  
Old 05-20-2019, 02:04 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by RaftPeople View Post
Despite the fact that your box example applies when this function is viewed in isolation, when viewed relative to its surrounding environment, the specific purpose becomes realized.
If I understand you correctly, then this reply doesn't work: what matters with respect to the environment is what I've earlier on called the stimulus-response behavior; but that's just given by a line of causality connecting both. The computation, such as it is, doesn't change it, but---again---is merely an interpretive gloss over and above the physical behavior.

So in my box example, stimuli come in the form of switch flips, and responses are given in the form of lights either lighting up or not. No matter what computation I take the system to perform, this stimulus-response behavior remains the same; so the interaction with the environment is blind towards anything else.

The same goes for things like the circadian rhythm. Nothing is, ultimately, being computed; it's merely the physical evolution in response to causal factors that triggers, say, time-sensitive hormone release. Picture one of these old wind-up kitchen timers: they don't compute the time to ring, the spring merely winds down after a certain time (response), provided it's been wound up (stimulus).
  #130  
Old 05-20-2019, 02:11 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,742
Quote:
Originally Posted by Half Man Half Wit View Post
If I understand you correctly, then this reply doesn't work: what matters with respect to the environment is what I've earlier on called the stimulus-response behavior; but that's just given by a line of causality connecting both. The computation, such as it is, doesn't change it, but---again---is merely an interpretive gloss over and above the physical behavior.

So in my box example, stimuli come in the form of switch flips, and responses are given in the form of lights either lighting up or not. No matter what computation I take the system to perform, this stimulus-response behavior remains the same; so the interaction with the environment is blind towards anything else.

The same goes for things like the circadian rhythm. Nothing is, ultimately, being computed; it's merely the physical evolution in response to causal factors that triggers, say, time-sensitive hormone release. Picture one of these old wind-up kitchen timers: they don't compute the time to ring, the spring merely winds down after a certain time (response), provided it's been wound up (stimulus).
Understood.


But, how do we know consciousness isn't just a set of transformations that happen to help the machine successfully respond to its environment, in the same way that the circadian rhythm function does? It may feel to us like it's something more, but maybe it's not.
  #131  
Old 05-20-2019, 02:16 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by begbert2 View Post
Take "external agency". As far as I know I never said anything about an external agency. I'm saying that the cognitive calculation itself understands the symbols it uses, and that the way it processes those symbols is largely or wholly deterministic based on the state of the cognitive process itself at that moment in time.
In that case, I believe I misunderstood you. I thought that the 'brain' you were referring to was that of an external observer, who interprets the lights (as brains typically don't have lights). But I understand now that you intended to use 'lights' metaphorically, for whatever symbolic vehicles the brain itself carries (right?).

Quote:
And that fact means that you can copy a cognition even if you don't have any idea how it works, simply by ensuring that your copy has the same cognitive calculation operating on comparable hardware with an identical copy of the 'symbols'. You may not have any idea what those symbols you just copied over mean, but the copy of the cognitive process knows what to do with them, because it's an identical copy of the original cognitive process that knows what to do with them.
However, this reply simply won't work, either. If it's the computation that's supposed to fix what the brain computes, then we have a chicken-and-egg problem: for a brain to give rise to a mind, it must, on computationalism, implement some computation M. I have now argued that whether a brain (or any physical system) implements a computation is not an objective property of that brain, and thus, subject to interpretation.

To that, you reply (if I understand you correctly) that so what, to the mind produced by the brain, what computation is being performed is perfectly definite, it's just that an outside observer can't tell which. But that's circular: in order for the mind to fix the computation like that, it would first have to be the case that the brain, indeed, gives rise to that mind; but for that, it must implement computation M. So before it can be the case that the 'cognitive calculation' itself understands the symbols it uses, it must be the case that the brain performs that 'cognitive calculation' (i. e., implements M). So your reply appeals to the brain implementing M in order to argue that it implements M.

Quote:
You do seem to be very interested in talking about reverse-engineering the cognition and the difficulties in doing so, and while that's certainly an interesting topic, this thread is specifically about copying cognition. And you don't have to understand how something works to copy it, as long as you know enough to not fail to copy over something important.
The thread, as I understand it, is about a specific kind of copying, namely, via download; this implies the instantiation of the mind within a computer. This, however, is not possible.

I have no qualms with an exact physical replica of my brain being conscious in the same way as I am, if that's what you're saying.
  #132  
Old 05-20-2019, 02:18 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by RaftPeople View Post
Understood.


But, how do we know consciousness isn't just a set of transformations that happen to help the machine successfully respond to its environment, in the same way that the circadian rhythm function does? It may feel to us like it's something more, but maybe it's not.
That's a different claim from the one computationalism makes, though. I'm not entirely sure what kind of claim it is---what, exactly, do you mean by consciousness being 'a set of transformations'? To me, consciousness is far more a certain way of being, namely, one where it is like something to be me, where I have qualitative subjective experience. What kind of transformations are you thinking of?
  #133  
Old 05-20-2019, 02:25 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,150
Some parting thoughts (maybe).

Quote:
Originally Posted by Half Man Half Wit View Post
Well, you can't have your cake and eat it. Either, the physical facts suffice to determine all the facts about a system: then, there's no strong emergence. Or, they don't: then, physicalism is wrong.
I disagree, but then I probably have a different notion of "emergence" than philosophers like Chalmers. A system can certainly have properties that are not present in any of its components yet are still embodied in its physicality. One simply posits that such properties arise from the arrangement of those components, meaning the interconnections between them, and indeed that's the only place that real emergent properties *can* reside. This arrangement may be by design, or it may be a product of the system's own self-configuration.

This is roughly what happens when logic gates are assembled into a digital computer. The business of being able to "infer" from component properties what the properties of the aggregate system will be is really rather nebulous and arbitrary, and consequently so is the distinction between weak and strong emergence, IMO. One might readily infer that since logic gates switch signals according to logical rules, it's reasonable to expect that the resulting system would be an ace at doing binary arithmetic. But is it reasonable to infer on that same basis that those same logic gates would be the foundation for a system capable of playing grandmaster-level chess, long believed to be the exclusive domain of a high caliber of human intelligence? Or one that would beat Ken Jennings at Jeopardy? If so, Hubert Dreyfus and a whole following of like-minded skeptics sure as hell didn't infer it!
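
Just to make the logic-gate point concrete, here's a toy sketch of my own in Python (standard textbook constructions, nothing exotic): no individual NAND gate "adds" anything; the ability to add lives entirely in how the gates are wired together:

Code:

# Binary addition built from nothing but NAND gates. No single gate 'adds';
# the capability lives entirely in the arrangement of the gates.

def nand(a, b):
    return 1 - (a & b)

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    s = xor(a, b)
    return xor(s, carry_in), or_(and_(a, b), and_(s, carry_in))

def add(x_bits, y_bits):
    """Ripple-carry addition of two little-endian bit lists."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 6 + 3 = 9: [0,1,1] + [1,1,0] -> [1,0,0,1] (little-endian)
print(add([0, 1, 1], [1, 1, 0]))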


Quote:
Originally Posted by Half Man Half Wit View Post
All computers ever do is to deduce higher-level facts (their behavior) from lower-level facts (their programming). You could print out Watson's machine code, and everything it does follows from those instructions; and, while no human being is likely smart enough to perform the derivation, a sufficiently advanced intellect (think Laplace's demon) would have no trouble at all to predict how Watson reacts in every situation. The very fact that Watson is a computer ensures it to be so, as it entails that there's another computer capable of simulating Watson.

So computationalism can never include strong emergence. That would mean to both believe that a computer could simulate a brain, leading to conscious experience, and that a computer simulation of a brain would lack certain aspects of a real mind (the strongly emergent ones).
The conclusion here, taking into account all that it implies, is so wrong in my view that it doesn't seem sufficient to say that I disagree with it; I respectfully have to say that I'm just astounded that you're saying it. Wrapped up in that statement -- some of which I extrapolate from your earlier claims -- appear to be the beliefs that (a) nothing (or at least nothing of cognitive significance) in the brain is computational, (b) a computer can never simulate a brain, and (c) a computer can never exhibit self-awareness (consciousness). All of which are wrong, in my view, though they are increasingly arguable. But the first of those, if taken seriously, is a flippant dismissal of the entirety of the computational theory of cognition, one that Fodor has described as "far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion".

And the basis for your bizarre conclusion appears to be the belief that any computation requires an external agent to fix an interpretation -- a belief that I maintain has already been disproved by the simple fact that all interpretations that are consistent with the computational results are all exactly equivalent. The claim that the mind cannot be computational because of the "external agent" requirement is a futile attempt to parallel the homunculus argument as it's sometimes applied to the theory of vision. Clearly, however, vision is actually a thing, so somewhere along the line the attempt to prove a fallacy has gone off the rails. Likewise with your claim about the computational aspects of cognition. It's exactly the homunculus fallacy, and it's a fallacy because computational results are intrinsically objective -- that is to say, they are no more and no less intrinsically objective than the semantics attached to the symbols.

Your first paragraph here is also frustrating to read. It is, at best, just one step removed from the old saw that "computers can only do what they're programmed to do", which is often used to argue that computers can never be "really" intelligent like we are. That's right, in a way: the reality is that computers can be a lot more intelligent than we are! The fact that in theory a sufficiently advanced intellect or another computer, given all the code and the data structures (the state information) in Watson, could in fact predict exactly what Watson would do in any situation is true, but it's also irrelevant as a counterargument to emergence because it's trivially true: all it says is that Watson is deterministic, and we already knew that.

But here's the kicker: I would posit that exactly the same statement could be made in principle about the human brain. In any given situation and instant in time one could in theory predict exactly how someone will respond to a given stimulus. There's merely a practical difficulty in extracting and interpreting all the pertinent state information. Unless you don't believe that the brain is deterministic -- but that would be an appeal to magic. This is aside from issues of random factors affecting synaptic potential, and changes therein due to changes in biochemistry, and all the other baggage of meat-based logic. But those are just issues of our brains being randomly imperfect. That a defective computer may be unpredictable is neither a benefit nor an argument against computational determinism.

A final point here, for the record, is that in retrospect the digression about strong emergence was a red herring. The kinds of things I was talking about are better described as synergy, which is synonymous with weak emergence, if one wants to bother with the distinction at all. The impressive thing about Watson is not particularly the integration of its various components -- speech recognition, query decomposition, hypothesis generation, etc. -- as these are all purpose-built components of a well-defined architecture. The impressive thing is how far removed the system is from the underlying technology: the machine instructions, and below that, the logic gates inside the processors. The massively parallel platform on which Watson runs is very far removed from a calculator, yet in principle it's built from exactly the same kinds of components.

The principle here is, as I said earlier, that a sufficiently great increase in the quantitative nature of a system's complexity leads to fundamental qualitative changes in the nature of the system. Among other things, dumb, predictable systems can become impressive intelligent problem-solvers. This, in my view, is the kind of emergence that accounts for most of the wondrous properties of the brain, and not some fundamentally different, mysterious processes.

Consciousness is very likely just another point on this continuum, but we've thrashed that one to death. Marvin Minsky used to say that consciousness is overrated -- that what we think of as our awesome power of self-awareness is mostly illusory. Indeed, we obviously have almost no idea at all of what actually goes on inside our own heads. IMHO Chalmers' physicalism arguments about it are just philosophical silliness. Where does consciousness reside? Nowhere. It's just a rather trivial consequence of our ability to reason about the world.

Last edited by wolfpup; 05-20-2019 at 02:25 PM.
  #134  
Old 05-20-2019, 03:46 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,150
Quote:
Originally Posted by wolfpup View Post
A system can certainly have properties that are not present in any of its components yet are still embodied in its physicality. One simply posits that such properties arise from the arrangement of those components, meaning the interconnections between them, and indeed that's the only place that real emergent properties *can* reside.
Just want to correct rather a whopper of an omission there. That should say "One simply posits that such properties arise from the arrangement of those components, meaning the interconnections between them, or in the states in and among them, and indeed that's the only place that real emergent properties *can* reside".
  #135  
Old 05-20-2019, 03:51 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by Half Man Half Wit View Post
In that case, I believe I misunderstood you. I thought that the 'brain' you were referring to was that of an external observer, who interprets the lights (as brains typically don't have lights). But I understand now that you intended to use 'lights' metaphorically, for whatever symbolic vehicles the brain itself carries (right?).
Pretty much. My argument is that the brain can interpret its own symbols, and an exact copy of that brain can interpret those same symbols with the same effects.

Quote:
Originally Posted by Half Man Half Wit View Post
However, this reply simply won't work, either. If it's the computation that's supposed to fix what the brain computes, then we have a chicken-and-egg problem: for a brain to give rise to a mind, it must, on computationalism, implement some computation M. I have now argued that whether a brain (or any physical system) implements a computation is not an objective property of that brain, and thus, subject to interpretation.

To that, you reply (if I understand you correctly) that so what, to the mind produced by the brain, what computation is being performed is perfectly definite, it's just that an outside observer can't tell which. But that's circular: in order for the mind to fix the computation like that, it would first have to be the case that the brain, indeed, gives rise to that mind; but for that, it must implement computation M. So before it can be the case that the 'cognitive calculation' itself understands the symbols it uses, it must be the case that the brain performs that 'cognitive calculation' (i. e., implements M). So your reply appeals to the brain implementing M in order to argue that it implements M.
Why does the symbol '3' refer to the number three? It just does. It's the symbol we've collectively chosen. I have to appeal to the system of arabic numbering in order to argue what the system of arabic numbering is, because I have no other cite for the definition of '3' besides the definition of '3'. Yes I know that there are historical reasons that the drawing of a butt means 'three', but I don't know those historical reasons and that doesn't prevent me from using 3 as a symbol.

A given mind, a given cognition, knows what it means by a given symbol/encoding. Let's call that cognition M. M happens to function in such a way that red tennis shoes are encoded with a specific code which I'll refer to as RTS. Other cognitions might interpret the RTS code differently, perhaps interpreting it to mean 'real-time strategy game', but that's not a problem for cognition M - M knows that RTS means 'red tennis shoes'. Why does RTS mean red tennis shoes? Because it does. That's the symbol M uses for that.
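
If it helps, here's the world's dumbest sketch of that in Python (names made up, obviously): the same token, two lookup tables, two meanings, and each cognition's own table is the only one that matters to it:

Code:

# The same token, two interpreters, two meanings. Neither reading is the
# 'objectively correct' one; each cognition's own table settles it for itself.

cognition_m = {"RTS": "red tennis shoes"}
some_other_mind = {"RTS": "real-time strategy game"}

token = "RTS"
print("M reads it as:         ", cognition_m[token])
print("Other mind reads it as:", some_other_mind[token])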

There's absolutely no part of this that I perceive to be a problem. If you are seeing a problem here, then I posit either you are talking about something different than this, or there's a problem with your logic.


Quote:
Originally Posted by Half Man Half Wit View Post
The thread, as I understand it, is about a specific kind of copying, namely, via download; this implies the instantiation of the mind within a computer. This, however, is not possible.

I have no qualms with an exact physical replica of my brain being conscious in the same way as I am, if that's what your saying.
Of course it's possible, theoretically speaking. To say otherwise is absurd, because theoretically speaking you can emulate a model of reality itself (or at least a local chunk of it) within the computer. You can posit that you have all the memory and processing power you need to emulate, say, a 10'x10'x10' room at the level of the physical behavior of the elementary physical particles. And that 10'x10'x10' room could include within it a copy of you.

Which means that, theoretically speaking, you absolutely can create an exact physical replica of your brain within the simulation within the computer. So you bet your bunions that your cognition is digitizable.
  #136  
Old 05-20-2019, 03:53 PM
BwanaBob's Avatar
BwanaBob is offline
Member
 
Join Date: Feb 2003
Location: Maryland
Posts: 4,247
Quote:
Originally Posted by Chronos View Post
BwanaBob, you already lack that continuity. Does going to sleep every night terrify you?
No. It's not the same thing. There is a continuity of brain wave activity. That is satisfactory for me. If you're saying that the transfer could happen during "sleep" I'd balk.
__________________
Go wherever you can be
And live for the day
It's only wear and tear
-IQ
  #137  
Old 05-20-2019, 05:41 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by wolfpup View Post
I disagree, but then I probably have a different notion of "emergence" than philosophers like Chalmers.
Quote:
Originally Posted by wolfpup View Post
Now, while I may not agree with David Chalmers on various issues, at least he has his definitions right on strong emergence
So, does it bother you at all that you have to flat out contradict yourself in the span of three posts to try and save your position?

Quote:
This is roughly what happens when logic gates are assembled into a digital computer. The business of being able to "infer" from component properties what the properties of the aggregate system will be is really rather nebulous and arbitrary, and consequently so is the distinction between weak and strong emergence, IMO.
Well, it's really not, though. I gave the definition (or at least, a very widely accepted definition) above---if you can discover it via simulation, it's (at best) weakly emergent. The reason for this definition is that in such a case, you only need to apply rote operations of logical deduction to get at the 'emergent' properties; so there's nothing new in that sense. The higher-level properties stand to the lower-level properties in a relation of strict logical implication, or, in other words, there's nothing qualitatively new whatsoever.

This isn't something I've made up, you know. But I don't think heaping on more cites would help any, seeing how you've already not bothered to address the one I provided.

Quote:
One might readily infer that since logic gates switch signals according to logical rules, it's reasonable to expect that the resulting system would be an ace at doing binary arithmetic. But is it reasonable to infer on that same basis that those same logic gates would be the foundation for a system capable of playing grandmaster-level chess, long believed to be the exclusive domain of a high caliber of human intelligence? Or one that would beat Ken Jenning at Jeopardy? If so, Hubert Dreyfus and a whole following of like-minded skeptics sure as hell didn't infer it!
Dreyfus might have been wrong on some things, but even most proponents of the possibility of strong artificial intelligence today acknowledge that his criticisms against 'good old fashioned AI' (GOFAI) were largely on point. Hence, the move towards subsymbolic and connectionist approaches to replace expert systems and the like.

But that's rather something of a tangent in this discussion. The basic point is that, of course it's reasonable to think of chess playing as being just the same kind of thing as binary arithmetic. After all, that's what a computer program for playing chess is: a reduction of chess playing to performing binary logical operations. Really complicated ones, but again, that's the definition of a difference merely in quantity, not quality.

Quote:
The conclusion here, taking into account all that it implies, is so wrong in my view that it doesn't seem sufficient to say that I disagree with it; I respectfully have to say that I'm just astounded that you're saying it.
In contrast to you, however, I'm not merely saying it, stating my ideas as if they were just obvious even in the face of widespread disagreement in the published literature, but rather, provide arguments supporting them. Which are then summarily ignored as my interlocutors just flat out state their positions as if they were just obviously true.

Quote:
But the first of those, if taken seriously, is a flippant dismissal of the entirety of the computational theory of cognition, one that Fodor has described as "far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion".
I'm sure some luminary once described caloric as the best theory of work and heat we've got. But that doesn't mean there's anything to that notion.

Quote:
And the basis for your bizarre conclusion appears to be the belief that any computation requires an external agent to fix an interpretation -- a belief that I maintain has already been disproved by the simple fact that all interpretations that are consistent with the computational results are all exactly equivalent.
I have addressed that issue, conclusively dispelling it: if you consider the computations I proposed to be equivalent, then computationalism just collapses to naive identity physicalism. Besides of course the sheer chutzpah of considering the manifestly different functions I've proposed, which are different on any formalization of computation ever proposed, and which are just quite obviously distinct sorts of operations, to be in any way, shape, or form, 'the same'. The function f' is not binary addition, but it, just as well as addition, is obviously an example of a computation. That I should have to point this out is profoundly disconcerting.

Quote:
The claim that the mind cannot be computational because of the "external agent" requirement is a futile attempt to parallel the homunculus argument as it's sometimes applied to the theory of vision. Clearly, however, vision is actually a thing, so somewhere along the line the attempt to prove a fallacy has gone off the rails.
The homunculus argument succeeds in pointing out a flaw with certain simple representational theories of vision, which have consequently largely been discarded. Pointing out the occurrence of vicious infinite regresses is a common tool in philosophy, and all I'm doing is pointing out that it trivializes the computational theory of mind.

Quote:
It's exactly the homunculus fallacy, and it's a fallacy because computational results are intrinsically objective -- that is to say, they are no more and no less intrinsically objective than the semantics attached to the symbols.
This is a bizarre statement. Quite clearly, the semantics of symbols is explicitly subjective. There is nothing about the word 'dog' that makes it in any sense objectively connect to four-legged furry animals. Likewise, there is nothing about a light that makes it intrinsically mean '1' or '0'.

Quote:
But here's the kicker: I would posit that exactly the same statement could be made in principle about the human brain.
Sure. That's not much of a kicker, though. After all, it's just a restatement of the notion that there's no strong emergence in the world.

Quote:
Unless you don't believe that the brain is deterministic -- but that would be an appeal to magic.
It doesn't really matter, but indeterminism doesn't really entail 'magic'. On many of its interpretations, quantum mechanics is intrinsically indeterministic; that doesn't make it any less of a perfectly sensible physical theory.

Quote:
The kinds of things I was talking about are better described as synergy, which is synonymous with weak emergence, if one wants to bother with the distinction at all. The impressive thing about Watson is not particularly the integration of its various components -- speech recognition, query decomposition, hypothesis generation, etc. -- as these are all purpose-built components of a well-defined architecture. The impressive thing is how far removed the system is from the underlying technology: the machine instructions, and below that, the logic gates inside the processors. The massively parallel platform on which Watson runs is very far removed from a calculator, yet in principle it's built from exactly the same kinds of components.
And that's not surprising in any way, because it's doing qualitatively exactly the same kind of thing---just more of it.

Look, I get that the successes of modern computers look impressive. But for anything a computer does, there's a precise story of how this behavior derives from the lower level properties. I might not be able to tell the story of how Watson does what it does, but I know exactly what this story looks like---it looks exactly the same as for a pocket calculator, or my device above. Describing the functional components of a computer enables us to see exactly what computers are able to do. Turing did just that with his eponymous machines; ever since, we have exactly known how any computer does what it does. There's no mystery there.

That's the sort of story you'd have to tell to make your claim regarding the emergence of consciousness have any sort of plausibility. But instead, you're doing the exact opposite: you try to use complexity to hide, not to elucidate, how consciousness works. You basically say, we can't tell the full story, so we can't tell any story at all, so who knows, anything might happen really, even consciousness. It's anyone's guess!

Quote:
The principle here is, as I said earlier, that a sufficiently great increase in the quantitative nature of a system's complexity leads to fundamental qualitative changes in the nature of the system.
You keep claiming this, but any example you give establishes the exact opposite: that there is no fundamental qualitative difference between the components and the full system. The components exactly logically entail the properties of the whole; so they are fundamentally the same kind of thing.

Nevermind that even if there were some sort of qualitative difference, this would still not make any headway at all against my argument (that, I'll just point out once again for the fun of it, you still haven't actually engaged with)---at best, the resulting argument would be something like: the simple example system doesn't possess any fixed computational interpretation; however, qualitatively novel phenomena emerge once we just smoosh more of that together. So maybe some of these qualitatively novel phenomena are just gonna solve that problem in some way we don't know.

That is, even if you were successful in arguing for qualitative novelty in large-scale computational systems, the resulting argument would at best be a fanciful hope.

Quote:
Originally Posted by begbert2 View Post
Pretty much. My argument is that the brain can interpret its own symbols, and an exact copy of that brain can interpret those same symbols with the same effects.
And that's already where things collapse. If the interpretation of these symbols is based on computation, then the symbols must already be interpreted beforehand, or otherwise, there just won't be any computation to interpret them.

Quote:
Why does the symbol '3' refer to the number three? It just does. It's the symbol we've collectively chosen.
Sure. But the question is, how does this choosing work? How does one physical vehicle come to be about, or refer to, something beyond itself? In philosophical terms, this is the question of intentionality---the mind's other problem.

Quote:
A given mind, a given cognition, knows what it means by a given symbol/encoding. Let's call that cognition M. M happens to function in such a way that red tennis shoes are encoded with a specific code which I'll refer to as RTS.
The problem is, rather, that M is tasked with interpreting the symbols that make the brain compute M. Consequently, the brain must be computing M before M can interpret the brain as computing M. Do you see that this is slightly problematic?

Quote:
Of course it's possible, theoretically speaking. To say otherwise is absurd, because theoretically speaking you can emulate a model of reality itself (or at least a local chunk of it) within the computer.
Well, that's just massively question-begging. In point of fact, nothing ever just computes anything; systems are interpreted as computing something. I mean, I've by now gotten used to people just ignoring the actual arguments I make in favor of posturing and making unsubstantiated claims, but go back to the example I provided. There, two different computations are attributed, on equivalent justification, to one and the same physical system. Thus, what computation is performed isn't an objective property of the system anymore than the symbol '3' denoting a certain number is a property of that symbol.

Quote:
Which means that, theoretically speaking, you absolutely can create an exact physical replica of your brain within the simulation within the computer. So you bet your bunions that your cognition is digitizable.
No. You can interpret certain systems as implementing a simulation of a brain. That doesn't mean the system actually is one. You can interpret an orrery as a model of the solar system. That doesn't mean it actually is a solar system.

All of this is just a massive case of confusing the map for the territory. What you're saying is exactly equivalent, for example, to saying that there are certain symbols such that they have only one objectively correct meaning.
  #138  
Old 05-20-2019, 06:16 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by Half Man Half Wit View Post
And that's already where things collapse. If the interpretation of these symbols is based on computation, then the symbols must already be interpreted beforehand, or otherwise, there just won't be any computation to interpret them.
Are you asserting that it's impossible for any computational system to assign symbols to new concepts as it encounters them? Because computer programs do that all the time. And when I say "all the time", I mean that literally - there are databases and hashsets creating new records with associated programmatically assigned keys constantly.
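
Here's a bare-bones sketch of the kind of thing I mean in Python (entirely made up, but it's what interning tables and database key generators do all day long): the program mints a fresh symbol the first time it meets a concept and reuses it from then on:

Code:

import itertools

# A program assigning its own symbols to new concepts as it encounters them,
# the way databases and hash tables mint new keys constantly.

class SymbolTable:
    def __init__(self):
        self._ids = {}
        self._next_id = itertools.count(1)

    def symbol_for(self, concept):
        # First encounter: assign a fresh key. Afterwards: reuse it.
        if concept not in self._ids:
            self._ids[concept] = next(self._next_id)
        return self._ids[concept]

table = SymbolTable()
print(table.symbol_for("red tennis shoes"))   # 1  (newly assigned)
print(table.symbol_for("chess"))              # 2  (newly assigned)
print(table.symbol_for("red tennis shoes"))   # 1  (same symbol reused)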

Quote:
Originally Posted by Half Man Half Wit View Post
Sure. But the question is, how does this choosing work? How does one physical vehicle come to be about, or refer to, something beyond itself? In philosophical terms, this is the question of intentionality---the mind's other problem.

The problem is, rather, that M is tasked with interpreting the symbols that make the brain compute M. Consequently, the brain must be computing M before M can interpret the brain as computing M. Do you see that this is slightly problematic?

Well, that's just massively question-begging. In point of fact, nothing ever just computes anything; systems are interpreted as computing something. I mean, I've by now gotten used to people just ignoring the actual arguments I make in favor of posturing and making unsubstantiated claims, but go back to the example I provided. There, two different computations are attributed, on equivalent justification, to one and the same physical system. Thus, what computation is performed isn't an objective property of the system anymore than the symbol '3' denoting a certain number is a property of that symbol.
Let me respond to all this in a very simple way - one of two things is happening here. You are either:

1) positing that minds can't possibly work and are entirely fictional, which I believe can be dismissed as absurd based on observations,

or

2) positing that we can't reverse engineer brains' operation through external observation, which is irrelevant and off topic because we don't have to know how they work to copy them if we copy them at the physical level to the smallest excruciating detail.

Because of your persistent use of undefined technical terms I'm not quite sure about which of these you are doing, but either way I don't care - you can't prove brains aren't copyable either way.

Quote:
Originally Posted by Half Man Half Wit View Post
No. You can interpret certain systems as implementing a simulation of a brain. That doesn't mean the system actually is one. You can interpret an orrery as a model of the solar system. That doesn't mean it actually is a solar system.

All of this is just a massive case of confusing the map for the territory. What you're saying is exactly equivalent, for example, to saying that there are certain symbols such that they have only one objectively correct meaning.
We're not talking about orreries and maps, and you know it - we're talking about functionally exact copies. At a functional level the digital copy would operate exactly the same way the original physical person did. So forget all the crappy analogies, please.

From the perspective of the copy, the duplication is exact, down to the smallest detail. Every neuron and chemical and electron is in place, acting exactly like their equivalents in the physical world. It's essentially the 'prosthetic neuron replacement' scenario from earlier in the thread - the prosthetic neurons (and everything else) are simulated entities, but they replicate the functionality of the things they replace perfectly.

Simulations seek to replicate the behavior and outcomes of what they simulate. The more accurate the simulation, the more accurate the outcomes are to the real thing. Here we theoretically posit a perfect simulation of the physical reality that the brain (and the body that houses it) exist in. Basically the Matrix, except the brain cells are inside the simulation too. And presuming a materialist real universe, there is no coherent and informed argument that the simulation couldn't be accurate to the finest detail - including the behavior of the simulated people, driven by their simulated brains and the minds contained within them.
  #139  
Old 05-20-2019, 08:42 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,742
HMHW, I'm curious which definition of computation (or computationalism if it's a different animal) you are assuming?

While reading up on these topics, it seems there are multiple definitions, this is one thing I was reading:
http://www.umsl.edu/~piccininig/Comp...hy_of_Mind.pdf
  #140  
Old 05-21-2019, 12:37 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by begbert2 View Post
Are you asserting that it's impossible for any computational system to assign symbols to new concepts as it encounters them?
No. And, frankly, your continued misconstrual of my argument is somewhat disconcerting to me. I've given an explicit example of what I'm arguing in post #93: there, I have shown how one and the same physical system can be considered, on equal grounds, to perform distinct computations (binary addition---f---and the function f').

Hence, what computation a physical system performs---whether it performs any computation at all---isn't an objective matter of fact about the physical system. You can't say the device I proposed performs binary addition; you can only say that you can interpret it as such.
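
If it helps to see the shape of the argument in code, here's a sketch in Python (with made-up encodings, not literally the device from post #93): the physical facts are one fixed table from switch settings to lamp settings; which function gets computed depends entirely on the encoding the interpreter brings to the lamps:

Code:

# One fixed physical device: switch settings -> lamp settings. Which function
# it computes depends entirely on how an interpreter reads the lamps.
# (Made-up encodings, just to show the shape of the point.)

device = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),
}

# Interpretation 1: read the lamps as a two-bit binary number -> addition.
as_sum = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

# Interpretation 2: read the very same lamps differently -> another function.
as_other = {(0, 0): 3, (0, 1): 0, (1, 0): 2, (1, 1): 1}

for switches, lamps in device.items():
    print(switches, "-> f:", as_sum[lamps], "  f':", as_other[lamps])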

But then, a dismissal of the computational theory of mind follows immediately. Computationalism holds that brains give rise to minds via performing the right sort of computation. But if it's the case that brains don't perform any computation at all, unless they are interpreted in the right way, then that whole thing collapses.

So whether or not a computational system assigns symbols to concepts, or what have you, is entirely beside the point. The point is that there's no such thing as a computational system absent an interpretation of a physical system as a computational system.

Quote:
We're not talking about orreries and maps, and you know it - we're talking about functionally exact copies. At a functional level the digital copy would operate exactly the same way the original physical person did. So forget all the crappy analogies, please.
The analogy is exact in the one respect that matters: neither an orrery nor a map is intrinsically about what we use it to model, but needs to be interpreted as such. The same is true with computation.

The rest of your post unfortunately doesn't really have any connection to anything I've written so far, so I won't reply to it for now, for fear of introducing yet more confusion in the attempt of explaining myself. Really, I implore you, if any of this is still unclear, go back to my example. If you don't understand something, ask. But don't just go on attacking points I never made.

Quote:
Originally Posted by RaftPeople View Post
HMHW, I'm curious which definition of computation (or computationalism if it's a different animal) you are assuming?
Well, as I put it earlier:

Quote:
Originally Posted by Half Man Half Wit View Post
Computation is nothing but using a physical system to implement a computable (partial recursive) function. That is, I have an input x, and want to know the value of some f(x) for a computable f, and use manipulations on a physical system (entering x, pushing 'start', say) to obtain knowledge about f(x).

This is equivalent (assuming a weak form of Church-Turing) to a definition using Turing machines, or lambda calculus, or algorithms. What's more, we can limit us to computation over finite binary strings, since that's all a modern computer does. In this case, it's straightforward to show that the same physical system can be used to implement different computations (see below).
Computationalism then is the idea that the way the brain gives rise to a mind is by implementing the right sort of computation.

Last edited by Half Man Half Wit; 05-21-2019 at 12:38 AM.
  #141  
Old 05-21-2019, 09:38 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,885
This is crazy.
Quote:
I have shown how one and the same physical system can be considered, on equal grounds, to perform distinct computations (binary addition---f---and the function f').
Going back to my laptop- it is only performing one ontologically significant computation in order to display the symbols on its screen; we'll call that f if you like. We know that is the one it is performing, because it is the one it is supposed to do. This is the teleological function of my computer, the one it has been designed to do. Now you state that it is also performing f', and that also seems to be true - but that computation does not affect the display on the screen at all - the two events are not causally linked.

Maybe, somewhere out there in an infinite universe, there is an exact replica of my laptop in which (purely by chance) it is f' that causally affects the laptop screen in order to display exactly the same symbols - but this freak laptop, if it exists, is so far away that it is way beyond my personal light cone, probably more than a googolplex metres away. What possible relevance does the computation f' have to anything in the real world?

Last edited by eburacum45; 05-21-2019 at 09:38 AM.
  #142  
Old 05-21-2019, 01:09 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by Half Man Half Wit View Post
No. And, frankly, your continued misconstrual of my argument is somewhat disconcerting to me. I've given an explicit example of what I'm arguing in post #93: there, I have shown how one and the same physical system can be considered, on equal grounds, to perform distinct computations (binary addition---f---and the function f').

Hence, what computation a physical system performs---whether it performs any computation at all---isn't an objective matter of fact about the physical system. You can't say the device I proposed performs binary addition; you can only say that you can interpret it as such.

But then, a dismissal of the computational theory of mind follows immediately. Computationalism holds that brains give rise to minds via performing the right sort of computation. But if it's the case that brains don't perform any computation at all, unless they are interpreted in the right way, then that whole thing collapses.

So whether or not a computational system assigns symbols to concepts, or what have you, is entirely beside the point. The point is that there's no such thing as a computational system absent an interpretation of a physical system as a computational system.


The analogy is exact in the one respect that matters: neither an orrery nor a map is intrinsically about what we use it to model, but needs to be interpreted as such. The same is true with computation.

The rest of your post unfortunately doesn't really have any connection to anything I've written so far, so I won't reply to it for now, for fear of introducing yet more confusion in the attempt of explaining myself. Really, I implore you, if any of this is still unclear, go back to my example. If you don't understand something, ask. But don't just go on attacking points I never made.
Minds are self-interpreting. That's kind of the whole point - self-awareness, and all. A mind interprets its own memories, its own data, its own internal states. The 'function', the complex pattern of dominoes that are constantly bumping up against one another, is arranged in such a way that it examines its own data and interprets it itself.

This means that the fact that you can interpret its data and outputs sixteen thousand different ways is utterly irrelevant. It doesn't matter at all. It's completely inconsequential. It has no bearing on the discussion whatsoever.

Why? Because it doesn't matter how you interpret the data; it matters how the data interprets the data. And the way the data interprets the data is determined by the arrangement of the data - and at any given moment there's only one arrangement of the data. Which means there's only one interpretation the mind is going to use, and that's the only one that matters.

Now you'll note that in the paragraph above I'm brazenly lumping both the stored data and the 'running program state' under the umbrella term 'data'. This is because as far as the copying process is concerned, it *is* all data, and can be copied and exactly reproduced in a simulation. And when this happens the simulated mind will have the exact same interpretations of its own data as the original did - it will perceive itself the same way the original does, and react the same way the original does. It copies all the traits and behaviors and processes and interpretations of the original because it's an exact copy.

Does the copy (or the original) do "computation"? The fuck if I know; I don't know what you mean by the term. What I do know, though, is that if one does it so does the other, and vice versa. The two function identically. Including using the same identical operating processes and self-interpretation.
  #143  
Old 05-21-2019, 02:23 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,150
I've been away from the board for the past day due to events of actual life, but let me respond briefly to that last volley.
Quote:
Originally Posted by Half Man Half Wit View Post
So, does it bother you at all that you have to flat out contradict yourself in the span of three posts to try and save your position?
It might, were it not for the fact that taking those quotes out of context to rob them of their intended meaning merely reduces the discussion to a game of "I Win teh Internets!".

I liked the Chalmers definition for directly contradicting your claim that an emergent property must have visible elements in the underlying components, a claim that I regarded as nonsense. Reading further in the Chalmers paper, however, I don't agree with him on ALL his characterizations of emergent properties, particularly that what he calls "strong emergence" must be at odds with physicality. So my two statements in context are in no way contradictory, but you get three points and a cookie for highlighting them that way.

Quote:
Originally Posted by Half Man Half Wit View Post
This isn't something I've made up, you know. But I don't think heaping on more cites would help any, seeing how you've already not bothered to address the one I provided.
If you're referring to your Mark Bedau quote, that wasn't a cite, it was a cryptic one-liner lacking either context or link.

Quote:
Originally Posted by Half Man Half Wit View Post
In contrast to you, however, I'm not merely saying it, stating my ideas as if they were just obvious even in the face of widespread disagreement in the published literature, but rather, provide arguments supporting them.
Nor am I "merely saying it". I and others have provided arguments, you just don't like them. And speaking of published literature, if you read the cognitive science literature you'll find that CTM is a rather more substantive contribution to the science than a "theory of caloric". See next comment.

Quote:
Originally Posted by Half Man Half Wit View Post
I'm sure some luminary once described caloric as the best theory of work and heat we've got. But that doesn't mean there's anything to that notion.
If, in order to support your silly homunculus argument about computation, you have to characterize one of the most respected figures in modern cognitive science as a misguided crackpot for advancing CTM theory and having the audacity to disagree with you, I hope you realize what that does to your argument. This about sums it up:
The past thirty years have witnessed the rapid emergence and swift ascendency of a truly novel paradigm for understanding the mind. The paradigm is that of machine computation, and its influence upon the study of mind has already been both deep and far-reaching. A significant number of philosophers, psychologists, linguists, neuroscientists, and other professionals engaged in the study of cognition now proceed upon the assumption that cognitive processes are in some sense computational processes; and those philosophers, psychologists, and other researchers who do not proceed upon this assumption nonetheless acknowledge that computational theories are now in the mainstream of their disciplines.
https://publishing.cdlib.org/ucpress...&brand=ucpress
Quote:
Originally Posted by Half Man Half Wit View Post
I have addressed that issue, conclusively dispelling it: if you consider the computations I proposed to be equivalent, then computationalism just collapses to naive identity physicalism. Besides of course the sheer chutzpah of considering the manifestly different functions I've proposed, which are different on any formalization of computation ever proposed, and which are just quite obviously distinct sorts of operations, to be in any way, shape, or form, 'the same'. The function f' is not binary addition, but it, just as well as addition, is obviously an example of a computation. That I should have to point this out is profoundly disconcerting.
What I find disconcerting is that in order to support this argument, you have to discredit arguably one of the most important foundations of cognitive science, with which it is directly at odds.
Quote:
Originally Posted by Half Man Half Wit View Post
This is a bizarre statement. Quite clearly, the semantics of symbols is explicitly subjective. There is nothing about the word 'dog' that makes it in any sense objectively connect to four-legged furry animals. Likewise, there is nothing about a light that makes it intrinsically mean '1' or '0'.
Sure, but that's just the point. The external agent in your switches and lights example is just assigning semantics to the symbols. The underlying computation is invariant regardless of his interpretation, as I keep saying. All semantic assignments that work describe the same computation, and if you dislike me constantly repeating that, another way of saying it is the self-evident fact that no computation is occurring outside the box, and the presence of your interpretive agent doesn't change the box.
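
To make that concrete, here's a rough sketch in Python (hypothetical wiring and made-up bit assignments, not your exact box): the physical facts are one fixed table from switch settings to lamp patterns, and f versus f' differ only in the dictionary used to read the lamps.

Code:
def box(s1, s2, s3, s4):
    # The wiring: this particular table happens to be the one you'd build
    # for a 2-bit adder, but the box itself knows nothing about numbers.
    a = (s1 << 1) | s2
    b = (s3 << 1) | s4
    t = a + b
    return ((t >> 2) & 1, (t >> 1) & 1, t & 1)

def read_as_f(lamps):
    # Interpretation 1: a lit lamp means "1" -> the box "adds".
    return (lamps[0] << 2) | (lamps[1] << 1) | lamps[2]

def read_as_f_prime(lamps):
    # Interpretation 2: a lit lamp means "0" -> a different function, f'.
    return ((1 - lamps[0]) << 2) | ((1 - lamps[1]) << 1) | (1 - lamps[2])

lamps = box(0, 1, 1, 0)
# Same switches, same lamps, two readings: only the dictionary differs.
print(read_as_f(lamps), read_as_f_prime(lamps))   # prints 3 and 4

The table inside box() never changes; only the reading of the lamps does, which is exactly why I say all workable semantic assignments describe the same computation.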

Quote:
Originally Posted by Half Man Half Wit View Post
Look, I get that the successes of modern computers look impressive. But for anything a computer does, there's a precise story of how this behavior derives from the lower-level properties. I might not be able to tell the story of how Watson does what it does, but I know exactly what this story looks like---it looks exactly the same as for a pocket calculator, or my device above. Describing the functional components of a computer enables us to see exactly what computers are able to do. Turing did just that with his eponymous machines; ever since, we have known exactly how any computer does what it does. There's no mystery there.
This is a complete misrepresentation of the significance of Turing's insight. Its brilliance was due to the fact that it reduced the notion of "computation" (in the computer science sense) to the simplest and most general abstraction, stripping away all the irrelevancies, and allowed us to distinguish computational processes from non-computational ones. It "enables us to see exactly what computers are able to do" only in the very narrow sense of being state-driven automata that are capable of executing stored programs. It tells us absolutely nothing about the scope and the intrinsic limits of what those automata might be able to achieve in terms of, for example, problem-solving skills demonstrating intelligence at or beyond human levels, self-awareness, or creativity. As I said, philosophers like Dreyfus concluded in the 60s that computers would never be able to play better than a child's level of chess, and AFAIK Searle is still going on about his Chinese Room argument proving that computational intelligence isn't "real" intelligence. Turing machines have been known since 1936, but the debate about computational intelligence rages on in the cognitive and computer sciences.
Quote:
Originally Posted by Half Man Half Wit View Post
You keep claiming this, but any example you give establishes the exact opposite: that there is no fundamental qualitative difference between the components and the full system. The components exactly logically entail the properties of the whole; so they are fundamentally the same kind of thing.
My statement is a philosophical one, related to what I said just above, observing that a sufficient change in complexity enables new qualities (capabilities) not present in the simpler system. You're focused on the Turing equivalence between simple computers and very powerful ones, while I'm making a pragmatic observation about their qualitative capabilities. As I just said above, Turing equivalence tells us absolutely nothing about a computer's advanced intelligence-based skills. As computers and software technology grow increasingly more powerful, we are faced again and again with the situation that Dreyfus faced for the first time in the 1967 chess game with MacHack, that of essentially saying "I'm surprised that a computer could do this". Sometimes even AI researchers are surprised (the venerable Marvin Minsky actually advised Richard Greenblatt, the author of MacHack, not to pursue the project because it was unlikely to be successful). Do you not see why this is relevant in any discussion about computational intelligence, or the possible evolution of machine consciousness?
  #144  
Old 05-22-2019, 12:52 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by eburacum45 View Post
Going back to my laptop: it is only performing one ontologically significant computation in order to display the symbols on its screen; we'll call that f if you like. We know that is the one it is performing, because it is the one it is supposed to perform.
So which one is the 'ontologically significant' one in my example above? What makes a computation ontologically significant?

Quote:
This is the teleological function of my computer, the one it has been designed to do. Now you state that it is also performing f', and that also seems to be true, but that computation does not affect the display on the screen at all; the two events are not causally linked.
The computation f' is linked to the device in exactly the same way as f is. I'm not sure how you mean 'causally linked', because what causally determines whether certain lights light up is the way the switches are flipped, but there's no relevant difference between the two.

Quote:
What possible relevance does the computation f' have to anything in the real world?
The relevance is that I can use the device to compute f', in exactly the same way as you can use it to compute f. Even at the same time, actually.

Quote:
Originally Posted by begbert2 View Post
Minds are self-interpreting. That's kind of the whole point - self-awareness, and all. A mind interprets its own memories, its own data, its own internal states. The 'function', the complex pattern of dominoes that are constantly bumping up against one another, is arranged in such a way that it examines its own data and interprets it itself.
No matter that nobody knows how such self-interpretation could possibly work, this just doesn't address the issue at all (again). On computationalism, whether a system instantiates a mind depends on what computation it implements. If there's no computation, there's no mind to self-interpret, or conjure up pixies, or what have you. So what computation a system performs needs to be definite before there even is a mind at all. But my argument, if it's right, shows that there simply isn't any fact of the matter regarding what computation a system performs.

Quote:
Originally Posted by wolfpup View Post
It might, were it not for the fact that taking those quotes out of context to rob them of their intended meaning merely reduces the discussion to a game of "I Win teh Internets!".

I liked the Chalmers definition for directly contradicting your claim that an emergent property must have visible elements in the underlying components, a claim that I regarded as nonsense. Reading further in the Chalmers paper, however, I don't agree with him on ALL his characterizations of emergent properties, particularly that what he calls "strong emergence" must be at odds with physicality. So my two statements in context are in no way contradictory, but you get three points and a cookie for highlighting them that way.
The contradiction (as Chalmers highlights) is that the sort of emergence you require (the sort that doesn't follow from the fundamental-level properties) is in contradiction with both computationalism and physicalism, so you simply can't appeal to both that kind of emergence and computationalism in your explanation of the mind without being inconsistent. You want the emergent properties to not follow from the fundamental ones? Then you can't hold on to computationalism. It's that simple.

Quote:
If you're referring to your Mark Bedau quote, that wasn't a cite, it was a cryptic one-liner lacking either context or link.
Sorry, I thought I had given the link earlier (the Chalmers paper, however, cites it---approvingly, I might add).

Quote:
If, in order to support your silly homunculus argument about computation, you have to characterize one of the most respected figures in modern cognitive science as a misguided crackpot for advancing CTM theory and having the audacity to disagree with you, I hope you realize what that does to your argument.
Quote:
What I find disconcerting is that in order to support this argument, you have to discredit arguably one of the most important foundations of cognitive science, with which it is directly at odds.
I like how you get all in a huff about my disagreement with Fodor (whom I never characterized as a crackpot or misguided; at the time, caloric was a perfectly respectable paradigm for the explanation of the movement of heat, simply reflecting incomplete scientific knowledge, just like the computational theory is now), but you yourself think nothing of essentially painting Dreyfus as a reactionary idiot. So I guess the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them?

Quote:
Sure, but that's just the point. The external agent in your switches and lights example is just assigning semantics to the symbols. The underlying computation is invariant regardless of his interpretation, as I keep saying.
Exactly. You keep merely saying that, without any sort of argument whatsoever. So then, at least tell me, which computation does my device implement? Is it f or f'? Is it neither? If so, then how come I can use it to compute the sum of two numbers? If both describe the same computation, then what is it that's being computed? What is that computation? How, in particular, does it differ from merely the evolution of the box as a physical system? You might recall, though one could think you've so far just somehow missed it despite my repeated attempts to point it out, that in that case, computationalism just collapses to identity physicalism.

Quote:
All semantic assignments that work describe the same computation, and if you dislike me constantly repeating that, another way of saying it is the self-evident fact that no computation is occurring outside the box, and the presence of your interpretive agent doesn't change the box.
Exactly, again. Which means that there's no fact of the matter regarding what the box computes.

Quote:
This is a complete misrepresentation of the significance of Turing's insight.
Turing's insight was precisely that one could reduce the computation of anything to a few simple rules on symbol-manipulation. Anything a computer can do, and ever will do, reduces to such rules; if you know these rules, you can, by rote repeated application, duplicate anything that the computer does. There is thus a clear and simple story from a computer's lower-level operations to its gross behavior. That removes any claim of qualitative novelty for computers.
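
If it helps, here is the sort of thing I mean, as a toy Python sketch (the rule table is hypothetical, just a unary "add one" machine): everything the machine ever does is blind lookup and application of a rule.

Code:
def run(rules, tape_string, state="start", head=0, blank="_"):
    # The tape as a sparse dictionary of cell -> symbol.
    tape = dict(enumerate(tape_string))
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]   # pure table lookup
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Hypothetical rules: march right over a block of 1s and append one more.
RULES = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(RULES, "111"))   # -> 1111

A modern computer's story is the same kind of thing, just vastly longer; nothing qualitatively new enters anywhere along the way.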

Again, I'm not the only one thinking that. 'You could do it by computer' is often used as the definition for weak emergence that doesn't introduce anything novel whatsoever, because it's just so blindingly obvious how the large-scale phenomena follow from the lower-level ones in the computer's case.

That doesn't mean computers can't surprise you. Even though the story of how they do what they do is conceptually simple, it can be a bit lengthy, not to mention boring, to actually follow. But surprise is no criterion for qualitative novelty. I have been surprised by my milk boiling over, but that doesn't mean that a qualitatively new feature of the milk emerged.

Quote:
As I just said above, Turing equivalence tells us absolutely nothing about a computer's advanced intelligence-based skills.
It tells us exactly what we need to know: how these skills derive from the lower level properties of the computer. This is what motivated the Church-Turing thesis: Turing showed how simple rote application of rules leads to computing a large class of functions; thus, one may reasonably conjecture that they suffice to compute anything that can be computed at all. Hence, there is a strong heft to the claim that computation emerges from these lower-level symbol manipulations.

You, on the other hand, have provided no such basis for your claim that consciousness emerges in the same way. Indeed, you claim that no basis such as that can be given, because emergence basically magically introduces genuine novelty. That you give computers as an example of that, where it's exactly the case that the emergent properties have 'visible elements in the underlying components', is at the very least ironic.
  #145  
Old 05-22-2019, 12:20 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by Half Man Half Wit View Post
No matter that nobody knows how such self-interpretation could possibly work, this just doesn't address the issue at all (again). On computationalism, whether a system instantiates a mind depends on what computation it implements. If there's no computation, there's no mind to self-interpret, or conjure up pixies, or what have you. So what computation a system performs needs to be definite before there even is a mind at all. But my argument, if it's right, shows that there simply isn't any fact of the matter regarding what computation a system performs.
Are you, or are you not, making the following argument:

1) You have something going on in your head. Nobody knows how it works.

2) "Computation" (whatever that means), is necessary for you to have a mind. If what's going on in there isn't "computation", then it doesn't instantiate a mind and you don't have a mind.

3) Not only does there have to be "computation", but it has to be "definite". Having a materialist causal process that definitely has only one eigenstate is not sufficient to qualify as "definite"; apparently it must also be possible to unambiguously reverse-engineer the internal mechanisms from the outputs alone.

4) Your argument is that the process going on inside your head is in fact not "definite", and thus it's not a qualifying sort of "computation", and thus you haven't got a mind. QED and such.


Is that a fair restatement of your position?


As a side note, I agree that the mental calculation isn't "definite", and I think it could be proven that no calculations whatsoever are "definite". For every black box you might examine, the function could be either "f" or "f but it also is quietly recording its output to an internal log that is never outputted or referred to." You cannot ever prove that this is not happening inside the black box, so no calculation, process, or anything else is "definite".
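
To make that concrete, here's a trivial Python sketch with made-up functions: two boxes that answer every probe identically, one of which keeps a private log that nothing ever reads.

Code:
def f(x, y):
    return x + y

_secret_log = []

def f_with_log(x, y):
    result = x + y
    _secret_log.append(result)   # recorded internally, never output or used
    return result

# Every probe you can make from outside gives the same answer for both:
assert all(f(a, b) == f_with_log(a, b) for a in range(16) for b in range(16))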
  #146  
Old 05-22-2019, 12:37 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,877
Quote:
Originally Posted by begbert2 View Post
Are you, or are you not, making the following argument:
I'm not. I'm making the argument I've repeated here more often than I care to count, and won't repeat again. If you don't follow it at some point, I'm happy to help.
  #147  
Old 05-22-2019, 12:44 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by Half Man Half Wit View Post
I'm not. I'm making the argument I've repeated here more often than I care to count, and won't repeat again. If you don't follow it at some point, I'm happy to help.
I was sentence-by-sentence restating the post to which I was replying. Which sentence did I restate incorrectly?


The problem with your argument, in case you weren't noticing my subtle reductio ad absurdum, is that to whatever degree it applies to theoretical machine intelligences, it applies equally to human brains. I specifically mentioned your human brain in case you're a solipsist, but the hard truth is that you're in effect arguing that no minds are possible anywhere, ever.

You are seriously throwing out the baby with the bathwater here.
  #148  
Old 05-22-2019, 12:46 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,150
Quote:
Originally Posted by Half Man Half Wit View Post
The contradiction (as Chalmers highlights) is that the sort of emergence you require (the sort that doesn't follow from the fundamental-level properties) is in contradiction with both computationalism and physicalism, so you simply can't appeal to both that kind of emergence and computationalism in your explanation of the mind without being inconsistent. You want the emergent properties to not follow from the fundamental ones? Then you can't hold on to computationalism. It's that simple.
Only if you believe Chalmers. I proposed above that novel emergent properties can develop in the interconnections and/or states of lower-level components that were not found in any form in the components themselves.

Quote:
Originally Posted by Half Man Half Wit View Post
I like how you get all in a huff about my disagreement with Fodor (whom I never characterized as a crackpot or misguided; at the time, caloric was a perfectly respectable paradigm for the explanation of the movement of heat, simply reflecting incomplete scientific knowledge, just like the computational theory is now), but you yourself think nothing of essentially painting Dreyfus as a reactionary idiot. So I guess the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them?
I note that you're avoiding my cited quote about the important role of computational theory in cognitive science. Also, Dreyfus developed a very bad reputation in the AI and cognitive science communities early on that he was never able to shake, despite apparently some level of vindication of a few of his ideas in later years. I can tell you first-hand that contempt for Dreyfus is still prevalent in those communities.

Quote:
Originally Posted by Half Man Half Wit View Post
Exactly. You keep merely saying that, without any sort of argument whatsoever. So then, at least tell me, which computation does my device implement? Is it f or f'? Is it neither? If so, then how come I can use it to compute the sum of two numbers? If both describe the same computation, then what is it that's being computed? What is that computation?
What is that computation, you ask? Let's ask a hypothetical intelligent alien who happens to know nothing about number systems. The alien's correct answer would be: it's performing the computation that produces the described pattern of lights in response to switch inputs. How do we know that this is a "computation" at all and not just random gibberish? Because it exhibits what Turing called the determinacy condition: for any switch input, there is deterministically a corresponding output pattern. Whether we choose to call it a binary adder or the alien calls it a wamblefetzer is a matter of nomenclature and, obviously, a distinction of utility.

Note that in defining the Turing machine, Turing himself was untroubled by any notion of an external interpreter. Indeed he explicitly made the distinction between this type of machine exhibiting the determinacy condition, which he called an automatic machine or "a-machine", and the choice machine in which an external agent specified the next state. But your box is an a-machine, whose computations involve no such external agent.
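
To illustrate the distinction, here's a toy Python sketch (hypothetical rule tables, not Turing's own notation): an a-machine's next move is fixed entirely by its table, while a choice machine has configurations where an external operator must decide.

Code:
def a_machine_step(table, state, symbol):
    # Table maps each (state, symbol) to exactly one (write, move, next_state);
    # no outside agent is ever consulted.
    return table[(state, symbol)]

def choice_machine_step(table, state, symbol):
    # Table maps each (state, symbol) to a list of possible moves.
    options = table[(state, symbol)]
    if len(options) == 1:
        return options[0]
    # Ambiguous configuration: an external operator must pick the next move.
    choice = int(input(f"operator, pick 0..{len(options) - 1}: "))
    return options[choice]

Your box only ever takes the first kind of step; the observer reading the lamps plays no part in which step it takes.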

Quote:
Originally Posted by Half Man Half Wit View Post
Again, I'm not the only one thinking that. 'You could do it by computer' is often used as the definition for weak emergence that doesn't introduce anything novel whatsoever, because it's just so blindingly obvious how the large-scale phenomena follow from the lower-level ones in the computer's case.

That doesn't mean computers can't surprise you. Even though the story of how they do what they do is conceptually simple, it can be a bit lengthy, not to mention boring, to actually follow. But surprise is no criterion for qualitative novelty. I have been surprised by my milk boiling over, but that doesn't mean that a qualitatively new feature of the milk emerged.
"You could do it by computer" as a synonym for "trivial" sounds like something Dreyfus would have said!

Of course "surprise" by itself isn't a criterion for much of anything, but surprise in the sense that properties that we explicitly denied could emerge from certain processes, like high intelligence or strong problem-solving skills, if and when they actually emerge does mean that we have to re-evaluate our beliefs and assumptions. It also means that those properties were not observed in the underlying processes, or at least were in no way obvious.
Quote:
Originally Posted by Half Man Half Wit View Post
You, on the other hand, have provided no such basis for your claim that consciousness emerges in the same way. Indeed, you claim that no basis such as that can be given, because emergence basically magically introduces genuine novelty. That you give computers as an example of that, where it's exactly the case that the emergent properties have 'visible elements in the underlying components', is at the very least ironic.
None of us are in a position to conclusively explain consciousness. But it certainly seems plausible to me that it's nothing more than our perception of the world turned inward on itself, so that any being sentient enough to have thoughtful perceptions about the world will possess a corresponding level of self-awareness. I suspect that at some point in the future when we have general-purpose AI whose awareness of the world includes awareness of self, we'll get into furious semantic battles over whether machine consciousness is "real" consciousness. It will certainly be different from ours because it won't have the influences of biological senses or instincts. Ultimately Marvin Minsky may be proved right in considering the whole question a relative non-issue in the context of machine intelligence.
  #149  
Old 05-22-2019, 01:11 PM
eschereal's Avatar
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 16,580
Quote:
Originally Posted by wolfpup View Post
Wrapped up in that statement -- some of which I extrapolate from your earlier claims -- appear to be the beliefs that (a) nothing (or at least nothing of cognitive significance) in the brain is computational, (b) a computer can never simulate a brain, and (c) a computer can never exhibit self-awareness (consciousness). All of which are wrong, in my view, though they are increasingly arguable.
Here is the logical problem, though. Since we do not have a definitive understanding of what self-awareness is, we can only interpret the evidence. That a machine could exhibit the outward signs of self-awareness is only superficial evidence.

I have seen earthworms exhibit strong indications of self-awareness, which leads me to conclude that this thing I have that is the I inside is probably closely related to the fundamental survival instinct of living things. It may be structurally different among the variety of living things, but its evolutionary contribution should be more than obvious.

An elaborate computer may be able to produce all of the apparent indications, but until we have a firm grasp on what that nebulous concept means, we cannot be absolutely certain that it genuinely possesses a property that we know to be self-awareness.

In fact, based on what I have observed with respect to other creatures, it is not at all obvious that self-awareness is an emergent property of intelligence. More complex programming or more elaborate system design may make it seem convincing that a device is self-aware, but it may just be an astoundingly good simulation.

Hell, my HP33C might have had some rudimentary form of self-awareness that was not very similar to mine or to the earthworm's but nonetheless present. Perhaps we ought to be more circumspect when throwing away that old cell phone because it could have had a soul, of sorts.
  #150  
Old 05-22-2019, 01:15 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,456
Quote:
Originally Posted by eschereal View Post
Hell, my HP33C might have had some rudimentary form of self-awareness that was not very similar to mine or to the earthworm's but nonetheless present. Perhaps we ought to be more circumspect when throwing away that old cell phone because it could have had a soul, of sorts.
Every computer with an operating system has at least some form of "self-awareness" in the most basic, literal sense of the term. The whole point of an operating system is to keep track of and manage what the computer itself is doing.
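
In that literal sense, even something like this counts (a trivial Python sketch using only the standard library; the /proc path assumes Linux): a process asking the operating system about itself.

Code:
import os

print("my process id:", os.getpid())
print("my working directory:", os.getcwd())

# On Linux the kernel even publishes each process's own bookkeeping as files:
with open(f"/proc/{os.getpid()}/status") as f:
    print(f.readline().strip())    # first line is something like "Name: python3"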

The reason we throw away phones isn't because they're not self-aware - it's because we don't care whether or not they are, since they're not human. It's the same reason we're okay with eating beef.

Last edited by begbert2; 05-22-2019 at 01:16 PM. Reason: I make so many typos I probably have no mind