#133, 05-20-2019, 02:25 PM
wolfpup
Some parting thoughts (maybe).

Quote:
Originally Posted by Half Man Half Wit
Well, you can't have your cake and eat it. Either, the physical facts suffice to determine all the facts about a system: then, there's no strong emergence. Or, they don't: then, physicalism is wrong.
I disagree, but then I probably have a different notion of "emergence" than philosophers like Chalmers. A system can certainly have properties that are not present in any of its components yet are still embodied in its physicality. One simply posits that such properties arise from the arrangement of those components, meaning the interconnections between them, and indeed that's the only place that real emergent properties *can* reside. This arrangement may be by design, or it may be a product of the system's own self-configuration.

This is roughly what happens when logic gates are assembled into a digital computer. The business of "inferring" from component properties what the properties of the aggregate system will be is really rather nebulous and arbitrary, and consequently so is the distinction between weak and strong emergence, IMO. One might readily infer that since logic gates switch signals according to logical rules, the resulting system should be an ace at binary arithmetic. But is it reasonable to infer on that same basis that those same logic gates would be the foundation for a system capable of playing grandmaster-level chess, long believed to be the exclusive domain of a high caliber of human intelligence? Or one that would beat Ken Jennings at Jeopardy? If so, Hubert Dreyfus and a whole following of like-minded skeptics sure as hell didn't infer it!
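
To make the point concrete, here's a toy sketch (Python standing in for hardware, purely illustrative): no single gate "knows" arithmetic, yet an arrangement of nothing but NAND gates adds numbers.

Code:
# Build every gate from NAND, then wire gates into an adder.
def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    # Adds two bits plus a carry; the "arithmetic" lives entirely in the wiring.
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add4(x, y):
    # Chain four full adders into a ripple-carry adder for 4-bit numbers.
    carry, result = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(add4(0b0110, 0b0111))  # prints (13, 0), i.e. 6 + 7 = 13

The adder's behaviour is, of course, deducible from the gates, but the property "adds numbers" belongs to the arrangement, not to any individual component.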


Quote:
Originally Posted by Half Man Half Wit
All computers ever do is to deduce higher-level facts (their behavior) from lower-level facts (their programming). You could print out Watson's machine code, and everything it does follows from those instructions; and, while no human being is likely smart enough to perform the derivation, a sufficiently advanced intellect (think Laplace's demon) would have no trouble at all to predict how Watson reacts in every situation. The very fact that Watson is a computer ensures it to be so, as it entails that there's another computer capable of simulating Watson.

So computationalism can never include strong emergence. That would mean to both believe that a computer could simulate a brain, leading to conscious experience, and that a computer simulation of a brain would lack certain aspects of a real mind (the strongly emergent ones).
The conclusion here, taking into account all that it implies, is so wrong in my view that it doesn't seem sufficient to say that I disagree with it; I respectfully have to say that I'm just astounded that you're saying it. Wrapped up in that statement (some of which I extrapolate from your earlier claims) appear to be the beliefs that (a) nothing (or at least nothing of cognitive significance) in the brain is computational, (b) a computer can never simulate a brain, and (c) a computer can never exhibit self-awareness (consciousness). All of which are wrong, in my view, though each is more arguable than the one before it. But the first of those, if taken seriously, is a flippant dismissal of the entirety of the computational theory of cognition, which Fodor described as "far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion".

And the basis for your bizarre conclusion appears to be the belief that any computation requires an external agent to fix an interpretation, a belief that I maintain has already been disproved by the simple fact that all interpretations consistent with the computational results are exactly equivalent. The claim that the mind cannot be computational because of this "external agent" requirement is a futile attempt to parallel the homunculus argument as it's sometimes applied to the theory of vision. Clearly, however, vision exists, so that argument must go off the rails somewhere; it proves too much. Likewise with your claim about the computational aspects of cognition. It's exactly the homunculus fallacy, and it's a fallacy because computational results are intrinsically objective; that is, they are exactly as objective as the semantics attached to the symbols, no more and no less.
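
To make concrete what I mean by "exactly equivalent", here's a minimal sketch (Python, purely illustrative; the "device" is a made-up toy, not anyone's actual hardware). The same switching behaviour is read under two different symbol mappings, and the two readings differ only by a fixed relabelling applied to inputs and outputs:

Code:
LOW, HIGH = "low", "high"

def device(a, b):
    # A hypothetical two-input switch: output is HIGH only when both inputs are HIGH.
    return HIGH if (a == HIGH and b == HIGH) else LOW

to_bits_A = {LOW: 0, HIGH: 1}   # interpretation A reads the device as AND
to_bits_B = {LOW: 1, HIGH: 0}   # interpretation B reads the device as OR

def interpret(mapping):
    # Turn a symbol mapping into a function on bits.
    from_bits = {bit: level for level, bit in mapping.items()}
    return lambda x, y: mapping[device(from_bits[x], from_bits[y])]

f_A, f_B = interpret(to_bits_A), interpret(to_bits_B)

# The two readings are related by the fixed relabelling n -> 1 - n applied to
# inputs and outputs; neither contains information the other lacks.
flip = lambda n: 1 - n
for x in (0, 1):
    for y in (0, 1):
        assert f_B(x, y) == flip(f_A(flip(x), flip(y)))
print("f_A is AND, f_B is OR; each is the other seen through a relabelling")

Swapping the labels changes which function name you attach to the device, but it changes nothing about the device, and nothing that the other reading can't recover by applying the same relabelling.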

Your first paragraph here is also frustrating to read. It is, at best, just one step removed from the old saw that "computers can only do what they're programmed to do", which is often used to argue that computers can never be "really" intelligent like we are. That's right, in a way: the reality is that computers can be a lot more intelligent than we are! It's true that, in theory, a sufficiently advanced intellect or another computer, given all of Watson's code and data structures (its state information), could predict exactly what Watson would do in any situation; but that's irrelevant as a counterargument to emergence because it's trivially true: all it says is that Watson is deterministic, and we already knew that.

But here's the kicker: I would posit that exactly the same statement could be made, in principle, about the human brain. In any given situation, at any given instant, one could in theory predict exactly how someone will respond to a given stimulus; there's merely the practical difficulty of extracting and interpreting all the pertinent state information. Unless you don't believe that the brain is deterministic, but that would be an appeal to magic. This sets aside random factors affecting synaptic potentials, changes therein due to changes in biochemistry, and all the other baggage of meat-based logic; but those are just ways in which our brains are randomly imperfect. That a defective computer may be unpredictable is neither a benefit nor an argument against computational determinism.
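
Purely as an illustration (a toy Python sketch, not a model of Watson or of a brain): a "predictor" armed with the same deterministic rule and the same starting state reproduces the system step for step, which is all the Laplace's-demon point amounts to.

Code:
def step(state, stimulus):
    # An arbitrary but fixed transition rule, standing in for a program's code
    # or, in this toy picture, for the physics of a brain.
    return (state * 31 + stimulus) % 1000, (state + stimulus) % 7

def run(initial_state, stimuli):
    state, outputs = initial_state, []
    for s in stimuli:
        state, out = step(state, s)
        outputs.append(out)
    return outputs

stimuli = [3, 1, 4, 1, 5, 9, 2, 6]
system = run(42, stimuli)       # the "real" system
prediction = run(42, stimuli)   # the demon, given the same rule and state
assert system == prediction     # the prediction succeeds, trivially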

A final point here, for the record, is that in retrospect the digression about strong emergence was a red herring. The kinds of things I was talking about are better described as synergy, which is synonymous with weak emergence, if one wants to bother with the distinction at all. The impressive thing about Watson is not particularly the integration of its various components (question analysis, query decomposition, hypothesis generation, etc.), since these are all purpose-built components of a well-defined architecture. The impressive thing is how far removed the system is from the underlying technology: the machine instructions, and below that, the logic gates inside the processors. The massively parallel platform on which Watson runs is very far removed from a calculator, yet in principle it's built from exactly the same kinds of components.

The principle here is, as I said earlier, that a sufficiently large quantitative increase in a system's complexity leads to fundamental qualitative changes in the nature of the system. Among other things, dumb, predictable systems can become impressively intelligent problem-solvers. This, in my view, is the kind of emergence that accounts for most of the wondrous properties of the brain, not some fundamentally different, mysterious process.

Consciousness is very likely just another point on this continuum, but we've thrashed that one to death. Marvin Minsky used to say that consciousness is overrated: what we think of as our awesome power of self-awareness is mostly illusory. Indeed, we obviously have almost no idea of what actually goes on inside our own heads. IMHO, Chalmers' arguments about physicalism and consciousness are just philosophical silliness. Where does consciousness reside? Nowhere. It's just a rather trivial consequence of our ability to reason about the world.
