05-20-2019, 05:41 PM
Half Man Half Wit
Originally Posted by wolfpup:
I disagree, but then I probably have a different notion of "emergence" than philosophers like Chalmers.
Originally Posted by wolfpup:
Now, while I may not agree with David Chalmers on various issues, at least he has his definitions right on strong emergence
So, does it bother you at all that you have to flat-out contradict yourself within the span of three posts to try and save your position?

This is roughly what happens when logic gates are assembled into a digital computer. The business of being able to "infer" from component properties what the properties of the aggregate system will be is really rather nebulous and arbitrary, and consequently so is the distinction between weak and strong emergence, IMO.
Well, it's really not, though. I gave the definition (or at least, a very widely accepted definition) above: if you can discover it via simulation, it's (at best) weakly emergent. The reason for this definition is that in such a case, you only need to apply rote operations of logical deduction to get at the 'emergent' properties; so there's nothing new in that sense. The higher-level properties stand to the lower-level properties in a relation of strict logical implication; in other words, there's nothing qualitatively new whatsoever.

This isn't something I've made up, you know. But I don't think heaping on more cites would help any, seeing as you haven't bothered to address the one I already provided.

One might readily infer that since logic gates switch signals according to logical rules, it's reasonable to expect that the resulting system would be an ace at doing binary arithmetic. But is it reasonable to infer on that same basis that those same logic gates would be the foundation for a system capable of playing grandmaster-level chess, long believed to be the exclusive domain of a high caliber of human intelligence? Or one that would beat Ken Jennings at Jeopardy? If so, Hubert Dreyfus and a whole following of like-minded skeptics sure as hell didn't infer it!
Dreyfus might have been wrong on some things, but even most proponents of the possibility of strong artificial intelligence today acknowledge that his criticisms against 'good old fashioned AI' (GOFAI) were largely on point. Hence, the move towards subsymbolic and connectionist approaches to replace expert systems and the like.

But that's rather something of a tangent in this discussion. The basic point is that, of course it's reasonable to think of chess playing as being just the same kind of thing as binary arithmetic. After all, that's what a computer program for playing chess is: a reduction of chess playing to performing binary logical operations. Really complicated ones, but again, that's the definition of a difference merely in quantity, not quality.

The conclusion here, taking into account all that it implies, is so wrong in my view that it doesn't seem sufficient to say that I disagree with it; I respectfully have to say that I'm just astounded that you're saying it.
In contrast to you, however, I'm not merely saying it, stating my ideas as if they were obvious even in the face of widespread disagreement in the published literature; rather, I provide arguments supporting them. Those arguments are then summarily ignored as my interlocutors flat-out state their positions as if they were just obviously true.

But the first of those, if taken seriously, is a flippant dismissal of the entirety of the computational theory of cognition, one that Fodor has described as "far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion".
I'm sure some luminary once described caloric as the best theory of work and heat we've got. But that doesn't mean there's anything to that notion.

And the basis for your bizarre conclusion appears to be the belief that any computation requires an external agent to fix an interpretation -- a belief that I maintain has already been disproved by the simple fact that all interpretations that are consistent with the computational results are all exactly equivalent.
I have addressed that issue, conclusively dispelling it: if you consider the computations I proposed to be equivalent, then computationalism just collapses to naive identity physicalism. That's to say nothing of the sheer chutzpah of considering the manifestly different functions I've proposed, which are distinct under any formalization of computation ever put forward, and which are quite obviously different sorts of operations, to be in any way, shape, or form 'the same'. The function f' is not binary addition, but it is, just as much as addition, obviously an example of a computation. That I should have to point this out is profoundly disconcerting.
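To make the f/f' ambiguity concrete, here is a minimal sketch in Python. The two-bit operand width, the lamp encoding, and all the names here are my own illustration, not the exact device from earlier in the thread; the point carries over regardless. The physical lamp behaviour is held completely fixed, and only the reading of 'lit' as 1 or as 0 changes: one reading yields binary addition, the other a different function.

```python
# A fixed "physical" device: maps two banks of 2 lamps each to 3
# output lamps. The table is chosen so that under the "lit = 1"
# reading it performs binary addition.
def device(lamps_a, lamps_b):
    # lamps_* are tuples of booleans (True = lit), most significant first
    a = (lamps_a[0] << 1) | lamps_a[1]
    b = (lamps_b[0] << 1) | lamps_b[1]
    s = a + b  # 0..6, fits in 3 lamps
    return (bool(s & 4), bool(s & 2), bool(s & 1))

def encode(n, width, lit_means_one=True):
    bits = [bool((n >> i) & 1) for i in reversed(range(width))]
    return tuple(b if lit_means_one else not b for b in bits)

def decode(lamps, lit_means_one=True):
    n = 0
    for lamp in lamps:
        n = (n << 1) | (lamp == lit_means_one)
    return n

# Interpretation A: a lit lamp means 1 -> the device computes addition f.
def f(x, y):
    return decode(device(encode(x, 2), encode(y, 2)))

# Interpretation B: a lit lamp means 0 -> the very same lamp behaviour
# computes a different function f'.
def f_prime(x, y):
    out = device(encode(x, 2, False), encode(y, 2, False))
    return decode(out, False)

for x in range(4):
    for y in range(4):
        assert f(x, y) == x + y          # addition under reading A
        assert f_prime(x, y) == x + y + 1  # a different function under B
```

Nothing in the physics privileges either reading: the same lamp-to-lamp table is addition under one interpretation and, in this particular construction, the distinct function f'(x, y) = x + y + 1 under the other.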

The claim that the mind cannot be computational because of the "external agent" requirement is a futile attempt to parallel the homunculus argument as it's sometimes applied to the theory of vision. Clearly, however, vision is actually a thing, so somewhere along the line the attempt to prove a fallacy has gone off the rails.
The homunculus argument succeeds in pointing out a flaw in certain simple representational theories of vision, which have consequently largely been discarded. Pointing out the occurrence of vicious infinite regresses is a common tool in philosophy, and all I'm doing is pointing out that such a regress trivializes the computational theory of mind.

It's exactly the homunculus fallacy, and it's a fallacy because computational results are intrinsically objective -- that is to say, they are no more and no less intrinsically objective than the semantics attached to the symbols.
This is a bizarre statement. Quite clearly, the semantics of symbols is explicitly subjective. There is nothing about the word 'dog' that makes it in any sense objectively connect to four-legged furry animals. Likewise, there is nothing about a light that makes it intrinsically mean '1' or '0'.

But here's the kicker: I would posit that exactly the same statement could be made in principle about the human brain.
Sure. That's not much of a kicker, though. After all, it's just a restatement of the notion that there's no strong emergence in the world.

Unless you don't believe that the brain is deterministic -- but that would be an appeal to magic.
It doesn't really matter, but indeterminism doesn't really entail 'magic'. On many of its interpretations, quantum mechanics is intrinsically indeterministic; that doesn't make it any less of a perfectly sensible physical theory.

The kinds of things I was talking about are better described as synergy, which is synonymous with weak emergence, if one wants to bother with the distinction at all. The impressive thing about Watson is not particularly the integration of its various components -- speech recognition, query decomposition, hypothesis generation, etc. -- as these are all purpose-built components of a well-defined architecture. The impressive thing is how far removed the system is from the underlying technology: the machine instructions, and below that, the logic gates inside the processors. The massively parallel platform on which Watson runs is very far removed from a calculator, yet in principle it's built from exactly the same kinds of components.
And that's not surprising in any way, because it's doing qualitatively exactly the same kind of thing---just more of it.

Look, I get that the successes of modern computers look impressive. But for anything a computer does, there's a precise story of how that behavior derives from the lower-level properties. I might not be able to tell the story of how Watson does what it does, but I know exactly what that story looks like: it looks exactly the same as for a pocket calculator, or my device above. Describing the functional components of a computer enables us to see exactly what computers are able to do. Turing did just that with his eponymous machines; ever since, we have known exactly how any computer does what it does. There's no mystery there.
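That gate-level story can be spelled out end to end. As a sketch (the NAND decomposition is the textbook one, not anything specific to Watson or to any particular machine), here is multi-bit addition derived from nothing but a single gate type, showing how the higher-level arithmetic is strictly entailed by the component-level rules:

```python
# The single primitive: everything below reduces to this one gate.
def nand(a, b):
    return not (a and b)

# Standard gates built purely from NAND.
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

# A full adder: one bit of sum plus a carry, from gates alone.
def full_adder(a, b, cin):
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))

# Ripple-carry addition: chain full adders across the bit positions.
def add(x, y, width=8):
    carry, result = False, 0
    for i in range(width):
        s, carry = full_adder(bool((x >> i) & 1), bool((y >> i) & 1), carry)
        result |= int(s) << i
    return result

assert add(37, 105) == 142
```

Every higher-level fact about what `add` does is logically deducible from the NAND rule plus the wiring; that is exactly the sense in which the aggregate behavior is weakly, not strongly, emergent.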

That's the sort of story you'd have to tell to make your claim regarding the emergence of consciousness have any sort of plausibility. But instead, you're doing the exact opposite: you try to use complexity to hide, not to elucidate, how consciousness works. You basically say, we can't tell the full story, so we can't tell any story at all, so who knows, anything might happen really, even consciousness. It's anyone's guess!

The principle here is, as I said earlier, that a sufficiently great increase in the quantitative nature of a system's complexity leads to fundamental qualitative changes in the nature of the system.
You keep claiming this, but any example you give establishes the exact opposite: that there is no fundamental qualitative difference between the components and the full system. The components exactly logically entail the properties of the whole; so they are fundamentally the same kind of thing.

Never mind that even if there were some sort of qualitative difference, this would still not make any headway at all against my argument (which, I'll just point out once again for the fun of it, you still haven't actually engaged with). At best, the resulting argument would be something like: the simple example system doesn't possess any fixed computational interpretation; however, qualitatively novel phenomena emerge once we just smoosh more of that together. So maybe some of these qualitatively novel phenomena are just gonna solve that problem in some way we don't know.

That is, even if you were successful in arguing for qualitative novelty in large-scale computational systems, the resulting argument would at best be a fanciful hope.

Originally Posted by begbert2:
Pretty much. My argument is that the brain can interpret its own symbols, and an exact copy of that brain can interpret those same symbols with the same effects.
And that's already where things collapse. If the interpretation of these symbols is based on computation, then the symbols must already be interpreted beforehand; otherwise, there just won't be any computation there to interpret them.

Why does the symbol '3' refer to the number three? It just does. It's the symbol we've collectively chosen.
Sure. But the question is, how does this choosing work? How does one physical vehicle come to be about, or refer to, something beyond itself? In philosophical terms, this is the question of intentionality---the mind's other problem.

A given mind, a given cognition, knows what it means by a given symbol/encoding. Let's call that cognition M. M happens to function in such a way that red tennis shoes are encoded with a specific code which I'll refer to as RTS.
The problem is, rather, that M is tasked with interpreting the symbols that make the brain compute M. Consequently, the brain must be computing M before M can interpret the brain as computing M. Do you see that this is slightly problematic?

Of course it's possible, theoretically speaking. To say otherwise is absurd, because theoretically speaking you can emulate a model of reality itself (or at least a local chunk of it) within the computer.
Well, that's just massively question-begging. In point of fact, nothing ever just computes anything; systems are interpreted as computing something. I mean, I've by now gotten used to people simply ignoring the actual arguments I make in favor of posturing and unsubstantiated claims, but go back to the example I provided. There, two different computations are attributed, on equivalent justification, to one and the same physical system. Thus, what computation is performed is no more an objective property of the system than the symbol '3' denoting a certain number is a property of that symbol.

Which means that, theoretically speaking, you absolutely can create an exact physical replica of your brain within the simulation within the computer. So you bet your bunions that your cognition is digitizable.
No. You can interpret certain systems as implementing a simulation of a brain. That doesn't mean the system actually is one. You can interpret an orrery as a model of the solar system. That doesn't mean it actually is a solar system.

All of this is just a massive case of confusing the map for the territory. What you're saying is exactly equivalent, for example, to saying that there are certain symbols such that they have only one objectively correct meaning.