05-22-2019, 07:11 PM
wolfpup
Originally Posted by Half Man Half Wit
Given the lower level properties and their arrangement, do the emergent properties follow necessarily? If I fix the base facts, do I fix the emergent facts, or are there more facts left to fix?

If the former, you don't get the sort of emergence you claim, because then, the emergent facts follow from the base facts---and for every emergent fact, you can state the precise way it follows from the base facts. Thus, that's the story you need to at least provide some plausibility argument for in order to have your claim of the emergence of consciousness be contentful.

If the latter, then the base facts don't determine the emergent facts. Then, of course, consciousness just might come 'round at some point. Somehow, for basically no reason. But also, it's no longer the case that fixing the physical facts ('particles, fields, and their arrangement') fixes all the facts, and physicalism is wrong.

It's not a matter of believing Chalmers or not. He's not stating this out of the blue; he's just clearly articulating what the options are. There's no middle ground. You either eat the cake, or keep it.
If an artificial neural network like the ones used in AlphaGo starts from essentially zero skill and proceeds to play chess or Go at championship level after a period of deep learning, where in the original base configuration can you find any "Go-like" strategic knowledge? Trivially, the game rules are built into the program, and trivially, one might guess that building neural connections could lead to some interesting phenomena. But what could you possibly see in the components in their initial state that would let you make confident predictions about skill at that particular game?

It would be fair to ask, of course, what the developers saw in it, and why they built it that way. The answer is that they saw only a general-purpose learning mechanism, not something that bore any of the specific primordial traits of what they hoped to achieve -- just as they built a massively parallel general-purpose computer system to run it on. What actually came together was something qualitatively new, something many had believed was at least another decade away.
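The point can be illustrated at toy scale. The sketch below is my own miniature, nothing like AlphaGo's actual self-play system: a tiny 2-4-1 neural network whose initial weights are just random numbers, containing nothing task-specific, yet which learns the XOR function by plain gradient descent. Inspect the initial configuration and you find no trace of what it will later be able to do.

```python
import math
import random

# A toy general-purpose learner (my sketch, not AlphaGo's code): the initial
# configuration is random numbers with nothing "XOR-like" in it, yet the same
# structure ends up embodying the function after training.

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 4  # hidden units
# Initial configuration: small random weights, no task knowledge.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(H)]
    y = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

lr = 0.5
for _ in range(20000):  # stochastic gradient descent over the four cases
    for x, t in data:
        h, y = forward(x)
        d_o = (y - t) * y * (1 - y)  # output-layer error term
        for j in range(H):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # hidden-layer error term
            w_o[j] -= lr * d_o * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h * x[i]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o

err_after = total_error()
predictions = [round(forward(x)[1]) for x, _ in data]
print(err_before, err_after, predictions)
```

Orders of magnitude simpler than AlphaGo's self-play reinforcement learning, of course, but the structural point is the same: the "knowledge" exists only after training, not in the base configuration.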
Originally Posted by Half Man Half Wit
I've never doubted the role of computational theory in cognitive science. The problem is just that the model doesn't imply the character of the thing modeled, so computationalism just amounts to a category error. Just because you can model the solar system with an orrery doesn't mean gravity works via wires and gears.
(Emphasis mine.) Oh, my. You most certainly have done exactly that, many times throughout this thread:
But if minds then have the capacity to interpret things (as they seem to), they have a capacity that can't be realized via computation, and thus are, on the whole, not computational entities.

Well, I gave an argument demonstrating that computation is subjective, and hence, only fixed by interpreting a certain system as computing a certain function. If whatever does this interpreting is itself computational, then its computation needs another interpretive agency to be fixed, and so on, in an infinite regress; hence, whatever fixes computation can't itself be computational.

The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it's just that the rest of the world is a bit slower to catch up with the second one.
Ignoring the incorrect assertions about Putnam that I dispelled earlier, waffling over "category errors" is disingenuous and meaningless here. The position of CTM isn't that computational theories help us understand the mind in some vague abstract sense; it is that the brain performs computations, period, full stop -- the basic premise being that intentional cognitive processes are literally syntactic operations on symbols. This is unambiguously clear, and you unambiguously rejected it. The cite I quoted in #143 says very explicitly that "the paradigm of machine computation" became, over a thirty-year period, a "deep and far-reaching" theory in cognitive science. That supports Fodor's statement that it's hard to imagine any kind of meaningful cognitive science without it, and that denial of this fact -- which is what you appear to be doing -- is not worth serious discussion.
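To make "syntactic operations on symbols" concrete, here is a minimal sketch (my own toy example, not drawn from any particular CTM text): a forward-chaining rule engine that derives new conclusions purely by matching symbol patterns. The engine never consults what the symbols mean; replace every token with gibberish and it behaves identically.

```python
# A toy illustration of purely syntactic symbol manipulation: rules fire on
# pattern alone, with no access to the symbols' meanings. The rule contents
# here are hypothetical examples, invented for illustration.

rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"raining", "no_umbrella"}, "gets_wet"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premise symbols are all present."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain({"socrates_is_human", "raining"}, rules)
print(sorted(result))  # ['raining', 'socrates_is_human', 'socrates_is_mortal']
```

Note that "gets_wet" is not derived, since the "no_umbrella" token is absent: the engine tracks only the shapes of symbols, which is exactly the sense in which CTM says cognition is syntactic.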

Originally Posted by Half Man Half Wit
And that's supposed to be arguing for what, exactly? Because computer science people have sour grapes with Dreyfus, you can't criticize computationalism...? Seriously, I can't figure out what your aim is in bringing this up again and again.
It's a response to your accusation that "the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them". No, it isn't. Jerry Fodor was widely regarded as one of the founders of modern cognitive science, or at least the originator of many of its foundational ideas of the past half-century. Dreyfus wasn't the founder of anything. Some years ago I asked someone about Dreyfus, a person who, I can say without exaggeration, is one of the principal theorists in cognitive science today. I can't give details without betraying a confidence, but I will say this: he knew Dreyfus and had sparred with him in the academic media. His charitable view was that Dreyfus was a sort of congenial uncle figure, "good-hearted but not very bright".

Originally Posted by Half Man Half Wit
OK, so it's actually just the physical evolution that you want to call 'computation' for some reason. And me computing sums using the device isn't computation. In that case, as I pointed out, you're not defending computationalism, but identity theory physicalism. Combined with your notion of strong-but-not-really-that-strong emergence, you don't really have any consistent position to offer at all.
I've been absolutely consistent that computationalism and physicalism are not at odds, and I disagree with your premise that they are. Nor do I believe, for the reasons already given, that what Chalmers calls "strong emergence" need be at odds with physicalism. Where my view has evolved is that I now doubt there's much meaningful distinction between "weak" and "strong" emergence.

Originally Posted by Half Man Half Wit
It means we were wrong, that's it. It's hard to see how chess, or Jeopardy, translates to simple Boolean logical rules. But the fact that you can do it by computer simply demonstrates that it does; that deciding what next move to make in a game of chess is equivalent to tracking through some ungodly-huge Boolean formula that one could explicitly write down.
It's worse than that. Even talking about some Boolean formula or algorithm to emulate consciousness is silly, because consciousness isn't even a behavior; it's an assertion of self-reflection. The truth of that assertion may or may not become evident after observing actual behaviors. My guess, again, is that the issue is less profound than it's made out to be. It wouldn't surprise me if, at some point in the future, a generalized strong AI asserts that it is conscious, and we simply won't believe it, or will pretend it has a "different kind" of consciousness.
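For what it's worth, the Boolean-formula claim in the quoted passage can be made concrete at toy scale. Here's a sketch (my own miniature, nothing remotely like chess in size): in the take-1-to-3 subtraction game, optimal play is to take n mod 4 stones, or 1 stone from a losing position where n mod 4 is zero. Since n mod 4 depends only on the two low bits of the pile size, the entire move choice reduces to a small Boolean formula.

```python
# Hypothetical miniature of the "ungodly-huge Boolean formula" idea: the
# optimal move in the take-1-to-3 subtraction game, computed purely with
# Boolean operations on the bits of the pile size. Chess would be the same
# in kind, just astronomically larger.

def optimal_take(n):
    n0 = bool(n & 1)                 # low bit of pile size
    n1 = bool(n & 2)                 # second bit of pile size
    losing = (not n0) and (not n1)   # n mod 4 == 0: no winning move exists
    t0 = n0 or losing                # low bit of the number of stones to take
    t1 = n1                          # high bit of the number of stones to take
    return (2 if t1 else 0) + (1 if t0 else 0)

for pile in range(1, 9):
    print(pile, optimal_take(pile))
```

Taking n mod 4 always leaves the opponent a multiple of four, which is a losing position; the formula above just encodes that strategy as two Boolean outputs rather than arithmetic.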