Half Man Half Wit
05-22-2019, 11:48 PM
Quote:
Originally Posted by begbert2
I don't know what you mean by "identical". If you mean "identical at the level of which electron is moving down which wire at the same time", then you can't be implementing both f and f' at the same time and having them both producing the same single output. It's literally impossible.
And yet, it's literally what happens, as I showed by example! Of course, that example appears to be, unfortunately, invisible to you, or else I can't really explain your stubborn refusal to just go look at it.
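Since that example keeps going unread, here it is once more in schematic form (the truth table below is only an illustration in the same spirit, not a verbatim copy of the device from my earlier post): one and the same physical input-output behaviour, under two equally legitimate labelings of the physical states, realizes two different functions.

Code:
# One physical device, two computations. The device's behaviour is a fixed
# mapping from physical switch states to physical lamp states; which abstract
# function it "computes" depends entirely on how those states are labeled.
# (Illustrative truth table only; the device in my earlier post differs in
# detail, but the point is the same.)

# Physical behaviour: (switch_A, switch_B) -> lamp.
device = {
    ('down', 'down'): 'off',
    ('down', 'up'):   'on',
    ('up', 'down'):   'on',
    ('up', 'up'):     'off',
}

# Interpretation 1: up = 1, down = 0, on = 1, off = 0.
label_in_1 = {'down': 0, 'up': 1}
label_out_1 = {'off': 0, 'on': 1}

# Interpretation 2: the opposite labeling of the very same physical states.
label_in_2 = {'down': 1, 'up': 0}
label_out_2 = {'off': 1, 'on': 0}

def computed_function(label_in, label_out):
    """The abstract function the device computes under a given labeling."""
    return {(label_in[a], label_in[b]): label_out[lamp]
            for (a, b), lamp in device.items()}

f = computed_function(label_in_1, label_out_1)        # XOR:  f(x, y) = x XOR y
f_prime = computed_function(label_in_2, label_out_2)  # XNOR: f'(x, y) = NOT (x XOR y)

print(f)        # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(f_prime)  # {(1, 1): 1, (1, 0): 0, (0, 1): 0, (0, 0): 1}

Nothing about which electron moves down which wire differs between the two readings; all that differs is the mapping from physical states to abstract symbols.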

Quote:
So what you're saying is that if everyone who knows about a computer dies the computer magically stops working because it depended on the "interpretation" (which only human brains can do) knowing about it to function.
Again, none of this has any relation to my argument.

Quote:
Originally Posted by begbert2
That's less a disproof of my argument and more a disproof of IIT - or your interpretation of IIT, as the case may be.
Well, it at least demonstrates admirable confidence that you think you understand IIT better than its founders, with nothing but a cursory glance!

Regardless, this is not my interpretation of IIT, but one of its core points. Take it from Christof Koch:

Quote:
If I build a perfect software model of the brain, it would never be conscious, but a specially designed machine that mimics the brain could be?

Correct. This theory clearly says that a digital simulation would not be conscious, which is strikingly different from the dominant functionalist belief of 99 percent of people at MIT or philosophers like Daniel Dennett. They all say, once you simulate everything, nothing else is required, and it's going to be conscious.
Quote:
Originally Posted by wolfpup
If an artificial neural network like the ones used in AlphaGo starts from essentially zero skill and proceeds to play chess or Go at a championship level after a period of deep learning, where in the original base configuration can you find any "Go-like" strategy knowledge?
Nowhere. But given the base configuration, it's completely clear how one could teach it Go: training changes that base configuration (altering neuronal connection weights via backpropagation or some similar process), and afterwards the new base configuration in every case forms a sufficient explanation of AlphaGo's Go-playing capacity, without any need to appeal to emergence whatsoever.

You can prove exactly what sort of tasks a neural network is capable of learning (arbitrary functions, basically); you can, at every instant, tell exactly what happens at the fundamental level in order for that learning to occur; and you can tell exactly how it performs at a given task solely by considering its low-level functioning. This is an exact counter-example to the claims you're making. For a good explanation of the process, you might be interested in the 'Deep Learning' series by 3Blue1Brown.
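To make that concrete, here's a minimal sketch (in Python, with a toy task and made-up layer sizes that have nothing to do with AlphaGo's actual architecture) of the kind of low-level bookkeeping involved: everything the network 'learns' consists in nothing over and above these explicit weight updates.

Code:
# Minimal sketch: a tiny two-layer network learning XOR by backpropagation.
# The task, layer sizes, and learning rate are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and targets for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "base configuration": two weight matrices and biases, initially random.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: behaviour is fixed entirely by W1, b1, W2, b2.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. every weight.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # The weight updates: this is all that the "learning" consists in.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After training, the new configuration suffices to explain the new behaviour.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

At no point does anything enter the picture beyond the weights and the update rule; the trained network's skill just is its new configuration.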

The people who built AlphaGo didn't just do it for fun, to see how it would do. They knew exactly what was going to happen---qualitatively, although I'd guess they weren't exactly sure how good it was going to get---and the fact that they could have this confidence stands in direct opposition to your claims. Nobody just builds a machine to see what's going to happen; they build one precisely because their understanding of the components allows them to predict, with pretty good confidence, what it will do. Sure, there's no absolute guarantee---but as you said, surprise isn't a sufficient criterion for emergence. Sometimes a bridge collapses to everybody's surprise; that doesn't entail that bridges have any novel qualitative features over and above those of bricks, cement, and steel.

Quote:
(Emphasis mine.) Oh, my. You absolutely certainly have done exactly that, many many times throughout this thread:
I have also emphasized that I think computational modeling is effective and valuable, which you conveniently missed. The one claim I take issue with is that the brain is wholly computational in nature, and that, in particular, consciousness is a product of computation. That is, however, a claim that cognitive science can proceed without making, and one whose truth has no bearing on its successes so far.

You've missed it at least two times now, and you'll probably miss it a third time, but again: an orrery is a helpful instrument for modeling the solar system, and one might get it into one's head that the solar system itself must be some giant world-machine run on springs and gears; but the usefulness of the orrery is completely independent of the falsity of that assertion.

Quote:
This is unambiguously clear, and you unambiguously rejected it. The cite I quoted in #143 says very explicitly that "the paradigm of machine computation" became, over a thirty-year period, a "deep and far-reaching" theory in cognitive science, supporting Fodor's statement that it's hard to imagine any kind of meaningful cognitive science without it, and that denial of this fact -- such as what you appear to be doing -- is not worth a serious discussion.
Even Fodor, as you noted, didn't believe the mind is wholly computational. That's the whole point of The Mind Doesn't Work That Way: computational theory is a 'large fragment' of the truth, but doesn't suffice to tell the whole story (in particular, abduction, I think, is one thing Fodor held the mind can do that computers can't). So leaning on Fodor to support your assertion that computationalism is 'the only game in town' isn't a great strategy.

Quote:
It's a response to your accusation that "the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them". No, it isn't. Jerry Fodor was widely regarded as one of the founders of modern cognitive science, or at least of many of its foundational new ideas in the past half-century. Dreyfus wasn't the founder of anything. I asked someone about Dreyfus some years ago, someone who I can say without exaggeration is one of the principal theorists in cognitive science today. I can't give any details without betraying privacy and confidentiality, but I will say this: he knew Dreyfus, and had argumentative encounters with him in the academic media. His charitable view was that Dreyfus was a sort of congenial uncle figure, "good-hearted but not very bright".
I'm not going to defend Dreyfus here, but you have been given a seriously one-sided view. Dreyfus was an early proponent of the embodied-cognition views that are increasingly gaining popularity, and much of his critique of GOFAI is now acknowledged even by AI researchers to have largely been on point.

Quote:
I've been absolutely consistent that computationalism and physicalism are not at odds, and I disagree with your premise that they are.
It's not a premise, it's a conclusion. In my previous post I asked you a couple of questions about the connection between the fundamental-level facts and the emergent facts, which you neglected to answer, because the answers expose what should really be clear by now: either the emergent properties are entailed by the base properties---in which case the sort of emergence you claim consciousness to have doesn't exist, and I'm right in arguing that you owe us at least some inkling of how consciousness may emerge from lower-level facts, as for instance IIT provides (see my example above). Or there is genuine novelty in emergence---but then specifying the fundamental, physical facts doesn't suffice to fix all the facts about the universe, and physicalism is wrong.

Which is a separate issue from the fact that such emergence, obviously, doesn't occur in computers, which are the poster children of reducibility.

I'm still waiting for you to tell me what my example system computes, by the way. I mean, this is usually a question with a simple answer, or so I'm told: a calculator computes arithmetical functions; a chess computer chess moves. So why's it so hard in this case? Because, of course, using the same standard you use in everyday computations will force you to admit that it's just as right to say that the device computes f as that it computes f'. And there, to any reasonable degree, the story ends.