wolfpup | 05-21-2019, 02:23 PM | Post #143
I've been away from the board for the past day due to events of actual life, but let me respond briefly to that last volley.
Quote:
Originally Posted by Half Man Half Wit
So, does it bother you at all that you have to flat out contradict yourself in the span of three posts to try and save your position?
It might, were it not for the fact that taking those quotes out of context to rob them of their intended meaning merely reduces the discussion to a game of "I Win teh Internets!".

I liked the Chalmers definition because it directly contradicts your claim that an emergent property must have visible elements in the underlying components, a claim that I regarded as nonsense. Reading further in the Chalmers paper, however, I don't agree with all of his characterizations of emergent properties, particularly that what he calls "strong emergence" must be at odds with physicality. So my two statements, in context, are in no way contradictory, but you get three points and a cookie for highlighting them that way.

Quote:
Originally Posted by Half Man Half Wit
This isn't something I've made up, you know. But I don't think heaping on more cites would help any, seeing how you've already not bothered to address the one I provided.
If you're referring to your Mark Bedau quote, that wasn't a cite; it was a cryptic one-liner lacking either context or a link.

Quote:
Originally Posted by Half Man Half Wit
In contrast to you, however, I'm not merely saying it, stating my ideas as if they were just obvious even in the face of widespread disagreement in the published literature, but rather, provide arguments supporting them.
Nor am I "merely saying it". I and others have provided arguments; you just don't like them. And speaking of published literature, if you read the cognitive science literature you'll find that CTM is a rather more substantive contribution to the science than a "theory of caloric". See the next comment.

Quote:
Originally Posted by Half Man Half Wit
I'm sure some luminary once described caloric as the best theory of work and heat we've got. But that doesn't mean there's anything to that notion.
If, in order to support your silly homunculus argument about computation, you have to characterize one of the most respected figures in modern cognitive science as a misguided crackpot for advancing CTM and having the audacity to disagree with you, I hope you realize what that does to your argument. This about sums it up:
The past thirty years have witnessed the rapid emergence and swift ascendency of a truly novel paradigm for understanding the mind. The paradigm is that of machine computation, and its influence upon the study of mind has already been both deep and far-reaching. A significant number of philosophers, psychologists, linguists, neuroscientists, and other professionals engaged in the study of cognition now proceed upon the assumption that cognitive processes are in some sense computational processes; and those philosophers, psychologists, and other researchers who do not proceed upon this assumption nonetheless acknowledge that computational theories are now in the mainstream of their disciplines.
https://publishing.cdlib.org/ucpress...&brand=ucpress
Quote:
Originally Posted by Half Man Half Wit
I have addressed that issue, conclusively dispelling it: if you consider the computations I proposed to be equivalent, then computationalism just collapses to naive identity physicalism. Besides of course the sheer chutzpah of considering the manifestly different functions I've proposed, which are different on any formalization of computation ever proposed, and which are just quite obviously distinct sorts of operations, to be in any way, shape, or form, 'the same'. The function f' is not binary addition, but it, just as well as addition, is obviously an example of a computation. That I should have to point this out is profoundly disconcerting.
What I find disconcerting is that in order to support this argument, you have to discredit what is arguably one of the most important foundations of cognitive science, with which your argument is directly at odds.
Quote:
Originally Posted by Half Man Half Wit
This is a bizarre statement. Quite clearly, the semantics of symbols is explicitly subjective. There is nothing about the word 'dog' that makes it in any sense objectively connect to four-legged furry animals. Likewise, there is nothing about a light that makes it intrinsically mean '1' or '0'.
Sure, but that's just the point. The external agent in your switches-and-lights example is just assigning semantics to the symbols. The underlying computation is invariant regardless of his interpretation, as I keep saying. All semantic assignments that work describe the same computation, and if you dislike my constantly repeating that, another way of saying it is the self-evident fact that no computation is occurring outside the box, and the presence of your interpretive agent doesn't change the box.
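To make this concrete, here is a minimal sketch in Python. The particular truth table, the function names, and the alternative reading of the lamps are my own inventions for illustration, not your exact f'; but the structure of the point is the same: one fixed physical mapping, with two different semantic assignments laid over it.
Code:
# The "box": a fixed physical mapping from switch states to lamp states.
# (This particular half-adder-like table is an assumption for illustration.)
BOX = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),
}

# Interpretation 1: read the lamps as a 2-bit binary sum (carry, sum).
def as_addition(switches):
    carry, s = BOX[switches]
    return 2 * carry + s

# Interpretation 2: read each output lamp with on/off swapped, which
# yields a different abstract function over the very same hardware.
def as_f_prime(switches):
    carry, s = BOX[switches]
    return 2 * (1 - carry) + (1 - s)

for inputs in BOX:
    print(inputs, "->", as_addition(inputs), "or", as_f_prime(inputs))
The dict BOX never changes between the two readings; the only thing that varies is which abstract function the interpreter takes the lamps to denote.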

Quote:
Originally Posted by Half Man Half Wit
Look, I get that the successes of modern computers look impressive. But for anything a computer does, there's a precise story of how this behavior derives from the lower level properties. I might not be able to tell the story of how Watson does what it does, but I know exactly what this story looks like---it looks exactly the same as for a pocket calculator, or my device above. Describing the functional components of a computer enable us to see exactly what computers are able to do. Turing did just that with his eponymous machines; ever since, we have exactly known how any computer does what it does. There's no mystery there.
This is a complete misrepresentation of the significance of Turing's insight. Its brilliance was due to the fact that it reduced the notion of "computation" (in the computer science sense) to the simplest and most general abstraction, stripping away all the irrelevancies, and allowed us to distinguish computational processes from non-computational ones. It "enable[s] us to see exactly what computers are able to do" only in the very narrow sense of being state-driven automata that are capable of executing stored programs. It tells us absolutely nothing about the scope and the intrinsic limits of what those automata might be able to achieve in terms of, for example, problem-solving skills demonstrating intelligence at or beyond human levels, self-awareness, or creativity. As I said, philosophers like Dreyfus concluded in the 60s that computers would never be able to play chess beyond a child's level, and AFAIK Searle is still going on about his Chinese Room argument proving that computational intelligence isn't "real" intelligence. Turing machines have been known since 1936, but the debate about computational intelligence rages on in the cognitive and computer sciences.
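And the abstraction really is that spare, which is exactly why it is silent on high-level capabilities. Here's a minimal sketch in Python of the machinery itself (a toy machine of my own devising, not anything from Turing's paper): a finite control, a tape, and a transition table. This one just flips every bit on the tape and halts.
Code:
def run_turing_machine(tape, rules, state="scan", blank="_"):
    """rules maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Transition table: flip 0 <-> 1 and move right; halt on reading a blank.
RULES = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("100110", RULES))  # -> 011001_
Everything a Watson does bottoms out in a transition table like this one, just unimaginably larger; knowing that fact tells you nothing about whether such a table can exhibit insight or creativity.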
Quote:
Originally Posted by Half Man Half Wit
You keep claiming this, but any example you give establishes the exact opposite: that there is no fundamental qualitative difference between the components and the full system. The components exactly logically entail the properties of the whole; so they are fundamentally the same kind of thing.
My statement is a philosophical one, related to what I said just above, observing that a sufficient change in complexity enables new qualities (capabilities) not present in the simpler system. You're focused on the Turing equivalence between simple computers and very powerful ones, while I'm making a pragmatic observation about their qualitative capabilities. As I just said, Turing equivalence tells us absolutely nothing about a computer's advanced intelligence-based skills. As computer and software technology grows ever more powerful, we are faced again and again with the situation that Dreyfus faced for the first time in the 1967 chess game against MacHack: that of essentially saying, "I'm surprised that a computer could do this." Sometimes even AI researchers are surprised (the venerable Marvin Minsky actually advised Richard Greenblatt, the author of MacHack, not to pursue the project because it was unlikely to be successful). Do you not see why this is relevant in any discussion about computational intelligence, or the possible evolution of machine consciousness?