  #144  
Old 05-21-2019, 11:52 PM
Half Man Half Wit
Join Date: Jun 2007
Posts: 6,856
Quote:
Originally Posted by eburacum45 View Post
Going back to my laptop- it is only performing one ontologially significant computation in order to display the symbols on its screen; we'll call that f if you like. We know that is the one it is performing, because it is the one it is supposed to do.
So which one is the 'ontologically significant' one in my example above? What makes a computation ontologically significant?

Quote:
This is the teleological function of my computer, the one it has been designed to do. Now you state that it is also performing f', and that also seems to be true- but that computation does not affect the display on the screen at all - the two events are not causally linked.
The computation f' is linked to the device in exactly the same way as f is. I'm not sure what you mean by 'causally linked': what causally determines whether certain lights light up is the way the switches are flipped, and in that respect there is no relevant difference between the two computations.

Quote:
What possible relevance does the computation f' have to anything in the real world?
The relevance is that I can use the device to compute f', in exactly the same way as you can use it to compute f. Even at the same time, actually.
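
To make this concrete, here's a minimal sketch of the kind of thing I mean (the specific functions are only stand-ins: I'm assuming, for illustration, that f reads the lamps as binary addition with 'light on' meaning 1, and that f' reads the very same lamps under the swapped assignment; the exact f and f' from my earlier posts aren't reproduced here):

Code:
# Hypothetical illustration, not the exact device from earlier in the thread.
# One fixed physical behaviour (switches in, lamps out), two computations,
# depending only on how an external user reads the lamp states.

def box(switches):
    """The physics: 4 switch states in, 3 lamp states out. Wired so that,
    read as binary with 'on' = 1, the lamps show the sum of two 2-bit numbers."""
    a = switches[0] * 2 + switches[1]
    b = switches[2] * 2 + switches[3]
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)  # lamp states

def f(lamps):
    """Interpretation 1: 'on' means 1 -> the lamps encode a + b."""
    return lamps[0] * 4 + lamps[1] * 2 + lamps[2]

def f_prime(lamps):
    """Interpretation 2: 'on' means 0 -> the same lamps encode something else."""
    inv = tuple(1 - l for l in lamps)
    return inv[0] * 4 + inv[1] * 2 + inv[2]

lamps = box((1, 0, 1, 1))          # flip the switches for a = 2, b = 3
print(f(lamps), f_prime(lamps))    # 5 and 2: nothing in the box picks one

Nothing about the switch flips or the lamp states differs between the two readings; the difference lies entirely in the user, which is exactly the point.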

Quote:
Originally Posted by begbert2 View Post
Minds are self-interpreting. That's kind of the whole point - self-awareness, and all. A mind interprets its own memories, its own data, its own internal states. The 'function', the complex pattern of dominoes that are constantly bumping up against one another, is arranged in such a way that it examines its own data and interprets it itself.
No matter that nobody knows how such self-interpretation could possibly work, this just doesn't address the issue at all (again). On computationalism, whether a system instantiates a mind depends on what computation it implements. If there's no computation, there's no mind to self-interpret, or conjure up pixies, or what have you. So what computation a system performs needs to be definite before there even is a mind at all. But my argument, if it's right, shows that there is precisely no fact of the matter regarding what computation a system performs.

Quote:
Originally Posted by wolfpup View Post
It might, were it not for the fact that taking those quotes out of context to rob them of their intended meaning merely reduces the discussion to a game of "I Win teh Internets!".

I liked the Chalmers definition for directly contradicting your claim that an emergent property must have visible elements in the underlying components, a claim that I regarded as nonsense. Reading further in the Chalmers paper, however, I don't agree with him on ALL his characterizations of emergent properties, particularly that what he calls "strong emergence" must be at odds with physicality. So my two statements in context are in no way contradictory, but you get three points and a cookie for highlighting them that way.
The contradiction (as Chalmers highlights) is that the sort of emergence you require, emergence that doesn't follow from the fundamental-level properties, contradicts both computationalism and physicalism, so you simply can't appeal to both in your explanation of the mind without being inconsistent. You want the emergent properties not to follow from the fundamental ones? Then you can't hold on to computationalism. It's that simple.

Quote:
If you're referring to your Mark Bedau quote, that wasn't a cite, it was a cryptic one-liner lacking either context or link.
Sorry, I thought I had given the link earlier (the Chalmers paper, however, cites it---approvingly, I might add).

Quote:
If, in order to support your silly homunculus argument about computation, you have to characterize one of the most respected figures in modern cognitive science as a misguided crackpot for advancing CTM theory and having the audacity to disagree with you, I hope you realize what that does to your argument.
Quote:
What I find disconcerting is that in order to support this argument, you have to discredit arguably one of the most important foundations of cognitive science, with which it is directly at odds.
I like how you get all in a huff about my disagreement with Fodor (whom I never characterized as a crackpot or misguided; at the time, caloric was a perfectly respectable paradigm for explaining the movement of heat, simply reflecting an incomplete state of scientific knowledge, just as the computational theory does now), but think nothing yourself of essentially painting Dreyfus as a reactionary idiot. So I guess the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them?

Quote:
Sure, but that's just the point. The external agent in your switches and lights example is just assigning semantics to the symbols. The underlying computation is invariant regardless of his interpretation, as I keep saying.
Exactly. You keep merely asserting that, without any sort of argument whatsoever. So then, at least tell me: which computation does my device implement? Is it f or f'? Is it neither? If neither, then how come I can use it to compute the sum of two numbers? If both descriptions pick out the same computation, then what is it that's being computed? What is that computation? How, in particular, does it differ from merely the evolution of the box as a physical system? You might recall, though you seem so far to have missed it despite my repeated attempts to point it out, that in that case computationalism just collapses to identity physicalism.

Quote:
All semantic assignments that work describe the same computation, and if you dislike me constantly repeating that, another way of saying it is the self-evident fact that no computation is occurring outside the box, and the presence of your interpretive agent doesn't change the box.
Exactly, again. Which means that there's no fact of the matter regarding what the box computes.

Quote:
This is a complete misrepresentation of the significance of Turing's insight.
Turing's insight was precisely that one can reduce any computation to a few simple rules of symbol manipulation. Anything a computer can do, or ever will do, reduces to such rules; if you know them, you can, by rote repeated application, duplicate anything that the computer does. There is thus a clear and simple story leading from a computer's lower-level operations to its gross behavior. That removes any claim of qualitative novelty for computers.
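
To see how little is needed, here's a minimal sketch (a standard textbook-style toy machine, not anything taken from Turing's paper or this thread): a small rule table plus a loop that does nothing except look up and apply rules, which already suffices to increment a binary number.

Code:
# Toy Turing-style machine: rote rule application is all that ever happens.
# rules: (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ('seek_end', '0'): ('0', +1, 'seek_end'),
    ('seek_end', '1'): ('1', +1, 'seek_end'),
    ('seek_end', ' '): (' ', -1, 'carry'),
    ('carry',    '1'): ('0', -1, 'carry'),   # 1 plus carry -> 0, keep carrying
    ('carry',    '0'): ('1', -1, 'done'),    # 0 plus carry -> 1, stop
    ('carry',    ' '): ('1', -1, 'done'),    # carried off the left edge
}

def run(tape_str):
    tape = dict(enumerate(tape_str))   # sparse tape, blank is ' '
    head, state = 0, 'seek_end'
    while state != 'done':
        symbol = tape.get(head, ' ')
        write, move, state = RULES[(state, symbol)]   # pure table lookup
        tape[head] = write
        head += move
    return ''.join(tape[i] for i in sorted(tape)).strip()

print(run('1011'))  # '1100', i.e. 11 + 1 = 12, produced by nothing but rule lookups

Every step is a mechanical table lookup anyone could carry out by hand; the machine's 'gross behavior' of adding one is nothing over and above a long enough sequence of such lookups.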

Again, I'm not the only one thinking that. 'You could do it by computer' is often used as the definition of weak emergence, the kind that doesn't introduce anything qualitatively novel, precisely because it's so blindingly obvious how the large-scale phenomena follow from the lower-level ones in the computer's case.

That doesn't mean computers can't surprise you. Even though the story of how they do what they do is conceptually simple, it can be a bit lengthy, not to mention boring, to actually follow. But surprise is no criterion for qualitative novelty. I have been surprised by my milk boiling over, but that doesn't mean that a qualitatively new feature of the milk emerged.

Quote:
As I just said above, Turing equivalence tells us absolutely nothing about a computer's advanced intelligence-based skills.
It tells us exactly what we need to know: how these skills derive from the lower-level properties of the computer. This is what motivated the Church-Turing thesis: Turing showed how simple rote application of rules suffices to compute a large class of functions; thus, one may reasonably conjecture that it suffices to compute anything that can be computed at all. Hence, there is real heft to the claim that computation emerges from these lower-level symbol manipulations.

You, on the other hand, have provided no such basis for your claim that consciousness emerges in the same way. Indeed, you claim that no such basis can be given, because emergence basically, magically, introduces genuine novelty. That you offer computers as an example of that, when for computers it's exactly the case that the emergent properties have 'visible elements in the underlying components', is at the very least ironic.