#98  05-18-2019, 02:12 PM
wolfpup
Guest
Join Date: Jan 2014
Posts: 10,943
Quote:
Originally Posted by Half Man Half Wit
So suppose the further agency performs some computation M' in order to fix the brain's computing of M. But then, we need some further agency to fix that it does, in fact, compute M'. And, I hope, you now see the issue: if a computation depends on external facts to be fixed, these facts either have to be non-computational themselves, or we are led to an infinite regression. In either case, computationalism is false.
Well, no. That conclusion follows only if you assume that the aforementioned agency, or interpreter, is a prerequisite for computation in the first place, which is a notion I rejected from the beginning. If it were true, it would undermine pretty much the whole of CTM and most of modern cognitive science along with it.

I read your example, but I don't see it as supporting that notion in any way, let alone establishing a "completely general" conclusion. The problem with your example is that it's a kind of sleight-of-hand in which you quietly change the implied definition of the "computation" the box is supposed to be performing. The box has only switches and lights; it knows nothing about numbers. So the "computation" it's doing is accurately described by your first account (binary addition), or by the second one, or by any other account that is consistent with the same switch and light patterns. It makes no difference: they are all exactly equivalent. The fact remains that the fundamental thing the box is doing doesn't require an observer to interpret it, and neither does any other computational system. The difference with real computational systems, including the brain, is that there is a very rich set of semantics associated with their inputs and outputs, which makes it essentially impossible to play the little game of switcheroo you were engaging in here.
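To make that concrete, here's a rough sketch in Python (a toy encoding of my own invention, not your actual box, but the structure of the point is the same): the box is nothing but a fixed table from switch states to light states, and "binary addition" or any rival description only appears once you pick a way of reading numbers into those states.

Code:
from itertools import product

def build_box():
    """The box's physical behaviour: a fixed table from switch states to
    light states. The wiring happens to be adder-like, but the table
    itself never mentions numbers. (Toy setup: two 2-bit switch banks,
    three lights.)"""
    table = {}
    for bits in product((False, True), repeat=4):
        a = (bits[0] << 1) | bits[1]      # first pair of switches
        b = (bits[2] << 1) | bits[3]      # second pair of switches
        s = a + b
        table[bits] = (bool(s & 4), bool(s & 2), bool(s & 1))
    return table

BOX = build_box()

def read_as_addition(switches):
    """Interpretation 1: switch up = 1, light on = 1, most significant
    bit first. Under this reading the box 'computes' binary addition."""
    lights = BOX[switches]
    return sum(int(bit) << (2 - i) for i, bit in enumerate(lights))

def read_with_lights_inverted(switches):
    """Interpretation 2: the very same lights, read with on/off swapped.
    Under this reading the box 'computes' a different function of the
    same switch patterns, and it is every bit as consistent with what
    the box physically does."""
    lights = BOX[switches]
    return sum(int(not bit) << (2 - i) for i, bit in enumerate(lights))

# Both readings agree with every switch/light pattern the box will ever
# produce; nothing inside the box favours one over the other.
for switches in product((False, True), repeat=4):
    assert read_as_addition(switches) + read_with_lights_inverted(switches) == 7

The table never changes; only the reading does, and every reading that matches the patterns stands on exactly the same footing.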

Quote:
Originally Posted by Half Man Half Wit
Quote:
If your argument is that something more profound has happened, well, I would agree that something very profound has happened that is not present in the individual computational modules, but that "something" is called "emergent properties of complexity".
Ah yes, here comes the usual gambit: we can't actually tell what happens, but we're pretty sure that if you smoosh just enough of it together, consciousness will just spark up somehow.
FTR, I don't claim to have solved the problem of consciousness. However, as you well know, emergent properties are a real thing, and even if one is hesitant to say "that's why we have consciousness", we can at least say that emergence is a very good candidate explanation for attributes like consciousness, which appear to exist on a continuum across different intelligent species to a degree that empirically tracks their level of intelligence. It's a particularly good candidate in view of the fact that there is not even remotely any other plausible explanation, other than "mystical soul" or "magic".