#199, 05-23-2019, 04:39 PM
Half Man Half Wit
Quote:
Originally Posted by begbert2
If you're talking about your examples in posts 18 and 93, then you literally don't know what you're talking about. Your back-assward argument is that given a closed calculation machine, a given input, and the output from it, you won't be able to unambiguously infer the internal process of the machine.
No. Not even close. I haven't said anything about internal processes at all, they've got no bearing or relevance on my argument. The argument turns on the fact that you can interpret the inputs (switches) and outputs (lights) as referring to logical states ('1' or '0') in different ways. Thus, the system realizes different functions from binary numbers to binary numbers. I made this very explicit, and frankly, I can't see how you can honestly misconstrue it as being about 'internal processes', 'black boxes' and the like.

Quote:
So. While you think that your example includes a heisenberg uncertainty machine that has schroedinger's internals which simultaneously implement f and f' and thus hold varying internal states at the same time, in actual, non-delusional fact if you have a specific deterministic machine that has a specific internal state that means that it *must* be in the middle of implementing either f or f', and not both. This remains true regardless of the fact that you can't tell which it's doing from eyeballing the output. Obviously.
OK. So, the switches are set to (down, up, up, down), and the lights are, consequently, (off, on, on). What has been computed? f(1, 2) = 1 + 2 = 3, or f'(2, 1) = 6? You claim this is obvious. Which one is right?

The internal wiring is wholly inconsequential; all it needs to do is make the right lights come on when the switches are flipped. There are various ways of wiring this up; if you feel it's important, just pick any one of them.
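
To make this concrete, here's a minimal sketch (the wiring and all the names are mine, one arbitrary choice among many): the mapping from switch states to light states is fixed once and for all; only the reading of those states as binary digits changes between interpretations. The second reading below is one decoding that reproduces the f'(2, 1) = 6 above.

Code:
UP, DOWN, ON, OFF = "up", "down", "on", "off"

def box(switches):
    """The physical device: one fixed wiring from four switch
    states to three light states, and nothing more."""
    b = [1 if s == UP else 0 for s in switches]
    total = (2 * b[0] + b[1]) + (2 * b[2] + b[3])
    return tuple(ON if (total >> k) & 1 else OFF for k in (2, 1, 0))

def decode(states, one, msb_first=True):
    """Read physical states as a binary number: `one` says which
    state counts as '1', `msb_first` fixes the digit order."""
    bits = [1 if s == one else 0 for s in states]
    if not msb_first:
        bits.reverse()
    value = 0
    for bit in bits:
        value = 2 * value + bit
    return value

lights = box((DOWN, UP, UP, DOWN))   # -> (off, on, on)

# Reading 1: up/on mean '1', leftmost digit most significant.
# Inputs 1 and 2, output 3: the box computes f(x, y) = x + y.
print(decode((DOWN, UP), UP), decode((UP, DOWN), UP), decode(lights, ON))

# Reading 2: same states, digits read in the opposite order.
# Inputs 2 and 1, output 6: the very same box computes f'.
print(decode((DOWN, UP), UP, msb_first=False),
      decode((UP, DOWN), UP, msb_first=False),
      decode(lights, ON, msb_first=False))

# Reading 3: down/off mean '1'. Inputs 2 and 1, output 4:
# yet another function, again without touching the wiring.
print(decode((DOWN, UP), DOWN), decode((UP, DOWN), DOWN), decode(lights, OFF))

Nothing about the wiring changes between the three readings; only the decoding does, which is the whole point.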

Quote:
Your argument is that the "interpretation" of an outside observer has some kind of magical impact on the function of a calculative process, despite there being no possible way that the interpreter's observation of the outside of the black box can impact what's going on inside it.
Because what goes on inside has no bearing on how the system is interpreted. You can think of it in the same way as reading a text: how it was written, whether in ink on paper, in pixels on a screen, or in chalk on a board, has no bearing on whether you can read it, or on what message gets conveyed once you do. Your language competence, however, does: where you read the word 'gift' and might expect some nice surprise, I read it as promising death and suffering, because it means 'poison' in German. In the same way---exactly the same way---one can read 'switch up' to mean '0' or '1'. And that's all there is to it.

Quote:
Or at least that's what you've been repeatedly saying your argument is. I can only work with what you give me.
Evidently not, to both our detriment.

Quote:
I note that while he baldly asserts that simulations can't be conscious, the only reason he gives for this is that physical matter is magic.
I'm not going to defend IIT here, but it's a very concrete proposal (much more concrete than anything offered in this thread so far) that's squarely rooted in the physical.

Quote:
So take heart! You're not the only person making stupid nonsensical arguments. You're not alone in this world!
Well, at least now I know that it's not just my fault that my arguments appear so opaque to you.


Quote:
Originally Posted by begbert2
P1: Cognition is a property or behavior that exists in the physical world.

P2: If an emulation is sufficiently detailed and complete, that emulation can exactly duplicate properties and behaviors of what it's emulating.

P3: It's possible to create a sufficiently detailed and complete emulation of the real world.

C1: It's possible to create an emulation that can exactly duplicate properties and behaviors of the real world. (P2 and P3)

C2: It's possible to create an emulation that can exactly duplicate cognition. (C1 and P1)
Premise P2 is self-evidently wrong: if an emulation could exactly duplicate every property of a system, it wouldn't be an emulation, but merely a copy, as there would be no distinction between it and what it 'emulates'. But of course, no simulation ever has all the properties of the thing it simulates---after all, that's why we simulate in the first place: we typically have more control over the simulation. For instance, black holes, even if we could get to one, are quite difficult to handle, but simulations are perfectly tame---because a simulated black hole doesn't have the mass of a real one. I can simulate black holes all the livelong day without my desk ever collapsing past an event horizon.

You'll probably want to argue that 'inside the simulation', objects are attracted by the black hole, and thus that it has mass. For one thing, that's quite a strange thing to believe: it would entail that, merely by shuffling around a few voltages, you could create some sort of pocket dimension with its own physics, removed from ours; that even though the black hole's mass has no effects in our world, there would suddenly exist a separate realm where that mass is real, with no connection to ours save for your computer screen. In any other situation, you'd call that 'magic'.

Holding that the black hole in the simulation has mass is exactly the same as holding that the black hole I'm writing about has mass. The claim that computation creates consciousness is the claim that, whenever I write 'John felt a pain in his hip', there is actually a felt pain somewhere, merely by virtue of my describing it. Because that's what a simulation is: an automated description. A computation is a chain of logical steps, equivalent to an argument, performed mechanically; it's no different from writing the same argument down in text. Each step of the computation follows from the previous one in just the same way as each line of an argument follows from the one before.

Quote:
Originally Posted by wolfpup
This is flat-out wrong, as evidenced by Fodor's statements that CTM is an indispensably essential theory explaining many aspects of cognition, while at the same time he never even imagined that anyone would take it to be a complete description of everything the mind does. Your characterization of the computational theory of mind is simply wrong in that it fundamentally misrepresents how CTM has been defined and applied in cognitive science.
Come on, now. You're fully aware that the core claim of CTM is, as Wikipedia puts it,
Quote:
Originally Posted by wikipedia
In philosophy, the computational theory of mind (CTM) refers to a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation.


Fodor indeed had heterodox views on the matter; but, while he's an important figure, computationalism isn't just what Fodor says it is. After all, it's the 'computational theory of the mind', not 'of some aspects of the mind'. Or, as your own cite from the SEP says,
Quote:
Originally Posted by SEP
Advances in computing raise the prospect that the mind itself is a computational system—a position known as the computational theory of mind (CTM).


Quote:
I see a "blank space" where you provide your ruminations about CTM being somehow related to "computational modeling" because it's so egregiously wrong.
I'm not saying that CTM is related to computational modeling; I'm saying that computational modeling is useful in understanding the brain even if the mind is not wholly computational. For instance, a computational model of vision need not assume that the mind is computational in order to give a good description of vision.

Quote:
Because it's wrong, for the reason cited above.
If you intend this to mean that my argument is wrong just because Fodor (or his allies) don't hold to it, then that's nothing but an argument from authority. You'll have to actually find a flaw in the argument to mount a successful attack.

Quote:
I appreciate the effort you made to once again detail your argument, but I find the view that there is some kind of fundamental difference between an abstract Turing machine and a physical one because the former manipulates abstract symbols and the latter manipulates physical representations to be incoherent.
There is really only one set of questions you need to answer: if I use my device to compute the sum of two inputs, what is the device doing? Is it computing the sum? If not, then what is? And if I use it to compute f', what is the device doing then?

Because that's computation in the actual, real-world sense of the term, absent any half-digested SEP articles. My claim is nothing more than this: because I can use the system to compute f (or f'), the system computes f (or f'). There is nothing difficult about this, and it takes loops of motivated reasoning to turn it into anything terribly complex or contentious.

Quote:
Let me re-iterate one of my previous comments. Our disagreement seems to arise from your conflation of "computation" with "algorithm". The question of what a "computation" is, in the most fundamental sense, is quite a different question from what problem is being solved by the computation. Your obsession with the difference between your f and f' functions is, at its core, not a computational issue, but a class-of-problem issue.
I've explicitly left the algorithm that computes either function out of consideration. The computation is given by the function; the algorithm is the way the function is computed, the detailed series of steps being traversed. I can talk about a system implementing a computation without talking about the algorithm it follows. Again, that's just the everyday usage of the term: I can say that a certain program sorts objects, without even knowing whether it implements mergesort, or quicksort, or bubblesort, or what have you.
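
Here's a throwaway sketch of my own (the names and the choice of sorting are mine, purely for illustration): the function fixes what is computed; which algorithm realizes it is a separate question.

Code:
def bubble_sort(xs):
    """One algorithm for the sorting function."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):
    """A quite different algorithm for the very same function."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# 'This system sorts' picks out the function, not the series of steps:
assert bubble_sort([3, 1, 2]) == merge_sort([3, 1, 2]) == [1, 2, 3]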

So no, the algorithm has no bearing on whether the system implements f or f'. The reinterpretation of the symbolic vehicles it uses entails that any algorithm for computing one will be transformed into one for computing the other. Where it says 'If S12 = '1' Then ...' in one, it'll say 'If S12 = '0' Then ...' in the other, with both referring to the switch being in the 'up' position, say.
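
In sketch form, taking the 0/1 swap as the relabelling for concreteness (my notation, not anything from the thread; the same recipe works for any of the readings above): wrap any algorithm for f in the relabelling at both ends, and you get an algorithm for the reinterpreted function, while the physical steps stay exactly the same.

Code:
def relabel(bits):
    """Swap the symbols '0' <-> '1': same physical states, new reading."""
    return bits.translate(str.maketrans("01", "10"))

def f(x_bits, y_bits):
    """Stand-in for any algorithm computing f (addition) on bit strings."""
    width = len(x_bits) + 1
    return format(int(x_bits, 2) + int(y_bits, 2), f"0{width}b")

def f_swapped(x_bits, y_bits):
    """Exactly the same steps, read under the swapped symbols."""
    return relabel(f(relabel(x_bits), relabel(y_bits)))

print(f("01", "10"))         # '011': f(1, 2) = 3
print(f_swapped("10", "01")) # '100': i.e. 4, the value the swapped
                             # reading assigns to the very same lights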