  #201  
Old 05-23-2019, 06:38 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by Half Man Half Wit View Post
No. Not even close. I haven't said anything about internal processes at all, they've got no bearing or relevance on my argument. The argument turns on the fact that you can interpret the inputs (switches) and outputs (lights) as referring to logical states ('1' or '0') in different ways. Thus, the system realizes different functions from binary numbers to binary numbers. I made this very explicit, and frankly, I can't see how you can honestly misconstrue it as being about 'internal processes', 'black boxes' and the like.


OK. So, the switches are set to (down, up, up, down), and the lights are, consequently, (off, on, on). What has been computed? f(1, 2) = 1 + 2 = 3, or f'(2, 1) = 6? You claim this is obvious. Which one is right?
Is there an isomorphic mapping between interpretations? In your inputs there seems to be. But not in your output, since otherwise f'(2,1) would be 4. We'd have to see the entire mapping to know for sure, though, since a mapping that just substitutes 6 for 4 and 4 for 6 would be fine.
If you allow inconsistent mappings, then you can't say anything about anything.
If we knew the internals, then we could tell if they are equivalent - usually. But in this example it is impossible to know.
  #202  
Old 05-23-2019, 06:50 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by begbert2 View Post
You're really, really coming off as somebody who doesn't understand simulations, here. I'm not really sure how to explain simulations in brief, so I'll just say "I accept that you think you've refuted P2, but you really, really haven't." Suffice to say that writing "John felt a pain in his hip" is not a particularly detailed and complete emulation.
But wouldn't you agree that no matter how good the simulation of a black hole, there is no actual gravity produced in the real world, just a bunch of numbers that happen to map to the same set of transitions that the measured particles of a black hole would go through if we could measure them?
  #203  
Old 05-24-2019, 01:10 AM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by RaftPeople View Post
But wouldn't you agree that no matter how good the simulation of a black hole, there is no actual gravity produced in the real world, just a bunch of numbers that happen to map to the same set of transitions that the measured particles of a black hole would go through if we could measure them?
Do you accept that a simulation of a computer running a program produces the same results as the actual computer running the program? That's a more relevant example here.
  #204  
Old 05-24-2019, 02:45 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by begbert2 View Post
Your "thus" is flat wrong and stupid. When the box in your original argument transformed its input into its output it used a specific approach to do so. It didn't use all theoretically possible approaches to do so; it used one approach to do so. It doesn't use "different functions" to map the inputs to the result, it uses only one function to do so. Which function? Whichever one it used. You can't tell which from the outside, but frankly reality doesn't give crap what you know.
This is ridiculous. I don't have to know anything about what a computer does internally in order to check what it computes. I know that my calculator computes the square root, because if I enter a number, punch the SQRT-button, and read off the result, that result will be the square root of the number. What goes on on the inside is just a means to achieving that end; what means is used is wholly irrelevant.

But fine. Don't let it be said I'm not doing my best to be maximally accommodating. First, for convenience, here's my box again:

Code:
 -----------------------------
|                             |
|  (S11)(S12)                 |
|                (L1)(L2)(L3) |
|  (S21)(S22)                 |
|                             |
 -----------------------------

Now, here's the complete wiring diagram of my box, together with an explicit discussion of every possible case (I'm home sick at the moment, and evidently have too much time on my hands):
Code:
        ___
S12--+-|   |                                                 
     | | A1|-----------------------------------------X L3            
S22-+--|___|
    ||            ___
    ||___________|   |
    |____________| B1|--+
                 |___|  |
                        |
        ___             |   ___
S11--+-|   |-----+---------|   |
     | | A2|     |      |  | A3|---------------------X L2
S21-+--|___|     |      +--|___|
    ||           |      |           ___
    ||           |      |__________|   |
    ||           |_________________| B3|--+
    ||                             |___|  |
    ||            ___                     |   ___
    ||___________|   |                    +--|   |
    |____________| B2|-----------------------| C1|---X L1
                 |___|                       |___|



      S11 S12 | S21 S22 || L1 L2 L3
      -----------------------------
       d   d  |  d   d  || x  x  x
       d   d  |  d   u  || x  x  o        (A1 yields h --> L3 o)
       d   d  |  u   d  || x  o  x        (A2 yields h, A3 yields h --> L2 o)
       d   d  |  u   u  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       d   u  |  d   d  || x  x  o        (A1 yields h --> L3 o)
       d   u  |  d   u  || x  o  x        (B1 yields h, A3 yields h, L2 o)
       d   u  |  u   d  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       d   u  |  u   u  || o  x  x        (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
       u   d  |  d   d  || x  o  x        (A2 yields h, A3 yields h --> L2 o)
       u   d  |  d   u  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       u   d  |  u   d  || o  x  o        (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)        
       u   d  |  u   u  || o  o  x        (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
       u   u  |  d   d  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       u   u  |  d   u  || o  x  x        (B2 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
       u   u  |  u   d  || o  x  o        (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)
       u   u  |  u   u  || o  o  x        (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
In the above, lines are wires, '+' are wire joins, S11-S22 are the switches, and L1-L3 are the lamps. A switch, if flipped up (u), emits a high voltage signal 'h'; in the state down (d), it emits low voltage 'l'. A box of type A emits a high-voltage signal 'h' if either one, but not both, of the wires connecting to it on the left side carries 'h'. A box of type B emits 'h' if and only if both of the wires connecting to it on the left carry 'h'. Finally, a box of type C emits 'h' if one or both of the wires connecting to it carry 'h'. If a lamp receives 'h', it lights up (o); if not, it remains dark (x).

Now, the physical behavior of the box is completely specified (I trust you don't need me specifying the internal wiring of the A-C boxes, as well---although it would be trivial to do so). The table lists the light-patterns for any given switch-pattern, as well as giving the reason why each lamp is on (giving the corresponding reason why a lamp remains dark is a trivial matter I did not bother with).

Quote:
Originally Posted by Voyager View Post
Is there an isomorphic mapping between interpretations? In your inputs there seems to be. But not in your output, since otherwise f'(2,1) would be 4. We'd have to see the entire mapping to know for sure, though, since a mapping that just substitutes 6 for 4 and 4 for 6 would be fine.
If you allow inconsistent mappings, then you can't say anything about anything.
If we knew the internals, then we could tell if they are equivalent - usually. But in this example it is impossible to know.
I've given the mappings in my post #93, but for convenience, let me repeat them here.

So, first, take 'u' to mean '1', 'd' to mean '0', 'o' to mean '1', and 'x' to mean '0'. Then, interpret the first two switches, S11 and S12, as one binary number, switches S21 and S22 as another, and L1-L3 as a three-bit binary number. Then, the table above becomes:

Code:
x1 |  x2   ||   f(x1, x2)
-----------------------
0  |  0    ||       0
0  |  1    ||       1
0  |  2    ||       2
0  |  3    ||       3
1  |  0    ||       1
1  |  1    ||       2
1  |  2    ||       3
1  |  3    ||       4
2  |  0    ||       2
2  |  1    ||       3
2  |  2    ||       4
2  |  3    ||       5
3  |  0    ||       3
3  |  1    ||       4
3  |  2    ||       5
3  |  3    ||       6
In other words, the device computes binary addition.

Now, keep the state identifications; but instead, read the binary numbers from left to right, with e.g. S11 being the least significant bit 2^0, S12 being 2^1, and likewise for S21 and S22. L1 then corresponds to 2^0, L2 to 2^1, and L3 to 2^2. Then, the above table becomes:

Code:
x1 |  x2   ||  f'(x1, x2)
-----------------------
0 |   0    ||       0
0 |   2    ||       4
0 |   1    ||       2
0 |   3    ||       6
2 |   0    ||       4
2 |   2    ||       2
2 |   1    ||       6
2 |   3    ||       1
1 |   0    ||       2
1 |   2    ||       6
1 |   1    ||       1
1 |   3    ||       5
3 |   0    ||       6
3 |   2    ||       1
3 |   1    ||       5
3 |   3    ||       3
This is a perfectly sensible function, a perfectly sensible computation, but it's not addition. Hence, the device can be seen to compute distinct functions on an exactly equivalent basis.

I should not need to point this out, but of course, I could've used any number of different mappings, obtaining a different computation for each. I could've considered 'u' to mean '0' and 'd' to mean '1'; I could've swapped the meaning (independently) for the lamps; I could've considered the four switches to represent one four-bit number; and so on. Each of these yields a perfectly sensible, and perfectly different, computation.

The one thing I require for a system to implement a computation is that it can be used to perform that computation. Hence, my calculator implements arithmetic, because I can use it to do arithmetic. In the same way, I can use the device above to add binary numbers: suppose I have the numbers 2 and 3 and don't already know their sum; the device could readily tell me---provided I use the correct interpretation!

But the latter is exactly the same for ordinary computers. If I try to compute the square root of 9, and my device displays 3, but I think that it's the Cyrillic letter Ze, I won't know the square root of 9.
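
A minimal Python sketch of the above (not part of the original argument; it assumes the gate readings A-type = XOR, B-type = AND, C-type = OR from the wiring description, and the helper names are illustrative only) that simulates the box and decodes every lamp pattern under both readings---the MSB-first reading always equals x1 + x2, while the reversed reading yields the other, non-addition table:

Code:
# Sketch only: gates assumed from the verbal description (A = XOR, B = AND, C = OR).
def box(s11, s12, s21, s22):
    """Physical behaviour: switch states in (1 = 'u', 0 = 'd'), lamp states out (1 = 'o', 0 = 'x')."""
    a1 = s12 ^ s22        # A1 feeds lamp L3
    b1 = s12 & s22        # B1: carry from the low bits
    a2 = s11 ^ s21        # A2
    a3 = a2 ^ b1          # A3 feeds lamp L2
    b3 = a2 & b1          # B3
    b2 = s11 & s21        # B2
    c1 = b3 | b2          # C1 feeds lamp L1
    return c1, a3, a1     # (L1, L2, L3)

def msb_first(bits):      # reading used for f: leftmost bit is most significant
    value = 0
    for b in bits:
        value = 2 * value + b
    return value

def lsb_first(bits):      # reading used for f': leftmost bit is least significant
    return msb_first(reversed(list(bits)))

for s11 in (0, 1):
    for s12 in (0, 1):
        for s21 in (0, 1):
            for s22 in (0, 1):
                lamps = box(s11, s12, s21, s22)
                f_in = (msb_first([s11, s12]), msb_first([s21, s22]))
                g_in = (lsb_first([s11, s12]), lsb_first([s21, s22]))
                assert msb_first(lamps) == sum(f_in)                        # the f reading is addition ...
                print(f_in, msb_first(lamps), "|", g_in, lsb_first(lamps))  # ... the f' reading is not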

Quote:
Originally Posted by begbert2 View Post
I claim it's obvious that only one approach was used. Your interpretation of the result is completely irrelevant, particularly to what was going on inside the box. The inside of the box does whatever the inside of the box does, and your observation of the output and your interpretation of those observations have no effect on the box.
Fine. So now's your time to shine: put me in my place by demonstrating how the above internal functioning of the box singles out one among its many possible computations. It's 'obvious', after all. Put up or shut up!

Quote:
You're really, really coming off as somebody who doesn't understand simulations, here. I'm not really sure how to explain simulations in brief, so I'll just say "I accept that you think you've refuted P2, but you really, really haven't." Suffice to say that writing "John felt a pain in his hip" is not a particularly detailed and complete emulation.
So, if I described the pain more completely, then it would be magically conjured into being? How complete does the description have to be in order to count? Who is it that's experiencing the pain? Do you think my words suffice to bring experiencing entities into the world?

Last edited by Half Man Half Wit; 05-24-2019 at 02:46 AM.
  #205  
Old 05-24-2019, 03:19 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
The table below corrects two mistakes in the above one.

Code:
      S11 S12 | S21 S22 || L1 L2 L3
      -----------------------------
       d   d  |  d   d  || x  x  x
       d   d  |  d   u  || x  x  o        (A1 yields h --> L3 o)
       d   d  |  u   d  || x  o  x        (A2 yields h, A3 yields h --> L2 o)
       d   d  |  u   u  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       d   u  |  d   d  || x  x  o        (A1 yields h --> L3 o)
       d   u  |  d   u  || x  o  x        (B1 yields h, A3 yields h, L2 o)
       d   u  |  u   d  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       d   u  |  u   u  || o  x  x        (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
       u   d  |  d   d  || x  o  x        (A2 yields h, A3 yields h --> L2 o)
       u   d  |  d   u  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       u   d  |  u   d  || o  x  x        (B2 yields h, C1 yields h --> L1 o)        
       u   d  |  u   u  || o  x  o        (B2 yields h, C1 yields h --> L1 o, A1 yields h --> L3 o)
       u   u  |  d   d  || x  o  o        (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
       u   u  |  d   u  || o  x  x        (B2 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
       u   u  |  u   d  || o  x  o        (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)
       u   u  |  u   u  || o  o  x        (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
  #206  
Old 05-24-2019, 05:34 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by wolfpup View Post
I appreciate the effort you made to once again detail your argument, but I find the view that there is some kind of fundamental difference between an abstract Turing machine and a physical one because the former manipulates abstract symbols and the latter manipulates physical representations to be incoherent. They are exactly the same. The Turing machine defines precisely what computation is, independent of what the symbols might actually mean, provided only that there is a consistent interpretation (any consistent interpretation!) of the semantics.
Perhaps my above tour-de-force helps to clarify the difference between a Turing machine and its physical instantiation. The difference is the same as between an AND-gate and its physical instantiation: an AND-gate's output is '1' if and only if both of its inputs are '1'. This can be realized, physically, by a device that, for example, outputs a high voltage 'h' if and only if both of its input voltages are 'h'---if we choose to interpret 'h' to mean '1'. So the abstract AND-gate has a physical instantiation in my B-type boxes once the appropriate interpretation is made.

But of course, there's nothing special about that interpretation. I could just as easily consider low voltage, 'l', to mean '1'. In that case, the B-type boxes implement the (abstract) OR-gate.

So, the nature of the AND- (or OR-) gate is defined by reference to the abstract binary values '0' and '1'; their physical implementation is realized by an appropriate mapping of voltage levels to logical values. The AND-gate, as an abstract object, thus is perfectly definite, but its physical realization acquires an interpretation dependence---simply because physical objects don't directly operate on logical values, but on things like voltage levels. It's the same with abstract Turing machines and their physical realizations.
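
A small Python sketch of this point (an illustration only, not taken from the posts; the dictionary below just restates the B-type box's behaviour over voltage levels): the one fixed physical table reads as AND under the mapping h -> 1 and as OR under the reversed mapping h -> 0:

Code:
# Sketch only: the B-type box outputs 'h' iff both inputs are 'h', per the description above.
physical = {('l', 'l'): 'l', ('l', 'h'): 'l', ('h', 'l'): 'l', ('h', 'h'): 'h'}

def logical_table(volt_to_bit):
    """Translate the fixed physical table into a table over 0/1 under a chosen mapping."""
    return {(volt_to_bit[a], volt_to_bit[b]): volt_to_bit[out]
            for (a, b), out in physical.items()}

print(logical_table({'h': 1, 'l': 0}))  # output 1 only for (1, 1)  -> the AND-gate
print(logical_table({'h': 0, 'l': 1}))  # output 0 only for (0, 0)  -> the OR-gate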

Quote:
Let me re-iterate one of my previous comments. Our disagreement seems to arise from your conflation of "computation" with "algorithm". The question of what a "computation" is, in the most fundamental sense, is quite a different question from what problem is being solved by the computation. Your obsession with the difference between your f and f' functions is, at its core, not a computational issue, but a class-of-problem issue.
I trust my previous comment makes it clear why the algorithm doesn't really come into play here. If not, let me try and spell it out.

The algorithm computing binary addition that my box uses is given by the identification of physical states, carried through its specific wiring as given above. So, if we take the identification 'u' --> '1', 'd' --> '0', 'h' --> '1' and 'l' --> '0', a box like A1 corresponds to the pseudocode:
Code:
IF (S12 = 1 AND S22 = 0) OR (S12 = 0 AND S22 = 1)
     A1 = 1 
ELSE 
     A1 = 0
END IF
With replacements such as this one, you get the entire algorithm computing the sum of two two-bit numbers.

However, when you change the identification, then, also, the algorithm changes. Say, I interpret 'u' --> '0', 'd' --> '1', 'h' --> '0' and 'l' --> '1'. Then, A1 instead corresponds to the pseudocode:
Code:
IF (S12 = 1 AND S22 = 1) OR (S12 = 0 AND S22 = 0)
     A1 = 1 
ELSE 
     A1 = 0
END IF
Consequently, a change of the interpretation changes the function being computed, as well as the algorithm by means of which it is computed.

Now, it's of course also possible to change the algorithm, while computing the same function. To do so, one need merely change the wiring diagram, while leaving the input-output mapping the same. There are, of course, infinitely many wirings that lead to the lamps lighting up as the switches are pressed in the same way as they do in my example. Tracking the meaning of the symbols through these different wiring patterns in the manner explained above may then yield distinct algorithms computing the same function---just like you can compute the square root by the Babylonian or the digit-by-digit method.
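
As a toy illustration of that last point---different algorithms, one and the same function---here is a sketch (mine, using the integer square root as a stand-in) of the two square-root methods mentioned: a Babylonian/Newton iteration and a binary digit-by-digit method, which differ completely in their internals yet agree on every input:

Code:
# Sketch only: two unrelated algorithms computing the same function (integer square root).
def babylonian_isqrt(n):
    """Babylonian / Newton iteration: repeatedly average x and n // x until it stops shrinking."""
    if n == 0:
        return 0
    x = n
    while True:
        y = (x + n // x) // 2
        if y >= x:
            return x
        x = y

def digit_by_digit_isqrt(n):
    """Binary digit-by-digit method: decide one bit of the result at a time."""
    result, bit = 0, 1 << (n.bit_length() // 2 * 2)   # starting power of 4 (trimmed below if it overshoots n)
    while bit > n:
        bit >>= 2
    while bit:
        if n >= result + bit:
            n -= result + bit
            result = (result >> 1) + bit
        else:
            result >>= 1
        bit >>= 2
    return result

# Same input-output mapping, different internal story:
assert all(babylonian_isqrt(n) == digit_by_digit_isqrt(n) for n in range(10000))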

Finally, having it all explicitly laid out like that should also put to rest any claims of 'strong', or 'kinda strong', emergence in computers. For everything that a computer does, a story such as the one told above can be told, reducing its behavior (in the example, the lamps lighting up in response to the switches being pressed) to that of its lower-level components, thus showing exactly how that lower level causes the high-level behavior. Consequently, only the lower-level description needs to be specified to fully specify everything about a computer, and to enable us to find the causes of every aspect of the high-level behavior within the lower level.
  #207  
Old 05-24-2019, 09:38 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,092
Ah, a fresh, shiny new morning.

Half Man Half Wit, this will probably piss you off, but I think that you're getting way too hung up on your overcomplicated example. Your argument doesn't depend on the specific function of the box, after all - it's supposed to be a generically applicable argument - one that can be applied to any calculative-or-theoretically-calculative object (like, say, the brain). In fact I'm pretty sure that the following example is equally descriptive of your argument:

There is a box. It has a button and a light.
Alice pushes the button and the light comes on. She releases the button and it turns off. Alice shrugs and says, "I guess the button operates the light".
Bob pushes the button and the light comes on. He jumps back, releasing the button - it turns off. "Ahhh! Box possessed by demons! All going to die!"

All the elements are there: a box, a deterministic functional mapping from inputs to outputs (pushed => on; not-pushed => off), and varying interpretations. I do not see how this example differs from your example in any meaningful way (other than seemingly-unnecessary complexity).

I will be honestly surprised if you disagree that this example is equivalent to your example.


Quote:
Originally Posted by Half Man Half Wit View Post
Fine. So now's your time to shine: put me in my place by demonstrating how the above internal functioning of the box singles out one among its many possible computations. It's 'obvious', after all. Put up or shut up!
You know, I'm not sure whether you ever did define what you think "computation" means, but when I ask Google to give me a definition for the term, it coughs up "the action of mathematical calculation". And by your own statement only one 'action', one mapping from inputs to outputs, is taking place here. And also by your own statement the only difference between the "calculations" is in the eye of the beholder. Which means there's really only one calculation, and two interpretations.

The scenario is exactly congruent to one where a person saw a piece of paper lying on the table and read "WOW", and then another person came up on the other side of the table and read "MOM".

So here's a question for you: Can I take your argument, apply it to that piece of paper rather than your box with lights and switches, and prove something? Can I prove that print is self-contradictory?

If not, why not?


Quote:
Originally Posted by Half Man Half Wit View Post
So, if I described the pain more completely, then it would be magically conjured into being? How complete does the description have to be in order to count? Who is it that's experiencing the pain? Do you think my words suffice to bring experiencing entities into the world?
Have you ever played a video game? One of the new-fangled ones where you can shoot things - Space Invaders, Asteroids, that kind of thing. In those you can push buttons to move your little digital ship around and make it shoot pixels. And if one of those pixels you shot happens to run into the little cluster of pixels representing an asteroid or an alien, then the behavior of that thing changes - if memory serves it turns into a little drawing of lines coming out of a point and then disappears.

So what you are definitely seeing is that the game is reacting to your actions. The things on the screen are just pixels, but the reaction is really happening. It's a real reaction. Something happens, and a stimulus triggers a reaction in the active running process of the game. It is a real reaction.

I'll stop here because I have to get ready for work, and you're probably already shouting that there is absolutely nothing similar between how Asteroid.EXE receives an input, registers the input, and alters its behavior in response, and how I stub my toe, register the input, and howl in pain. So I'll let you get right on typing your reaction now - but suffice to say, however different the example is, it does establish a digital entity that can experience and really truly react to stimuli.

(Whether you call that entity the asteroid/alien or the game as a whole, of course, is a matter of interpretation. )
  #208  
Old 05-24-2019, 09:56 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Ok, so you're going for the 'babble on' option, rather than actually putting your money where your mouth is and making good on your claims about how obvious it is what my box computes. Expected, but still disappointing. At least, now I can say I've tried everything I could.
  #209  
Old 05-24-2019, 10:27 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by RaftPeople View Post
The question I'm asking is whether the nature of the machinery performing a transformation determines whether something is a computation or not, from your perspective.

It sounds like you are saying that if HMHW's box performs the transformation you listed (0110011 into 0100010) then that is considered a computation, right?

Meaning that HMHW's box may not be a Turing machine, it may just be a circuit that performs that transformation, but regardless of how it arrives at the correct answer, the transformation is considered a computation, right?
Wolfpup, re-posting so it doesn't get lost in the noise, hoping to get a more clear understanding of your position.
  #210  
Old 05-24-2019, 10:30 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by RaftPeople View Post
But wouldn't you agree that no matter how good the simulation of a black hole, there is no actual gravity produced in the real world, just a bunch of numbers that happen to map to the same set of transitions that the measured particles of a black hole would go through if we could measure them?
Begbert, thoughts?
  #211  
Old 05-24-2019, 10:32 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,092
Quote:
Originally Posted by Half Man Half Wit View Post
Ok, so you're going for the 'babble on' option, rather than actually putting your money where your mouth is and making good on your claims about how obvious it is what my box computes. Expected, but still disappointing. At least, now I can say I've tried everything I could.
You haven't defined what 'computes' means to you.

In real-world terminology, what the box "computes" is its output. See that laboriously-crafted mapping you made from inputs to outputs? That's what it "computes". It "computes" which lights to light up when the input switches are set in various ways. That's what the box does. And golly gosh, it's entirely consistent and non-self-contradictory about it.

I strongly suspect that when you say "computes" you are (deliberately?) conflating the actions of the box and the actions of the observer - and then blaming everything on the box. This is, of course, an invalid approach regarding a discussion of the actions, behaviors, and properties of the box.

One reason I think you're doing this obviously invalid thing deliberately is because, well, you're pretty clear about the fact that you're doing it deliberately. One reason I think you're at some level aware that it's invalid is because if you weren't aware it was invalid, you wouldn't be averse to applying your argument to the WOW/MOM paper and seeing where the logic takes you.
  #212  
Old 05-24-2019, 10:44 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by Voyager View Post
Do you accept that a simulation of a computer running a program produces the same results as the actual computer running the program? That's a more relevant example here.
The gravity example illustrates that there are some attributes of systems that can't be reproduced by a representative simulation (representative meaning it's just a bunch of numbers+symbols that transition in the same way as the original IF we interpret them according to some set of rules (e.g. these numbers represent the position, mass, etc. of each particle)).

We don't know if consciousness is more like a physical attribute, like gravity, or if it's a logical attribute that can be created by performing the right transformations in the right sequence (and interpreting the results as external observers, e.g. "yep, we just got sequence 101100101001; that represents consciousness when the input is X and the previous state is Y").
  #213  
Old 05-24-2019, 11:06 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by begbert2 View Post
You haven't defined what 'computes' means to you.
I have, actually, in a response to RaftPeople, I believe.



Quote:
In real-world terminology, what the box "computes" is its output. See that laboriously-crafted mapping you made from inputs to outputs? That's what it "computes".
If that's the case, then all a modern computer computes are pixel patterns on a screen. But that's not what we typically believe: rather, we think it computes, for instance, planetary orbits. But I don't think even you are gonna argue that planetary orbits are among the outputs of any computer. Not to mention the impossibility, in this case, of computing things like square roots, since they are formal objects, not physical properties. Likewise for truth values.

Besides, I have already refuted this possibility: if computation truly were nothing but the physical evolution of the system---which is what you're saying boils down to---then computationalism is trivial, and collapses onto identity theory.

So if that's why you made such a stink about how obviously stupid my arguments are, you're really not bringing the bacon.
  #214  
Old 05-24-2019, 11:06 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,092
Quote:
Originally Posted by RaftPeople View Post
But wouldn't you agree that no matter how good the simulation of a black hole, there is no actual gravity produced in the real world, just a bunch of numbers that happen to map to the same set of transitions that the measured particles of a black hole would go through if we could measure them?
Don't worry, I hadn't forgotten you. There's just a limit to how many things I can do at once.

Would you agree that no matter how painful a stubbed toe is to you, there is no actual pain produced in me, just a surprised reaction at the fact you just yelled (and that I somehow heard it over the internet)? The body, and the simulation, are mostly-contained systems. Events that happen inside the system aren't necessarily communicated to things outside the system in the same way they occur in-system. Even if the things outside the system are capable of recognizing and experiencing the events the same way that the in-system things do!

Black holes are a little dangerous, so let's talk about cakes for a moment. (Mmm, cake.)

If there's a delicious cake sitting on the table in front of you, you see it - the image of it appears in your mind. But the image of the cake didn't appear directly in your mind - what happened was that light in the environment impacted the surface of the cake, energizing the molecules; the molecules then emitted altered light in an effort to get back to a lower energy state. The altered light is emitted out in all directions and impacts lots of things, including your eye. The various translucent physical parts of your eye bounce the light around in a controlled way until it hits the cells of your retina, which react photochemically. These cells are adjacent to other cells that are triggered electrochemically, which are adjacent to other cells which are triggered electrochemically, which are adjacent to other cells - okay, there are a lot of cells involved. Eventually this telephone game of electrochemical signals is handed off into your brain, where the tiny bug-eyed alien driving you sees the image of the cake on its display monitor.

So yeah, there's a whole lot of photonic, physical, and chemical signals being sent every which way to send you the image of that cake. It's not just as simple as seeing it.

Now consider a simulated cake. Let's pretend for a moment that it's a really good simulation, that's simulating things at the level of physics. So anyway, some numbers are generated somewhere that simulate photons flying through the air. The simulator process decides that these photons should be impacting the simulated cake's surface, removes the photon, and adjusts the energy value of the molecules accordingly. Then on a later pass the simulator decides that the energized molecules would rather not be energized, and reduces their energy variable while generating altered photons flying in various calculated directions. And then these altered photons are calculated as flying through the air, and then - don't impact your eye because you're not in the simulator, you're in the real world.

So you can't see the cake; you can only see that a bunch of numbers are produced that happen to map to the same set of transitions that the measured particles of light would go through if we could measure them.

But wait! You have a set of VR goggles! And the simulation is conveniently designed to output its numbers to those goggles, causing their surfaces to electrochemically emit altered light in the direction of your eyeballs, and, yay! You can see it! You can see the simulated cake! And you're really seeing it, too - the chemical processes in your eyes were genuinely triggered in the same way that they were when you viewed the real cake! You are genuinely seeing the simulated cake - to exactly the same degree that you saw the real one. The physical processes you used were the same.

You then put on your super-sophisticated VR gloves and reach out and touch the simulated cake, feeling its simulated moistness the same way you would feel real moistness, with the same physical processes in your body being triggered. You are really feeling the simulated cake!

So then you lift it to your mouth, and -oh, boo. You don't have a sophisticated VR tongue wrap. You can't taste the cake! The cake is a lie!

Except that the only reason you can't taste the simulated cake is because your sophisticated tongue wrap is in the shop. If it wasn't then you really could taste the cake, through a series of complicated physical, chemical, and electrical interactions comparable to how you experience real cake.

So there's your answer - the only reason you can't feel the black hole is because nobody's created a device to communicate the gravity numbers within the simulation system in the way that our biological systems can understand. (And you know that about ten minutes after they do design such a device, we're all gonna die.)

TL;DR: The only reason that you think you experience reality directly is because your brain and body (and to some degree all of outside reality) aren't bothering to inform you of all the complicated processes that exist to transmit that information to you. You can't experience simulations directly because no such processes exist to transfer the information - unless they do.
  #215  
Old 05-24-2019, 11:15 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,092
Quote:
Originally Posted by Half Man Half Wit View Post
I have, actually, in a response to RaftPeople, I believe.
I will readily concede that I haven't been reading your discussion with RaftPeople, because it's exhausting, technical, and (in my eyes) ultimately beside the point. Your argument collapses long before the technical details come into play, because your argument is erroneously trying to blame the box for the activity of the observer.

But that's okay. If the definition is so secret (or complicated) that you can't repeat it for me, I'll just assume that it's something to do with you screwing up and trying to claim that your observations of the black box somehow impact what's happening inside it.

Quote:
Originally Posted by Half Man Half Wit View Post
If that's the case, then all a modern computer computes are pixel patterns on a screen. But that's not what we typically believe: rather, we think it computes, for instance, planetary orbits. But I don't think even you are gonna argue that planetary orbits are among the outputs of any computer. Not to mention the impossibility, in this case, of computing things like square roots, since they are formal objects, not physical properties. Likewise for truth values.

Besides, I have already refuted this possibility: if computation truly were nothing but the physical evolution of the system---which is what you're saying boils down to---then computationalism is trivial, and collapses onto identity theory.
All that reality "computes" is particles moving around. The typical notion that there are such a thing as "planets" is mistaken; that's just a coincidental arrangement of particles that may or may not be moving in the same direction (I haven't checked).

Quote:
Originally Posted by Half Man Half Wit View Post
So if that's why you made such a stink about how obviously stupid my arguments are, you're really not bringing the bacon.
Until you explain to me how your argument doesn't work on the WOW/MOM paper, I've not only brought the bacon, but set it in front of you with a bit of parsley garnish on the side. You just haven't decided to bite.
  #216  
Old 05-24-2019, 12:56 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by begbert2 View Post
Would you agree that no matter how painful a stubbed toe is to you, there is no actual pain produced in me, just a surprised reaction at the fact you just yelled (and that I somehow heard it over the internet)? The body, and the simulation, are mostly-contained systems. Events that happen inside the system aren't necessarily communicated to things outside the system in the same way they occur in-system. Even if the things outside the system are capable of recognizing and experiencing the events the same way that the in-system things do!

Black holes are a little dangerous, so let's talk about cakes for a moment. (Mmm, cake.)
I appreciate you provided a lot of detail in your answer, but you answered a different question which makes it a little tougher to just move the conversation along incrementally.

If I interpret your answer correctly, it sounds like the following is accurate from your perspective, please confirm:
1 - The simulation does not create the same gravitational effects that a real black hole does in our world
2 - Within the simulation, you would argue that the attributes of the simulation (numbers, symbols, whatever) generate effects relative to the simulation system that are just as real as gravity in the real world
  #217  
Old 05-24-2019, 01:15 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,092
Quote:
Originally Posted by RaftPeople View Post
I appreciate you provided a lot of detail in your answer, but you answered a different question which makes it a little tougher to just move the conversation along incrementally.

If I interpret your answer correctly, it sounds like the following is accurate from your perspective, please confirm:
1 - The simulation does not create the same gravitational effects that a real black hole does in our world
2 - Within the simulation, you would argue that the attributes of the simulation (numbers, symbols, whatever) generate effects relative to the simulation system that are just as real as gravity in the real world
The simulation does create the same gravitational effects that a real black hole does in our world - it's just that the only things that experience those effects are other things in the simulation. Which sounds a lot like what you said, but the discrepancy I'm seeing is that you're giving reality some unjustified slack that you're denying the simulation. Reality is nothing more than "numbers, symbols, whatever" when you break it down and examine it at the level of what the particles are doing to one another. Particles in reality affect each other; simulated particles in the simulation affect one another. The 'action' of affecting is equally real in either case - something is really and truly responding to something in both cases. The only difference is that as entities that are outside of the simulation and thus not subject to the rules (read: laws of physics) of the simulation, we're not affected the same way as something in-system is.
  #218  
Old 05-24-2019, 06:11 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,955
Quote:
Originally Posted by RaftPeople View Post
Wolfpup, re-posting so it doesn't get lost in the noise, hoping to get a more clear understanding of your position.
ISTM that you're just restating exactly the same question that I already answered in post #194. Again, if you regard computation as a process, then Turing provided a very good description of the fundamental nature and limits of that process. If you regard it solely in terms of the results it produces, then it can be viewed as the mapping of a set of input symbols to a set of output symbols, which can be used to characterize any computation. They are different questions. For purposes of this discussion, HMHW regards his hypothetical box as a computing device based on the latter criterion, specifically disclaiming the relevance of anything going on inside it, and I have no quarrel with that.
  #219  
Old 05-24-2019, 06:35 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,955
Quote:
Originally Posted by Half Man Half Wit View Post
Come on, now. You're fully aware that the core claim of CTM is, as wikipedia puts it, ...
Come on, yourself. I made it very clear in posts #179, #187, and #196 that you either don't understand the computational theory of mind or have misrepresented it in order to support your failed and unsupportable hypothesis about the subjectivity of computation. Specifically, you made it clear that your position is that the mind cannot be computational because if it were, it would itself require an interpreter and that would lead to an infinite regress (the homunculus fallacy). You even went so far as to claim that CTM theory had been "dismantled" -- by none other than Putnam himself!

You then desperately started backpedaling when I showed that, far from having been "dismantled", CTM plays a vital central role in cognitive science. You seemingly tried to obfuscate your position by trying to characterize it as some nebulous allusion to "computational modeling"; but as I showed in the quote in #196, CTM is absolutely not a metaphor and holds that mental processes are literally computations, and indeed Fodor laid out a detailed theory of exactly how those computations are carried out.

It's very difficult to argue with someone who either cannot admit he was wrong, or who appears to have difficulty with comprehension of plain English.

Quote:
Originally Posted by Half Man Half Wit View Post
So no, the algorithm has no bearing on whether the system implements f or f'.
You're right, and I wanted to acknowledge this. I was nonetheless elucidating an important concept but my introduction of the term "algorithm" here was not useful. This sort of thing sometimes happens when I type more quickly than I think, and it popped into mind because algorithms are frequently thought of as specifications for solving problems. The real point I was getting at is that the only distinction between your functions f and f' is a class-of-problem distinction and not a computational distinction, because the computation is invariant regardless of whether it is observed by someone interested in solving f class of problems, someone interested in solving f' class of problems, or a squirrel.

Quote:
Originally Posted by Half Man Half Wit View Post
Perhaps my above tour-de-force helps to clarify the difference between a Turing machine and its physical instantiation. The difference is the same as between an AND-gate and its physical instantiation: an AND-gate's output is '1' if and only if both of its inputs are '1'. This can be realized, physically, by a device that, for example, outputs a high voltage 'h' if and only if both of its input voltages are 'h'---if we choose to interpret 'h' to mean '1'. So the abstract AND-gate has a physical instantiation in my B-type boxes once the appropriate interpretation is made.

But of course, there's nothing special about that interpretation. I could just as easily consider low voltage, 'l', to mean '1'. In that case, the B-type boxes implement the (abstract) OR-gate.

So, the nature of the AND- (or OR-) gate is defined by reference to the abstract binary values '0' and '1'; their physical implementation is realized by an appropriate mapping of voltage levels to logical values. The AND-gate, as an abstract object, thus is perfectly definite, but its physical realization acquires an interpretation dependence---simply because physical objects don't directly operate on logical values, but on things like voltage levels. It's the same with abstract Turing machines and their physical realizations.
Your "tour-de-force" was a tour-de-fizzle. It asounds me that you think this proves anything different than your original box with lights and switches; that is to say, it astounds me that you think it proves anything at all. It's exactly the same, and as such, your latest version of the same fallacy is subject to a trivial deconstruction, as follows. There is a useful kind of logic gate that outputs a high voltage (say) if and only if both of its inputs are also high (otherwise different voltage inputs result in a low-voltage output). There is another useful kind that outputs a high voltage if either of its inputs are high (so therefore a low voltage output if and only if both inputs are low). If you choose "high voltage" to mean "1" or "TRUE", then you get to label the first kind an "AND" gate and the second kind an "OR" gate. If you choose the reverse interpretation, the labels are reversed, that's all. For me to build a useful computer, I need both kinds, and all that is required is a consistent interpretation, but the interpretation itself is completely arbitrary. You have fallen prey to another class-of-problem fallacy. Engineers in some foreign country with a reversed interpretation of the Boolean values of voltage levels could still order exactly the same logic gates and build exactly equivalent computers; they would just be annoyed at having to change all the labels!
  #220  
Old 05-24-2019, 07:02 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by wolfpup View Post
ISTM that you're just restating exactly the same question that I already answered in post #194.
You did respond, but it had phrases like "if your..." which sounded like you were exploring hypothetical angles a person might consider; I'm trying to clearly understand what you are considering a computation.

For example, I was thinking it would be possible to answer the following questions with just a yes or no regarding computation:
1 - A lookup table that maps 0110011 to 0100010 (from your example) - is this an example of a computation? Yes or No

2 - A simple circuit that can only perform the mapping it has been built to perform, and it happens to map 0110011 to 0100010 - is this an example of a computation? Yes or No

3 - My laptop computer that is running a program that maps 0110011 to 0100010 - is this an example of a computation? Yes or No
  #221  
Old 05-24-2019, 07:16 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by Half Man Half Wit View Post
This is ridiculous. I don't have to know anything about what a computer does internally in order to check what it computes. I know that my calculator computes the square root, because if I enter a number, punch the SQRT-button, and read off the result, that result will be the square root of the number. What goes on on the inside is just a means to achieving that end; what means is used is wholly irrelevant.
Without seeing the hardware, you don't know it computes the square root until you exhaustively test it. Remember the famous Intel FDIV bug? Everyone knew that logic worked - until they found that it didn't in all cases.
If it were easy to prove that a circuit implements a function as you think, we could have saved a lot of electricity running simulations on thousands of processors.

Quote:
I've given the mappings in my post #93, but for convenience, let me repeat them here.

So, first, take 'u' to mean '1', 'd' to mean '0', 'o' to mean '1', and 'x' to mean '0'. Then, interpret the first two switches, S11 and S12, as one binary number, switches S21 and S22 as another, and L1-L3 as a three-bit binary number. Then, the table above becomes:

Code:
x1 |  x2   ||   f(x1, x2)
-----------------------
0  |  0    ||       0
0  |  1    ||       1
0  |  2    ||       2
0  |  3    ||       3
1  |  0    ||       1
1  |  1    ||       2
1  |  2    ||       3
1  |  3    ||       4
2  |  0    ||       2
2  |  1    ||       3
2  |  2    ||       4
2  |  3    ||       5
3  |  0    ||       3
3  |  1    ||       4
3  |  2    ||       5
3  |  3    ||       6
In other words, the device computes binary addition.

Now, keep the state identifications; but instead, read the binary numbers from left to right, with e.g. S11 being the least significant bit 2^0, S12 being 2^1, and likewise for S21 and S22. L1 then corresponds to 2^0, L2 to 2^1, and L3 to 2^2. Then, the above table becomes:

Code:
x1 |  x2   ||  f'(x1, x2)
-----------------------
0 |   0    ||       0
0 |   2    ||       4
0 |   1    ||       2
0 |   3    ||       6
2 |   0    ||       4
2 |   2    ||       2
2 |   1    ||       6
2 |   3    ||       1
1 |   0    ||       2
1 |   2    ||       6
1 |   1    ||       1
1 |   3    ||       5
3 |   0    ||       6
3 |   2    ||       1
3 |   1    ||       5
3 |   3    ||       3
This is a perfectly sensible function, a perfectly sensible computation, but it's not addition. Hence, the device can be seen to compute distinct functions on an exactly equivalent basis.
Your two output mappings are isomorphic. I don't care what you call it, they implement the same function.
I've got a better example for you. Add 1 to 65, and you get 66, right? Well, if you are interpreting these things as ASCII, A + 1 = B. But if you did this on the LGP-21 I learned to program on, which doesn't use ASCII, you don't get A at all.
Quote:
I should not need to point this out, but of course, I could've used any number of different mappings, obtaining a different computation for each. I could've considered 'u' to mean '0' and 'd' to mean '1'; I could've swapped the meaning (independently) for the lamps; I could've considered the four switches to represent one four-bit number; and so on. Each of these yields a perfectly sensible, and perfectly different, computation.
No, you have the same computation. Or equivalent computations, if you must. A function is not defined by what it does, like addition; it is defined as a mapping from its input space to its output space, and in your case it would be easy to prove f and f' equivalent. I forget the proper term; it's been a while since I took this stuff.
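
To make the relabelling explicit, here is a small sketch (not from the thread; it just uses the bit-reversal readings defined above): f' is f with its two-bit inputs and three-bit output renamed by reversing their bits, which is the sense in which the two tables differ only by a relabelling:

Code:
# Sketch only: f' = (reverse 3 bits) o f o (reverse 2 bits on each input).
def f(x1, x2):             # the MSB-first reading: plain addition of two 2-bit numbers
    return x1 + x2

def rev(value, width):     # the relabelling: reverse the bits of a fixed-width number
    return int(format(value, f'0{width}b')[::-1], 2)

def f_prime(x1, x2):       # the LSB-first reading, expressed via the relabelling
    return rev(f(rev(x1, 2), rev(x2, 2)), 3)

for x1 in range(4):
    for x2 in range(4):
        print(x1, x2, f_prime(x1, x2))   # reproduces the f' table quoted above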
Quote:
The one thing I require for a system to implement a computation is that it can be used to perform that computation. Hence, my calculator implements arithmetic, because I can use it to do arithmetic. In the same way, I can use the device above to add binary numbers: suppose I have the numbers 2 and 3 and don't already know their sum; the device could readily tell me---provided I use the correct interpretation!
The ALU on your calculator produces a binary number, right? This is fed into a display which is supposed to show numbers. Say the display has a defect which causes it to isomorphically distort the numbers 0 - 9 to gibberish. Has your calculator stopped doing arithmetic?
Quote:
But the latter is exactly the same for ordinary computers. If I try to compute the square root of 9, and my device displays 3, but I think that it's the Cyrillic letter Ze, I won't know the square root of 9.
Is this the Copenhagen interpretation of arithmetic? If a tot pushes buttons on the calculator he is doing arithmetic despite the fact that he can't interpret what the display shows.
Quote:
Fine. So now's your time to shine: put me in my place by demonstrating how the above internal functioning of the box singles out one among its many possible computations. It's 'obvious', after all. Put up or shut up!
Easy. Same computation. And for f' you wired up your display wrong (have done it myself) but that changes nothing.
  #222  
Old 05-24-2019, 07:21 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by RaftPeople View Post
The gravity example illustrates that there are some attributes of systems that can't be reproduced by a representative simulation (representative meaning it's just a bunch of numbers+symbols that transition in the same way as the original IF we interpret them according to some set of rules (e.g. these numbers represent the position, mass, etc. of each particle)).

We don't know if consciousness is more like a physical attribute like gravity, or if it's a logical attribute that can be created by performing the right transformations in the right sequence (and interpreting the results as external observers, (e.g. yep we just got sequence 101100101001, that represents consciousness when the input is X and the previous state is Y).
If you're in the "the universe is a simulation" camp, then your simulation of gravity would be gravity to people living in the simulation. But yeah, simulations only go so far. You can't produce gravity outside the simulation, though I've read some sf stories that kind of do this thing. Not very good ones, though.
  #223  
Old 05-24-2019, 07:25 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by Voyager View Post
If you're in the "the universe is a simulation" camp, then your simulation of gravity would be gravity to people living in the simulation. But yeah, simulations only go so far. You can't produce gravity outside the simulation, though I've read some sf stories that kind of do this thing. Not very good ones, though.
I'm not in that camp, I was just trying to understand begbert2's position.
  #224  
Old 05-25-2019, 01:55 AM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by RaftPeople View Post
I'm not in that camp, I was just trying to understand begbert2's position.
Sorry, didn't mean to imply that you were. But that's the only situation I can think of where simulated gravity acts like real gravity.
  #225  
Old 05-25-2019, 03:03 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
A simulation of a black hole simulates the processes that occur when a black hole exerts gravity. It does not create real mass, or real gravity.
A simulation of a mind (if it existed) would simulate the processes in a mind. If the inputs and outputs of a simulated mind were the same as the inputs and outputs of a biological mind, then the mind would have been successfully simulated, by the terms of the hypothetical. Don't fight the hypothetical.
  #226  
Old 05-25-2019, 04:49 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by begbert2 View Post
But that's okay. If the definition is so secret (or complicated) that you can't repeat it for me, I'll just assume that it's something to do with you screwing up and trying to claim that your observations of the black box somehow impact what's happening inside it.
You assume lots of things. But actually, I was in error---I didn't clarify my notion of computation in response to RaftPeople, but rather, in response to you (I quoted it in response to RaftPeople). So while I have no reason to believe it'll stick with you any better than the first time around, here it is again:

Quote:
Originally Posted by Half Man Half Wit View Post
Computation is nothing but using a physical system to implement a computable (partial recursive) function. That is, I have an input x, and want to know the value of some f(x) for a computable f, and use manipulations on a physical system (entering x, pushing 'start', say) to obtain knowledge about f(x).

This is equivalent (assuming a weak form of Church-Turing) to a definition using Turing machines, or lambda calculus, or algorithms. What's more, we can limit ourselves to computation over finite binary strings, since that's all a modern computer does.
Quote:
Originally Posted by begbert2 View Post
All that reality "computes" is particles moving around. The typical notion that there are such a thing as "planets" is mistaken; that's just a coincidental arrangement of particles that may or may not be moving in the same direction (I haven't checked).
It's a good reflection of the validity of the computationalist stance that in this thread, two people ostensibly defending the same idea feel forced to resort to mutually exclusive, albeit equally ridiculous, stances---you, now essentially denying that anything ever computes at all, there being only the physical evolution (which, as pointed out copiously now, trivializes computationalism and makes it collapse onto identity theory physicalism), and wolfpup, who resorts to mystical emergence of some novel stuff that's just gonna magically fix the problem. I suppose I can take some solace in the fact that the both of you at least seem to realize that the straightforward 'vanilla' version of computationalism---a physical system executes a program C_M which produces a mind M---isn't going to work.

But of course, neither of your strategies---reducing the execution of a program to merely being a redescription of the physical evolution of the system, or claiming that something else will happen along once things get sufficiently complicated---is going to help computationalism any; in fact, both abandon it explicitly.

In particular, on your response, no physical system has ever computed a square root, since square roots aren't 'particles moving around'; yet of course, square roots are routinely computed. So tell me, what do you think somebody means when they say that they have computed the square root of 81 using their calculator?

You're also reneging on your earlier assertion that the internal wiring would make it obvious that 'only one function' is used to transform inputs to outputs:

Quote:
Originally Posted by begbert2 View Post
Your "thus" is flat wrong and stupid. When the box in your original argument transformed its input into its output it used a specific approach to do so. It didn't use all theoretically possible approaches to do so; it used one approach to do so. It doesn't use "different functions" to map the inputs to the result, it uses only one function to do so. Which function? Whichever one it used. You can't tell which from the outside, but frankly reality doesn't give crap what you know.

[...]

The internal wiring of the box is entirely, controllingly important to determining how that box functions. And more importantly to destroying your argument, it's important in that the fact that the internal wiring must exist and must implement a specific function completely destroys that assumption you're relying on.
Now, you're claiming that the box doesn't actually implement a specific function at all. In particular, you seem to have completely lost sight of the fact that the internal wiring was supposed to put matters of underdetermination to rest, rather now claiming that it's the input/output mapping after all that determines the computation:

Quote:
Originally Posted by begbert2 View Post
In real-world terminology, what the box "computes" is its output. See that laboriously-crafted mapping you made from inputs to outputs? That's what it "computes". It "computes" which lights to light up when the input switches are set in various ways. That's what the box does. And golly gosh, it's entirely consistent and non-self-contradictory about it.
For this mapping, of course, the internal wiring matters not one bit. The two quotes are thus in direct contradiction. So what, really, did knowing the internal wiring allow you to conclude?

Quote:
Originally Posted by begbert2 View Post
Until you explain to me how your argument doesn't work on the WOW/MOM paper, I've not only brought the bacon, but set it in front of you with a bit of parsley garnish on the side. You just haven't decided to bite.
Because that example trades on mistaking the form of the symbol, rather than interpreting its content differently. There is a syntactical difference between MOM and WOW, such that the same system, being given either one, may react differently; the point I'm trying to make is, however, related to semantic differences---see my earlier example of the word 'gift'.

For the box, your MOM/WOW example would be analogous to re-wiring the switches to the various boxes---thus, changing the way it reacts to inputs. But that's not what this is about.

Quote:
Originally Posted by wolfpup View Post
You then desperately starting backpedaling when I showed that, far from having been "dismantled", CTM plays a vital central role in cognitive science.
I have not backpedaled on anything, and frankly, your attempt to tar me with covertly shifting my position makes it seem like you're childishly trying to score a cheap 'win'.

My position, as outlined in the very first post I made here, is that the CTM, as entailing a claim that the mind is a computational entity, is indeed wrong, because there is at least one aspect of the mind that cannot be computational without lapsing into circularity, that of interpretation. I have, as soon as you started to claim that the falsity of CTM would undermine lots of current cognitive science, pointed out that that's fallacious (in post #103)---the fact that one can model certain aspects of the brain computationally does not depend on the claim that, as the SEP article puts it, 'the mind literally is a computing system'.

In other words, even if the mind is not (wholly) computational, computational modeling can be very useful. This is the position I've consistently held to during this whole discussion: CTM ('the mind literally is a computing system') wrong, computational modeling useful.

Quote:
The real point I was getting at is that the only distinction between your functions f and f' is a class-of-problem distinction and not a computational distinction, because the computation is invariant regardless of whether it is observed by someone interested in solving f class of problems, someone interested in solving f' class of problems, or a squirrel.
I don't think I understand what you mean by a 'class of problem'-distinction. But it's completely clear that they're different computations as defined via partial recursive functions. The typical understanding of computation would hold that since my f and f' fall into that class of functions, and are distinct members of that class, they're distinct computations, period.

Quote:
Your "tour-de-force" was a tour-de-fizzle. It asounds me that you think this proves anything different than your original box with lights and switches;
Oh, you're right, it absolutely doesn't---it was begbert2 who insisted that showing the box's innards would add something to the issue, to the point of making it 'trivial' to decide which one it is. Now, he's back to claiming that it's solely the input/output table that characterizes the computation, but well, neither's a problem for my position, so which way he flip-flops isn't really of much consequence to me.

Quote:
There is a useful kind of logic gate that outputs a high voltage (say) if and only if both of its inputs are also high (otherwise different voltage inputs result in a low-voltage output). There is another useful kind that outputs a high voltage if either of its inputs are high (so therefore a low voltage output if and only if both inputs are low). If you choose "high voltage" to mean "1" or "TRUE", then you get to label the first kind an "AND" gate and the second kind an "OR" gate. If you choose the reverse interpretation, the labels are reversed, that's all. For me to build a useful computer, I need both kinds, and all that is required is a consistent interpretation, but the interpretation itself is completely arbitrary.
That's why I chose my example such that both f and f' follow from the same consistent interpretation of voltages and gates, yet still implementing different computations. So no, having a consistent assignment is not enough to specify one single computation.

Of course, the 'consistency'-requirement is wholly arbitrary. I could easily consider one switch's 'u' to mean '1', and another's to mean '0'. I could also just flip the interpretations of the lamps, leading to yet more computations. And so on. The key factor is always, as repeated at the top of this post, whether I can actually use a system to perform a computation. If I can, then, in my opinion, claiming that the system doesn't really implement that computation is meaningless sophistry.
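
To make that concrete, here's a minimal sketch in Python (the wiring and the bit conventions are mine, hypothetical stand-ins for the box, not a claim about any particular hardware): one and the same physical switch-to-lamp behaviour, read under two internally consistent conventions, comes out as two different arithmetic functions.

Code:
# One fixed physical behaviour: four switches ('u'/'d') in, three lamps (on/off) out.
# The wiring below is a stand-in, not the actual box from the thread.
def box(s1, s2, s3, s4):
    bits = [1 if s == 'u' else 0 for s in (s1, s2, s3, s4)]
    total = (bits[0] * 2 + bits[1]) + (bits[2] * 2 + bits[3])
    return (bool(total & 4), bool(total & 2), bool(total & 1))   # lamp states

# Reading 1: 'u' = 1, lamps carry place values 4, 2, 1.
def read_f(switches):
    l1, l2, l3 = box(*switches)
    return 4 * l1 + 2 * l2 + 1 * l3

# Reading 2: same switch and lamp states, but the lamps carry place values 1, 4, 2.
def read_f_prime(switches):
    l1, l2, l3 = box(*switches)
    return 1 * l1 + 4 * l2 + 2 * l3

setting = ('d', 'u', 'u', 'd')                  # one physical switch setting
print(read_f(setting), read_f_prime(setting))   # 3 under one reading, 6 under the other

Same switches, same lamps, same physics; only the reading differs, and with it the function computed.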

Quote:
Originally Posted by Voyager View Post
Without the hardware, you don't know it computes the square root until you exhaustively test it.
That's not right, actually. The hardware on its own will tell you very little, in general, about what is computed---the reason being Rice's theorem: every non-trivial property of an algorithm is undecidable. (Just think about the question of whether the computation will halt: just knowing the hardware won't tell you in general---other than by explicit simulation.)

So basically, a computation is individuated by its mapping of inputs to outputs. After all, that only makes sense: a calculation likewise is just the result of some set of mathematical operations. The calculation doesn't differ regarding what process you used to arrive at the result; whether you've used the Babylonian method or the digit-by-digit method, in both cases, you've computed the square root.
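
For instance, here's a sketch (restricted to integer square roots so the comparison is exact; the code is purely illustrative): the Babylonian iteration and the binary digit-by-digit method are entirely different procedures with exactly the same mapping of inputs to outputs.

Code:
def isqrt_babylonian(n):
    """Integer square root via the Babylonian (Newton) iteration."""
    if n == 0:
        return 0
    x, y = n, (n + 1) // 2
    while y < x:
        x, y = y, (y + n // y) // 2
    return x

def isqrt_digit_by_digit(n):
    """Integer square root via the binary digit-by-digit method."""
    res, bit = 0, 1 << ((n.bit_length() + 1) // 2 * 2)
    while bit > n:
        bit >>= 2
    while bit:
        if n >= res + bit:
            n -= res + bit
            res = (res >> 1) + bit
        else:
            res >>= 1
        bit >>= 2
    return res

# Different internal processes, identical mapping of inputs to outputs.
assert all(isqrt_babylonian(n) == isqrt_digit_by_digit(n) for n in range(10000))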

Quote:
If it were easy to prove that a circuit implements a function as you think, we could have saved a lot of electricity running simulations on thousands of processors.
I think the opposite---there is no fact of the matter regarding which function a circuit implements, unless it is suitably interpreted.

Quote:
Your two output mappings are isomorphic. I don't care what you call it, they implement the same function.
If that were the case, then all functions with the same domain and codomain would be the same function, since there is always an isomorphism (a trivial permutation of elements) linking any two of these functions. Then, there would be no computing the square root, for instance, as computing the square root is trivially isomorphic to 'computing the square root + 1', and still, if we require a student to calculate a square root in an exam, and they take out their calculator to do so, we will mark them down if they calculated the square root + 1.

So no, isomorphism does not make the computations the same, without robbing the word 'computation' of all its usual meaning. Furthermore, if you intend to identify all such computations, then again, what you're left with will simply be a restatement of the box's physical evolution---of which lamps light up in response to which switches are flipped. In that case, as already pointed out, the claim that distinguishes computationalism from simple identity theory physicalism evaporates, and the theory developed to address grave problems with the latter (such as multiple realizability) collapses onto it.

Quote:
The ALU on your calculator produces a binary number, right? This is fed into a display which is supposed to show numbers. Say the display has a defect which causes it to isomorphically distort the numbers 0 - 9 to gibberish. Has your calculator stopped doing arithmetic?
Say my box has a defect in one lamp that changes the way it's wired up. Thus, its input-output behavior changes. Does it still compute the same function?

Say you've never seen a calculator before, and find the one with a defect display. Do you know what it computes, anyway?

We recognize the calculator as defective, because it fails to fulfill its intended function. But, provided you don't want to argue that what computation a system implements teleologically depends on the intentions of its maker (a position that would be disastrous for the computational theory of mind), once you strip that intention away, a system that fails to compute some function may well successfully compute another---it's just that that's not one we're necessarily interested in.

Last edited by Half Man Half Wit; 05-25-2019 at 04:51 AM.
  #227  
Old 05-25-2019, 05:23 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by Voyager View Post
If you're in the "the universe is a simulation" camp then your simulation of gravity would be gravity to people living in the simulation. But yeah, simulations only go so far. You can't produce gravity outside the simulation, though I've read some sf stories that kind of do this sort of thing. Not very good ones, though.
It always gives me pause that nobody ever acknowledges the sheer insanity of the position that a simulation literally creates some sort of mini-pocket universe with its own laws of physics and the like. Think about what that would entail: somehow, the right pattern of voltages in a desk-sized apparatus suffices to conjure up, say, an entire solar system, complete with things like gravity, radiation, electromagnetic fields and all---where, even, does the energy to create all that come from?---which nevertheless is somehow hermetically closed off from us, except for some small window which somehow allows us to peek in on it, while still screening off all that gravitation and so on.

The universe from the simulation is thus both connected to ours---we can, after all, look in on it, which is the whole point of the simulation---and completely disconnected---there are no effects from gravity outside of the simulation.

But it gets weirder than that. We could also envision a computer built not from something as ephemeral as voltage patterns, but rather, a mechanical device, that does nothing but shuffle around signs on paper (as in a Turing machine)---yet, the right sort of signs somehow suffice to conjure up stars and planets and all. This is almost literally the idea of magic---write down the right formula, and just by virtue of that, stuff happens.

Moreover, I trust that where a 'simulation' of a universe is held to be a universe, in some sense, a simple recorded movie of that simulation won't be one. Otherwise, we're back to a simple description being all that's needed to make something real, in which case certain crime/horror writers would have a lot to answer for. So we'll stipulate that some mere 'description' like a movie doesn't 'call into being' a universe the way its actual 'simulation' does.

But where does a description end, and a simulation begin? In a simulation, one might hold, the various states of the system are connected by an internal logic, such that the state at t + 1 follows from the state at t. In a movie, however, I can, in principle, shuffle around the frames every which way; there's no 'computational work' being done to connect them.

But then, suppose I compress the movie. Typical compression schemes will take certain frames of the movie (key frames), and compute successive frames as changes from those, to save space encoding each frame. Then, there is computational work being done to connect the different frames.
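
As a toy illustration (treating a 'movie' as a list of equal-length tuples of numbers, which is of course a drastic simplification of real codecs): key frames are stored as-is, the rest only as differences, and the decoder has to do a bit of computational work to rebuild them.

Code:
def compress(frames, key_every=4):
    """Keep every key_every-th frame in full, store the rest as per-'pixel' deltas."""
    out = []
    for i, frame in enumerate(frames):
        if i % key_every == 0:
            out.append(('key', frame))
        else:
            out.append(('delta', tuple(a - b for a, b in zip(frame, frames[i - 1]))))
    return out

def decompress(stream):
    """Rebuild the frames; every delta frame is computed from its predecessor."""
    frames = []
    for kind, data in stream:
        if kind == 'key':
            frames.append(data)
        else:
            frames.append(tuple(a + b for a, b in zip(frames[-1], data)))
    return frames

movie = [(i, 2 * i, i * i) for i in range(10)]    # a tiny three-'pixel' movie
assert decompress(compress(movie)) == movie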

In fact, a simulation can be seen as merely a highly compressed version of a movie. The formulas computing the orbit of planets, say, can be abstracted from simply observing how they move around---as, in fact, they have been. So it's entirely reasonable to imagine that a highly developed compression algorithm could compress the movie showing our simulation of a solar system in such a way as to infer the original simulation's basic equations from it---in fact, exactly this has already happened: ten years ago, a machine inferred Newton's laws from observations; these sorts of endeavors have blossomed in recent years, with neural networks now routinely inferring physical laws.
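
In the same spirit, here's a deliberately tiny version of 'inferring a law from observations' (nothing like the symbolic-regression systems alluded to above, just a least-squares fit to made-up free-fall data): the quadratic coefficient, i.e. g/2, can be read straight back out of the observations.

Code:
import numpy as np

g = 9.81
t = np.linspace(0.0, 2.0, 50)        # observation times
d = 0.5 * g * t ** 2                 # 'observed' free-fall distances

# Fit d = a*t^2 + b*t + c and recover the coefficient a = g/2 from the data alone.
a, b, c = np.polyfit(t, d, 2)
print(round(2 * a, 2))               # ~9.81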

But then, must I be wary when compressing a movie, lest I compress it too much and it turns into a simulation, thus accidentally calling a whole new universe into being?

I think this is an absurd consequence, and the correct answer is that no matter how much I compress, or simulate, I always end up with what merely amounts to a description of something, and no description entails the reality of the things that it describes. So a simulation of a universe doesn't call a universe into being any more than writing about a universe does. Likewise, a simulated pain isn't any more a real pain than a pain written about is.

Sure, there are certain kinds of things that are equivalent among descriptions and the things described. For instance, if I write 'Alice calculated that the sum of 2 and 3 is 5', then the sentence contains that fact just as much as any real calculation does; but it does not follow, from there, that when I write 'Alice saw a tree', there is actually a tree that is being seen by Alice, nor does 'Alice had a headache' imply that there is somebody named Alice who is in actual pain. And exactly like that is it with computer simulations, as well.
  #228  
Old 05-25-2019, 08:07 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Likewise, a simulated pain isn't any more a real pain than a pain written about is.
A fully simulated pain would be just as painful as a real pain, because it would be a real pain.
  #229  
Old 05-25-2019, 08:51 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
But, provided you don't want to argue that what computation a system implements teleologically depends on the intentions of its maker (a position that would be disastrous for the computational theory of mind), once you strip that intention away, a system that fails to compute some function may well successfully compute another---it's just that that's not one we're necessarily interested in.
Exactly. The only computation we are interested in is the one relating the inputs we provide to the outputs that come out. All the other computations are garbage. I do not claim that they do not exist - just that they are irrelevant.
If I type FISH into my laptop, the word 'FISH' appears on the screen - this is its teleological function. None of your garbage computations affect that, so they are irrelevant. If the computer fails to perform the one that I want, I do not care if the computer is still capable of performing other garbage computations - if it doesn't do the one I want it is useless.
  #230  
Old 05-25-2019, 08:52 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by eburacum45 View Post
A fully simulated pain would be just as painful as a real pain, because it would be a real pain.
Good to hear. A question begged is a question answered, I guess.
  #231  
Old 05-25-2019, 09:02 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by eburacum45 View Post
Exactly. The only computation we are interested in is the one relating the inputs we provide to the outputs that come out. All the other computations are garbage. I do not claim that they do not exist - just that they are irrelevant.
As I do not seem to tire of pointing out to you, that's barking up the wrong tree. Inputs and outputs are the same for all these computations; again, that's the very point of my examples. There is not one computation that produces these outputs (in terms of symbols on a screen) from those inputs (in terms of switches flipped, or keys pressed), but rather, they all do.


Quote:
If I type FISH into my laptop, the word 'FISH' appears on the screen - this is its teleological function.
That's neither an example of teleology, nor of a function, anymore than writing 'fish' on a piece of paper is. But that's actually beside the point. The issue is with what a given set of symbols---such as FISH---means. A difference in the interpretation of these symbols---which, once more, will be the same for each of the computations under discussion---will lead to a difference in computation. Just as there is nothing about the symbols FISH that relates in any way, shape or form to actual aquarian animals, there is nothing about a certain lamp that makes it mean 22 rather than 20, say. But with that difference, the computation performed will likewise differ.

Last edited by Half Man Half Wit; 05-25-2019 at 09:03 AM.
  #232  
Old 05-25-2019, 09:04 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
You are confusing processes with physical objects again, I see. If we can simulate the process of feeling a pain, then we would be simulating a real pain. If we could simulate the process of consciousness, we would be simulating real consciousness. Pain and consciousness are in a different category to mass and temperature, because they are processes, not physical characteristics.
  #233  
Old 05-25-2019, 09:06 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
That's neither an example of teleology, nor of a function, anymore than writing 'fish' on a piece of paper is.
The computer was designed to display FISH when I type, just as the piece of paper was designed to display the marks that I put on it.
  #234  
Old 05-25-2019, 09:11 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Originally Posted by Half Man Half Wit View Post
Just as there is nothing about the symbols FISH that relates in any way, shape or form to actual aquarian animals, there is nothing about a certain lamp that makes it mean 22 rather than 20, say. But with that difference, the computation performed will likewise differ.
Fish is also a card game, so the word could refer to that. Or it could be a person's name. The interpretation in this case is irrelevant, just as irrelevant as your alternate computations.
  #235  
Old 05-25-2019, 09:18 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by eburacum45 View Post
Fish is also a card game, so the word could refer to that. Or it could be a person's name. The interpretation in this case is irrelevant, just as irrelevant as your alternate computations.
No, because the interpretation *is* the computation. Everything else is just the physical evolution of the system. But what makes my box compute the sum of two numbers---which it unambiguously does, as I can actually use it to find the sum of two numbers I didn't know before, which is all that computing something means---is my interpreting its inputs and outputs properly.

Else, you're welcome to tell me what my box actually computes without interpretation that's distinct from its mere physical evolution, yet independent of interpretation. Of course, like everybody else spouting big talk about this in this thread so far, you're not gonna.
  #236  
Old 05-25-2019, 09:24 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
So if I type FISH but never look at the screen, no computation has been performed? Balderdash.
  #237  
Old 05-25-2019, 01:26 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by Half Man Half Wit View Post


That's not right, actually. The hardware on its own will tell you very little, in general, about what is computed---the reason being Rice's theorem: every non-trivial property of an algorithm is undecidable. (Just think about the question of whether the computation will halt: just knowing the hardware won't tell you in general---other than by explicit simulation.)
Rice's theorem only applies to non-trivial functions. If I'm understanding it, many of the computations of interest would be trivial in those terms.
I also think you are misunderstanding the halting problem. The fact that there is no general procedure that decides halting for every algorithm in a domain does not mean that it is impossible to prove that a specific algorithm in that domain halts.
And the hardware, in many cases, tells you a lot about what is being computed. A full adder, for example - though the interpretation "addition" is one of many logically equivalent ones.
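
For reference, here's what I mean, as a quick sketch (the gate-level logic written out in Python, purely illustrative): the structure strongly suggests 'addition'---though, as I said, only under the usual reading of the bits.

Code:
def full_adder(a, b, cin):
    """One bit position, built from the usual gates: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Under the standard reading of the bits, the gate structure really is addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin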
Quote:
So basically, a computation is individuated by its mapping of inputs to outputs. After all, that only makes sense: a calculation likewise is just the result of some set of mathematical operations. The calculation doesn't differ regarding what process you used to arrive at the result; whether you've used the Babylonian method or the digit-by-digit method, in both cases, you've computed the square root.
Assuming you can prove them equivalent, either by exhaustively comparing inputs and outputs or by proving equivalence of the internals.
Quote:
I think the opposite---there is no fact of the matter regarding which function a circuit implements, unless it is suitably interpreted.
Is there such a thing as an uninterpreted output? If not, then what you say is trivially true. But the output domain is part of the definition of the function, and if you don't consider that to be interpreted then what you say is not true.
Quote:
If that were the case, then all functions with the same domain and codomain would be the same function, since there is always an isomorphism (a trivial permutation of elements) linking any two of these functions. Then, there would be no computing the square root, for instance, as computing the square root is trivially isomorphic to 'computing the square root + 1', and still, if we require a student to calculate a square root in an exam, and they take out their calculator to do so, we will mark them down if they calculated the square root + 1.
Clearly that's not what I said at all. The mapping is between the outputs for specific inputs, just as in your example. The output domain in your example is not defined, which is unimportant, since it clearly at least includes the listed outputs.
Now adding one to the output represents a functional transformation. If you want to call that a different function be my guest, but it is typically not considered as such.
Quote:
So no, isomorphism does not make the computations the same, without robbing the word 'computation' of all its usual meaning. Furthermore, if you intend to identify all such computations, then again, what you're left with will simply be a restatement of the box's physical evolution---of which lamps light up in response to which switches are flipped. In that case, as already pointed out, the claim that distinguishes computationalism from simple identity theory physicalism evaporates, and the theory developed to address grave problems with the latter (such as multiple realizability) collapses onto it.
For trivial examples without state you can derive a function and computation given an exhaustive listing of outputs and inputs.
Explain please why you think multiple realizability is an issue here. I've read the wiki article on it and it does not seem to be a resolved issue.

Quote:
Say my box has a defect in one lamp that changes the way it's wired up. Thus, its input-output behavior changes. Does it still compute the same function?

Say you've never seen a calculator before, and find the one with a defect display. Do you know what it computes, anyway?
The lamp is the interpretation. The computation takes place and produces outputs which are inputs to the lamp. Thus the computation is the same in either case. Now, if the lamp is defective in a way which maps several ALU outputs to a single lamp output, then the function has changed from the perspective of the total calculator, but that is because the function of the lamps has changed. If the defect preserves the 1-1 mapping of ALU outputs to lamp states, then the function has not changed in the total calculation, even if the interpretation has been changed.
Say a person is mute. Is the operation of his brain fundamentally different because he communicates in ASL versus a person who can speak?
Quote:
We recognize the calculator as defective, because it fails to fulfill its intended function. But, provided you don't want to argue that what computation a system implements teleologically depends on the intentions of its maker (a position that would be disastrous for the computational theory of mind), once you strip that intention away, a system that fails to compute some function may well successfully compute another---it's just that that's not one we're necessarily interested in.
Depends on how you define the calculator. As a whole, including the computation and the interpretation, it does not meet its specifications. But you'd be foolish to replace the ALU. Replacing the interpretation function, the lights, would restore it to its intended function. Unless you consider the lights to be fundamental to its computation, the computation is still being done even if the interpretation is defective.
Hell, turn a working calculator upside down. The interpretation of the output must change. Is the computation different?
  #238  
Old 05-25-2019, 01:26 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,955
Quote:
Originally Posted by Half Man Half Wit View Post
I have not backpedaled on anything, and frankly, your attempt to tar me with covertly shifting my position makes it seem like you're childishly trying to score a cheap 'win'.

My position, as outlined in the very first post I made here, is that the CTM, as entailing a claim that the mind is a computational entity, is indeed wrong, because there is at least one aspect of the mind that cannot be computational without lapsing into circularity, that of interpretation. I have, as soon as you started to claim that the falsity of CTM would undermine lots of current cognitive science, pointed out that that's fallacious (in post #103)---the fact that one can model certain aspects of the brain computationally does not depend on the claim that, as the SEP article puts it, 'the mind literally is a computing system'.

In other words, even if the mind is not (wholly) computational, computational modeling can be very useful. This is the position I've consistently held to during this whole discussion: CTM ('the mind literally is a computing system') wrong, computational modeling useful.
Well, thank you, I guess, for so clearly repeating the previous wrongness. Take that last sentence, which I agree lays out your position quite clearly. It is directly contradicted by the quoted bit from the Stanford Encyclopedia of Philosophy, which actually goes out of its way to very explicitly define classical CTM as being precisely the theory that the brain is literally a computer (though I think most theorists today would prefer to say that mental processes are literally computational), and that CTM is not some mere modeling metaphor, say the way we model climate systems to better understand them. We are under no illusions that climate systems are computational in any meaningful sense, but the proposition that mental processes are computational is mainstream in cognitive science.

If you don't believe the SEP you might note that CTM proponents in cognitive science describe CTM as the idea "that intentional processes are syntactic operations defined on mental representations" (i.e.- symbols). This view precisely reflects the computational paradigm, specifically the one set out by Turing, and that's no accident.

I'm sorry if I was getting snippy, but I'm not trying to score "cheap" debating points; I am frankly deeply annoyed that you dismiss as "wrong" and impossible one of the foundational ideas in cognitive science today, manage to somehow misinterpret what it means, and base this conclusion on what is essentially a trivial fallacy about what "computation" really is. It's well said that challenging well-established theories requires a correspondingly persuasive body of evidence. What you've provided is a silly homunculus fallacy.

Quote:
Originally Posted by Half Man Half Wit View Post
That's why I chose my example such that both f and f' follow from the same consistent interpretation of voltages and gates, yet still implementing different computations. So no, having a consistent assignment is not enough to specify one single computation.

Of course, the 'consistency'-requirement is wholly arbitrary. I could easily consider one switch's 'u' to mean '1', and another's to mean '0'. I could also just flip the interpretations of the lamps, leading to yet more computations. And so on. The key factor is always, as repeated at the top of this post, whether I can actually use a system to perform a computation. If I can, then, in my opinion, claiming that the system doesn't really implement that computation is meaningless sophistry.
Show me where I said the system "doesn't implement" either your f or f' computation. I said it implements both of them. More precisely, it performs a computation that solves both classes of problem.

On your point that both functions (in your box example) obtain with the same interpretation of bit values, that's another sleight-of-hand performance. Yes, in the case of logic gates, one needs only a consistent view of the mapping of voltage levels to bit values to define their function. In your box example, the nature of the problem being solved also requires a consistent view of the positional values of the bits, as in the binary number system vs the one you invented. A different view leads to a different (but computationally equivalent) description of the problem being solved. This is what I meant by the "class of problem" distinction.


One finds the same distinction in the AND-gate and OR-gate argument. One can trivially observe that the two gate types are fundamentally the same: both have AND-behavior and OR-behavior. They are indistinguishable from each other when both inputs are the same, and when the inputs differ, each consistently produces either H or L. They're effectively doing the same computation, and how we define H and L (TRUE and FALSE, or vice-versa) determines what we call them (i.e.- what problem they're solving).
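
Here's that duality as a quick sketch (the 'H'/'L' labels and the gate itself are hypothetical, just to illustrate): one and the same physical gate comes out as AND under one labelling of the voltages and as OR under the opposite labelling.

Code:
def gate(x, y):
    """The physical device: output is 'H' exactly when both inputs are 'H'."""
    return 'H' if (x == 'H' and y == 'H') else 'L'

label_a = {'H': 1, 'L': 0}   # labelling A: high voltage means 1
label_b = {'H': 0, 'L': 1}   # labelling B: high voltage means 0

for x in ('H', 'L'):
    for y in ('H', 'L'):
        out = gate(x, y)
        assert label_a[out] == (label_a[x] & label_a[y])   # under A it's an AND gate
        assert label_b[out] == (label_b[x] | label_b[y])   # under B the same device is an OR gate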

Last edited by wolfpup; 05-25-2019 at 01:30 PM.
  #239  
Old 05-25-2019, 02:10 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by Half Man Half Wit View Post
It always gives me pause that nobody ever acknowledges the sheer insanity of the position that a simulation literally creates some sort of mini-pocket universe with its own laws of physics and the like. Think about what that would entail: somehow, the right pattern of voltages in a desk-sized apparatus suffices to conjure up, say, an entire solar system, complete with things like gravity, radiation, electromagnetic fields and all---where, even, does the energy to create all that come from?---which nevertheless is somehow hermetically closed off from us, except for some small window which somehow allows us to peek in on it, while still screening off all that gravitation and so on.

The universe from the simulation is thus both connected to ours---we can, after all, look in on it, which is the whole point of the simulation---and completely disconnected---there are no effects from gravity outside of the simulation.
Gravity outside the simulation won't affect the inside of the simulation, and vice versa.
However, I agree with you about the absurdity of the "it's simulations all the way down" hypothesis and that is because of energy. Information takes energy, and a full simulation of a universe requires energy for the information contained in that universe. (Which accounts for any possible compression, of course.) A simulation inside the simulation requires energy for that simulation as well as energy for the rest of the universe being simulated. Multiple universe simulations are even worse. Plus, our simulation time step seems to be Planck time, and that will take at least Planck time to simulate (actually much longer) so you'd need a really long research grant.
A full universe simulation requires that we are able to simulate intelligence, but simulating intelligence does not require a full universe simulation, so I don't think we have to worry about this issue.
  #240  
Old 05-25-2019, 04:44 PM
eschereal's Avatar
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 16,466
Quote:
Originally Posted by begbert2 View Post
Have you ever played a video game? One of the new-fangled ones where you can shoot things - Space Invaders, Asteroids, that kind of thing. In those you can push buttons to move your little digital ship around and make it shoot pixels. And if one of those pixels you shot happens to run into the little cluster of pixels representing an asteroid or an alien, then the behavior of that thing changes - if memory serves it turns into a little drawing of lines coming out of a point and then disappears.
There is a vaguely interesting matter of interpretation here. The original Asteroids (the big dedicated unit that was in arcades and convenience stores) game did not use pixels at all. Its graphics were scribed as lines under direct vector control of the CRT electron beam. You have probably seen personal computer renditions that do in fact use pixels because vector displays have become essentially non-existent.

The game itself has not changed, just the manner in which it is displayed, which means the modern version is very clearly a simulation. I can look at Asteroids on a computer and see that it has only a functional resemblance to the original but the graphics, to my Luddite eye, are never as pleasing.

So how accurate does a simulation have to be to be indistinguishable from the original? Is there any wiggle room?
  #241  
Old 05-25-2019, 08:26 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by RaftPeople View Post
You did respond, but it had phrases like "if your..." which sounded like you were exploring hypothetical angles a person might consider, but I'm trying to clearly understand what you are considering a computation.

For example, I was thinking it would be possible to answer the following questions with just a yes or no regarding computation:
1 - A lookup table that maps 0110011 to 0100010 (from your example) - is this an example of a computation? Yes or No

2 - A simple circuit that can only perform the mapping it has been built to perform, and it happens to map 0110011 to 0100010 - is this an example of a computation? Yes or No

3 - My laptop computer that is running a program that maps 0110011 to 0100010 - is this an example of a computation? Yes or No
Re-posting due to trying to understand the boundaries of computation from your perspective wolfpup.

I believe from HMHW's perspective none of those by themselves are computations until there is an interpretation regarding the symbols and the transformation, just trying to get a handle on if you agree or disagree with that, and if there is a difference between those scenarios.
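
For concreteness, the three scenarios might look something like this (a hypothetical sketch; only the bit strings from the example are fixed, the mask and the flipped positions are made up): a lookup table, a fixed bitwise 'circuit', and a small program all yield the same transformation.

Code:
IN, OUT = '0110011', '0100010'

# 1 - a lookup table
table = {IN: OUT}
def lookup(bits):
    return table[bits]

# 2 - a fixed 'circuit': XOR each input bit with a hard-wired mask
MASK = '0010001'              # chosen so that IN maps to OUT
def circuit(bits):
    return ''.join(str(int(b) ^ int(m)) for b, m in zip(bits, MASK))

# 3 - a small program: flip the bits at two hard-coded positions
def program(bits):
    out = list(bits)
    for i in (2, 6):
        out[i] = '1' if out[i] == '0' else '0'
    return ''.join(out)

assert lookup(IN) == circuit(IN) == program(IN) == OUT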
  #242  
Old 05-26-2019, 12:57 AM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,360
Quote:
Originally Posted by RaftPeople View Post
Re-posting due to trying to understand the boundaries of computation from your perspective wolfpup.

I believe from HMHW's perspective none of those by themselves are computations until there is an interpretation regarding the symbols and the transformation, just trying to get a handle on if you agree or disagree with that, and if there is a difference between those scenarios.
Before I retired I put together a system that gathered manufacturing data from halfway around the world, downloaded it, processed it, and built web pages based on it. Since I built them whenever we got new data, three times a day in some cases, often no one ever looked at the web pages built.

I wonder if HMHW agrees that my program did computations, despite no one being there to interpret the results.
  #243  
Old 05-26-2019, 02:00 AM
neutro is offline
Guest
 
Join Date: Apr 2019
Location: Redmond, WA
Posts: 118
Ok, I've now caught up on this thread. It took a while to read everything.

Is there anyone here who actually understands the argument from HMHW well enough to re-explain it to me? Is there anyone who agrees?

I'm honestly not sure what it has to do with a brain either. A few lights and switches? At best that sounds like a small computational unit. The question of what it computes is going to depend on its input: the computer program. We've talked a lot about what the hardware setup might be, but what's the actual information flowing into the brain and out of the brain? The brain has physical inputs and physical outputs. It's programmed with a complex set of software that's built into the system and that constantly evolves itself based on new inputs.

So we have a bunch of sensors converting signals into the brain. Some kind of computational loop. Consciousness is just the top level control of that loop, the program that decides the importance of the results from the rest of the units in the brain. That program itself can be more or less complex (see a human vs a cat) but it's the same thing.

I don't see any reason why the computation part in between the inputs and the outputs can't be a computer program that simulates the brain's computation. Use a video camera to gather information and feed it into a simulated cortex. It does its computations, simulated or real, and then sends signals to the output (e.g. muscles).

Why are we positing the need for some external observer? It seems superfluous to the whole thing. Layers of neural nets with inputs/outputs and some great software that has evolved over many generations is what makes a brain.

Last edited by neutro; 05-26-2019 at 02:04 AM.
  #244  
Old 05-26-2019, 03:49 AM
Half Man Half Wit's Avatar
Half Man Half Wit is offline
Guest
 
Join Date: Jun 2007
Posts: 6,829
Quote:
Originally Posted by Voyager View Post
Rice's theorem only applies to non-trivial functions. If I'm understanding it, many of the computations of interest would be trivial in those terms.
Rice's theorem applies to non-trivial properties of computable functions. Non-trivial properties are those that hold for some but not all functions in the class, i.e. those properties that can be used to single out a given (set of) functions. The non-trivial properties in my example would be something like 'computes the sum of two input numbers between 0 and 3'.

Quote:
I also think you are misunderstanding the halting problem. The fact that there is no general procedure that decides halting for every algorithm in a domain does not mean that it is impossible to prove that a specific algorithm in that domain halts.
This is true, but I don't see that my argument depends on it. As long as you've got some fixed, algorithmic way of trying to decide whether a given piece of hardware implements a terminating procedure (or any particular procedure, in fact), there will be some pieces of hardware it doesn't work on. And if you've got something else, then computationalism is already wrong.

Quote:
Assuming you can prove them equivalent, either by exhaustively comparing inputs and outputs or by proving equivalence of the internals.
Sure, but the point was that comparing the input/output behavior suffices.

Quote:
Is there such a thing as an uninterpreted output? If not, then what you say is trivially true. But the output domain is part of the definition of the function, and if you don't consider that to be interpreted then what you say is not true.
Well, in principle, only after interpretation is there a well-defined output domain. I could just as easily interpret my box's lamps to represent bits of different value, or not bits at all. Then, the codomain of the function being computed varies.

But really, the important point here is that the codomain isn't lamp states; the output of the function isn't something like 'on, off, on'---that's its physical state, and again, conflating physical and computational entities just ends up trivializing computationalism. Rather, the codomain is given by what those lamp states represent.

Quote:
Clearly that's not what I said at all. The mapping is between the outputs for specific inputs, just as in your example. The output domain in your example is not defined, which is unimportant, since it clearly at least includes the listed outputs.
Now adding one to the output represents a functional transformation. If you want to call that a different function be my guest, but it is typically not considered as such.
Of course a functional transformation leads to a different function. Provided a suitable cardinality of the domains, there always exists a function f'' for any two functions f and f', such that f' = f o f'', where the 'o' denotes function composition. That doesn't make f and f' equal; if it did, then again, all functions with the same domain and codomain would be equal. My adding 1 is just such a transformation.
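
A concrete instance of that composition, for the square-root example above (restricted to non-negative reals so that everything is well-defined; the code is just a sketch): with f the square root and f' the square root plus one, the bridging f'' satisfies f' = f o f''.

Code:
from math import sqrt, isclose

f = sqrt                                  # computes the square root
f_prime = lambda x: sqrt(x) + 1           # computes the square root + 1
f_bridge = lambda x: (sqrt(x) + 1) ** 2   # an f'' with f'(x) = f(f''(x))

assert all(isclose(f(f_bridge(x)), f_prime(x)) for x in (0.0, 2.0, 9.0, 123.4))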

Quote:
Explain please why you think multiple realizability is an issue here. I've read the wiki article on it and it does not seem to be a resolved issue.
I appreciate that it's a long thread, but I've pointed out the issue before, and we're not gonna make any progress here if I just keep repeating myself over and over.

Quote:
Originally Posted by Half Man Half Wit View Post
Computationalism was developed as an elaboration on functionalism, which was proposed to counter an attack that (many think) dooms identity physicalism, namely, multiple realizability. A state of mind can't be identical to a neuron firing pattern if the same mental state can be realized in a silicon brain, for example, since a silicon brain's activation pattern and a neuron firing pattern are distinct objects. So you have a contradiction of the form A = B, A = C, but B != C.

To answer this objection, the idea was developed that mental states are identical not to physical, but to functional properties, and, on computationalism, particularly computational functional properties. If computationalism thus collapses onto identity physicalism---which it does, if you strip away the distinction between f and f'---computationalism fails to save physicalism from the threat of multiple realizability.
If we thus identify all the computations a system performs with its physical evolution, which is what all this considering different functions to be really 'the same' inevitably leads to, then computationalism just loses everything that makes it a distinct theory of the mind, and collapses to identity theory (not to mention that by then, we've long since abandoned any semblance between the notion of 'computation' used in that context and the usual notion of computation).

Quote:
Originally Posted by Voyager View Post
The lamp is the interpretation. The computation takes place and produces outputs which are inputs to the lamp. Thus the computation is the same in either case. Now, if the lamp is defective in a way which maps several ALU outputs to a single lamp output, then the function has changed from the perspective of the total calculator, but that is because the function of the lamps has changed. If the defect preserves the 1-1 mapping of ALU outputs to lamp states, then the function has not changed in the total calculation, even if the interpretation has been changed.
I have no idea what 'the lamp is the interpretation' is supposed to mean. The lamp is just a convenient visualization of the output voltage level, because we can't see that with the unaided eye. If you ask your research assistant for the outcome of a given computation, would you be happy with a report of the state of the output register?

Quote:
Depends on how you define the calculator. As a whole, including the computation and the interpretation, it does not meet its specifications. But you'd be foolish to replace the ALU. Replacing the interpretation function, the lights, would restore it to its intended function.
The lights are just intended as a convenient visualization of the internal state. We can't directly read ALU outputs. But for the purposes of the argument, it's entirely irrelevant if we assume that we can. So now the output is a pattern of high and low voltages. What has been computed? Voltages? Or do you hold that arithmetic has been done? But then, how do the voltages connect with arithmetic?

Quote:
Originally Posted by wolfpup View Post
Well, thank you, I guess, for so clearly repeating the previous wrongness. Take that last sentence, which I agree lays out your position quite clearly. It is directly contradicted by the quoted bit from the Stanford Encyclopedia of Philosophy, which actually goes out of its way to very explicitly define classical CTM as being precisely the theory that the brain is literally a computer (though I think most theorists today would prefer to say that mental processes are literally computational)
I seriously don't get what your issue is, here. I agree that the CTM holds literally that the mind is a computer. Incidentally, if there's no room for non-computational aspects of the mind, then that means you're not a proponent of CTM:
Quote:
Originally Posted by wolfpup View Post
"Wholly computational" was manifestly never my claim, and I was clear on that from the beginning. And if it had been, I'd certainly never lean on Fodor for support, as he was one of the more outspoken skeptics about its incompleteness, despite his foundational role in bringing it to the forefront of the field.
But no matter. My point is that CTM, as such, is wrong, but that the use of computational modeling in cognitive science is independent of its truth. (If, on the other hand, you're merely saying that CTM is the currently accepted, dominant paradigm in cognitive science, then I readily agree---I already acknowledged that in the post where I claimed Putnam dismantled it, it's just that the rest of the world is slow on the uptake; but lots of people believing something doesn't make it right.) Just like the notions of entropy etc. didn't need to be revised once thermodynamics was lifted from its dependence on caloric and put upon a solid foundation in statistical mechanics, computational models of cognition can still yield valuable insights even if the mind as a whole isn't literally a computer.

Quote:
Originally Posted by wolfpup View Post
[...] CTM is not some mere modeling metaphor, say the way we model climate systems to better understand them. We are under no illusions that climate systems are computational in any meaningful sense
Right. So suppose that somebody had proposed the CTW, the theory that the weather literally is the computation performed by the atmosphere. We can consistently reject that theory and believe that computational modeling of the weather is useful. That's the same thing I'm doing with the CTM, plain and simple. There's simply no reason to suppose that anything of the successes of cognitive science needs to be thrown out upon repudiating CTM; a computational model of vision won't cease working when the researcher stops believing that the mind literally is a computation.

Quote:
I'm sorry if I was getting snippy, but I'm not trying to score "cheap" debating points; I am frankly deeply annoyed that you dismiss as "wrong" and impossible one of the foundational ideas in cognitive science today, manage to somehow misinterpret what it means, and base this conclusion on what is essentially a trivial fallacy about what "computation" really is. It's well said that challenging well-established theories requires a correspondingly persuasive body of evidence. What you're provided is a silly homunculus fallacy.
You're conflating evidence and argument. The CTM is a philosophical stance; it's a metaphysical hypothesis, and as such, not (directly) empirical (although of course, necessarily subject to revision once empirical discoveries make a metaphysics in contradiction with it more attractive). It must stand or fall on its own internal logic; and one sound argument against it is all it takes. There's no 'weight of arguments' that needs to be considered, so even if the argument is 'silly', if it's right, the CTM is wrong, and that's that.

Quote:
Show me where I said the system "doesn't implement" either your f or f' computation. I said it implements both of them. More precisely, it performs a computation that solves both classes of problem.
That isn't the impression I got, say, from quotes such as this one:
Quote:
Originally Posted by wolfpup View Post
What is that computation, you ask? Let's ask a hypothetical intelligent alien who happens to know nothing about number systems. The alien's correct answer would be: it's performing the computation that produces the described pattern of lights in response to switch inputs. How do we know that this is a "computation" at all and not just random gibberish? Because it exhibits what Turing called the determinacy condition: for any switch input, there is deterministically a corresponding output pattern. Whether we choose to call it a binary adder or the alien calls it a wamblefetzer is a matter of nomenclature and, obviously, a distinction of utility.
There, you seem to be claiming that all the box does, by way of computation, is producing lamp patterns. But no matter, I'll take you on your word that what you meant is 'it computes all the possible functions you can get via interpreting its inputs'.

But then, there remains the matter that no system ever just computes, say, the sum of two inputs. In CTM, as usually understood, the mind is a program in analogy to one that computes the sum of two numbers---in analogy to one (but not both) of my functions f and f'. Again, otherwise, the mind would just be a given neuron firing pattern---but that's just the identity theory.

This view is also somewhat in tension with the view you articulated earlier:

Quote:
Originally Posted by wolfpup View Post
A Turing machine starts with a tape containing 0110011. When it's done the tape contains 0100010. What computation did it just perform?

My answer is that it's one that transforms 0110011 into 0100010, which is objectively a computation by definition, since it is, after all, a Turing machine exhibiting the determinacy condition -- even if I don't know what the algorithm is.
On that view, my f and f' are different computations, since there are different TMs (over the alphabet of decimal numbers) performing them.

Quote:
A different view leads to a different (but computationally equivalent) description of the problem being solved. This is what I meant by the "class of problem" distinction.
So before I respond to it in detail, let me try and see if I get the gist of your objection. The idea, it seems to me, is that the box basically just provides a kind of engine, which does the computational work; different users can come to that engine, and use the work it performs to solve distinct problems. So you propose to identify 'computation' with the work the engine does, and, by extension, 'mind' with the same sort of computational work a brain does. Is that an accurate summary?

Last edited by Half Man Half Wit; 05-26-2019 at 03:51 AM.
  #245  
Old 05-26-2019, 08:17 AM
chaidragonfire is offline
Guest
 
Join Date: May 2019
Posts: 9
All science fiction put aside..............

This is actually possible, as the brain uses electrical impulses to stimulate certain areas of the brain where different types of memory and emotional states are stored.

Emotional states are nothing more than reactions to stimuli chemically induced by your brain functions.......which are turned into electrical impulses that cause reactions in your mood and body.

**The stated electrical impulses have nothing to do with man-made electricity, these are naturally occurring stimuli from your brain........for those of you who never paid attention in biology class.

That being said.........

Computers and their parts are nothing more than electronic gizmos for creating and storing electrical impulses.

So the problem with this is---

It should be easy enough to store these electrical impulses, but the methods used to extract them are not sufficient at this point in time.

Extracting these impulses while a body is going through the dying stage is not something that would translate into a proper download, it would be corrupted from the changes the body is going through while dying. Therefore any outcome from storing said impulses for later use would be catastrophic and simply unusable.

And even though science is advanced enough to store these brain impulses, it is not sufficiently advanced to understand how to create a program to differentiate the billions of different impulses the brain gives, and separate them into memories, feelings, emotions, etc..... At this point it would all just be a jumble of electronic impulses.

And even at that, human science still doesn't understand completely how the brain works.

Last edited by chaidragonfire; 05-26-2019 at 08:18 AM.
  #246  
Old 05-26-2019, 12:42 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by neutro View Post
Is there anyone here who actually understands the argument from HMHW well enough to re-explain it to me? Is there anyone who agrees?
If we look at the question of downloading or simulating consciousness, we must be able to understand how to create consciousness in the first place to be able to download or simulate specific instances.

One argument that has been put forth in the past is that consciousness/mind can be modeled/created with computation.

I believe the definition of computation includes the requirement that we understand or interpret the symbols involved. In other words, if you just take a physical system that is performing transformations, that by itself is not really what they mean by computation. Computation is the next level of abstraction on top of that where we have assigned meaning and value to the symbols and transformations involved.

So computation, I believe, is independent of the physical implementation, as long as the physical implementation can be mapped correctly to the set of symbols and transformations.
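
To make that "independent of the physical implementation" idea concrete, here's a tiny sketch (entirely my own toy example, nothing canonical about it): two physically very different realizations - a wired-in lookup table versus an arithmetic formula - count as the same computation once their states are mapped onto the same symbols.
Code:
# Two different "physical" realizations of one abstract computation
# (XOR on the symbols {0, 1}).  The names and the mapping are hypothetical.

# Realization 1: a hard-wired lookup table (think relays and lamps).
WIRED = {('off', 'off'): 'dark', ('off', 'on'): 'lit',
         ('on', 'off'): 'lit',   ('on', 'on'): 'dark'}

# Realization 2: an arithmetic circuit.
def arithmetic(x: int, y: int) -> int:
    return (x + y) % 2

# The mapping from physical states to abstract symbols.
to_symbol = {'off': 0, 'on': 1, 'dark': 0, 'lit': 1}

# Under that mapping the two implementations agree on every input, so at the
# level of symbols and transformations they are the same computation.
for (a, b), lamp in WIRED.items():
    assert arithmetic(to_symbol[a], to_symbol[b]) == to_symbol[lamp]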


So HMHW pointed out that there is a problem:
1 - Computation really does require interpretation, because the same setup of inputs+transformations+outputs can map to multiple different valid sets of computations (meaning it could be doing a math problem, or it could be translating Chinese, or it could be modeling traffic flow) - a quick sketch of this is below, after point 2.

2 - If computation requires interpretation to determine which computation is actually being performed, then how can we say that our computational model running on the computer will really create consciousness? If we interpret it one way it could map to something like a tornado simulation, and if we interpret it the right way we get consciousness - but how does the running program know which way it's supposed to interpret it? How does the running program know that it's not running a tornado simulation and that we want it to be conscious?
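
Here's a minimal sketch of how I read point 1 (the box, the wiring and the two reading conventions are my own made-up toy, not the actual example from earlier in the thread): one fixed switch-to-lamp mapping, read under two different conventions about which position counts as a '1', comes out as two different functions on numbers.
Code:
# Toy illustration only: a fixed "box" mapping four switches to three lamps.
# 'u' = up / lit, 'd' = down / dark.  The wiring never changes.

def lamps(switches):
    """Wired so that, read with u=1 and d=0, the lamps show the binary sum
    of the two 2-bit numbers set on the switches."""
    a = (switches[0] == 'u') * 2 + (switches[1] == 'u')
    b = (switches[2] == 'u') * 2 + (switches[3] == 'u')
    s = a + b
    return tuple('u' if s & bit else 'd' for bit in (4, 2, 1))

def read(symbols, one):
    """An *interpretation*: decide which physical state counts as the digit 1."""
    value = 0
    for sym in symbols:
        value = value * 2 + (1 if sym == one else 0)
    return value

switches = ('d', 'u', 'u', 'd')          # one example setting
out = lamps(switches)                    # -> ('d', 'u', 'u')

# Interpretation A: up means 1.  The box computes addition: f(1, 2) = 3.
print(read(switches[:2], 'u'), read(switches[2:], 'u'), '->', read(out, 'u'))
# Interpretation B: up means 0.  Same switches, same lamps, but now it reads
# as f'(2, 1) = 4, which is not addition - a different function entirely.
print(read(switches[:2], 'd'), read(switches[2:], 'd'), '->', read(out, 'd'))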


I don't personally have a strong belief about any of these arguments and positions because it's a difficult problem and there are valid pros and cons all over the place. I'm trying to learn and understand.

Having said that, I don't see how to get around HMHW's argument. I used to gloss over these problems and kind of blindly assume a computer could create consciousness no problem, because the brain is just transforming stuff, but the more I'm exposed to these types of debates, the more it becomes clear that it's a difficult problem with no obvious answer.
  #247  
Old 05-26-2019, 01:17 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by chaidragonfire View Post
It should be easy enough to store these electrical impulses, but the methods used to extract them are not sufficient at this point in time.

Extracting these impulses while a body is going through the dying stage is not something that would translate into a proper download; it would be corrupted by the changes the body is going through while dying. Therefore any outcome from storing said impulses for later use would be catastrophic and simply unusable.
The impulses represent only the current processing; they don't hold all of the stored information (memories etc.). You would need to extract the physical structure (synapses), the epigenetic alterations that happen in cells due to learning in order to properly model the maintenance of each synapse over time, the learned temporal sequences from within Purkinje cells, and many, many more details that would influence the simulation.
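
Just to make the scale of that concrete, here is a toy placeholder for the kind of per-neuron record a snapshot would need (every field name here is hypothetical, nowhere near a real neuroscience model; the point is only that momentary impulses are the smallest part of it).
Code:
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, purely illustrative structure - not a real model of a neuron.

@dataclass
class SynapseState:
    target_neuron: int
    weight: float                              # current synaptic efficacy
    receptor_densities: Dict[str, float]       # e.g. relative AMPA / NMDA levels
    maintenance_state: Dict[str, float]        # stand-in for epigenetic / protein markers

@dataclass
class NeuronSnapshot:
    neuron_id: int
    recent_spike_times_ms: List[float]                                # the "impulses" - the easy part
    synapses: List[SynapseState] = field(default_factory=list)        # physical structure
    learned_intervals_ms: List[float] = field(default_factory=list)   # e.g. Purkinje-style timing
    epigenetic_markers: Dict[str, float] = field(default_factory=dict)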
  #248  
Old 05-26-2019, 03:04 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,955
Quote:
Originally Posted by RaftPeople View Post
So HMHW pointed out that there is a problem:
1 - Computation really does require interpretation because the same setup of inputs+transformations+outputs can map to multiple different valid sets of computations (meaning it could be doing a math problem, or it could be translating chinese, or it could be modeling traffic flow)
No, it couldn't. Or, to put it more precisely, if the same mapping of inputs to outputs solved all three problem classes simultaneously, then they are all computationally equivalent by definition. None of the problems is harder than the others, or takes longer to solve, or is different in any other discernible way, because they are (computationally) all exactly the same problem. That kind of fortuitous coincidence is not, however, something one finds in real-world systems of non-trivial complexity.
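
To spell out what "equivalent by definition" means with a toy case (my own hypothetical wording, chosen for brevity rather than realism): three differently worded problems that happen to share one and the same input-to-output mapping, so that any device implementing the mapping solves all of them at once.
Code:
# A toy illustration of "computationally equivalent by definition".
# The wording of the three problems is hypothetical, invented for this sketch.

TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}   # the single shared mapping

def solve(x, y):
    return TABLE[(x, y)]

# Problem A: compute the logical AND of two bits.
# Problem B: is the smaller of the two bits equal to 1?
# Problem C: did both coin flips (0 = tails, 1 = heads) come up heads?
#
# Any device implementing TABLE "solves" all three at once; none is harder,
# slower, or distinguishable from the others by anything the device does.
assert solve(1, 1) == 1 and solve(1, 0) == 0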
  #249  
Old 05-26-2019, 03:04 PM
eschereal's Avatar
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 16,466
Quote:
Originally Posted by neutro View Post
So we have a bunch of sensors feeding signals into the brain. Some kind of computational loop. Consciousness is just the top-level control of that loop, the program that decides the importance of the results from the rest of the units in the brain. That program itself can be more or less complex (see a human vs a cat) but it's the same thing.
This is not correct, at least in terms of what we understand about reality and how information processing works.

Imagine for a moment that you have written a very large, elaborate program that, on a fast enough computer, can create a precise real-time simulation of the Battle of Agincourt, down to the last rat digging in the French army's supply wagon. You have tens of thousands of processes covering the actions of each of the participants, the weather, the field conditions, the horses, the flights of arrows, all in such excruciating detail that you can watch it transpire and make changes that might affect results, and even change the overall outcome.

Now go inspect the object code of the program. Over here is the central process that manages all those thousands of other processes. What, exactly, is it doing?

It is taking symbolic data and handling it in a prescribed manner, as directed by the user. It has no special "knowledge" of what those data symbols mean, and really, is only the "top-level" process in the source code hierarchy. In fact, many of the lower-level processes are significantly more sophisticated than the central process.

And when you look at the object code, its composition is uniform. All of the processes are doing fundamentally the same thing: moving data around and adjusting it in a prescribed and directed manner. There is no way to say that any one process is logically superior to another.

In other words, in terms of the computational theory of the mind, the locus of self-awareness cannot be identified, because the brain, like our computer, has an underlying uniformity to its composition. There does not appear to be a single process, or even a small cluster of core processes, to which what we know as consciousness can be ascribed.
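
If it helps, here is a bare-bones skeleton of what such a central process amounts to (hypothetical names, no resemblance to any real simulation engine): it just shuttles symbolic state through a list of opaque sub-processes, and nothing in its own code distinguishes arrows from weather from rats.
Code:
from typing import Callable, Dict, List

State = Dict[str, float]              # opaque symbolic data
Process = Callable[[State], State]    # every unit has the same shape: data in, data out

def central_process(units: List[Process], state: State, ticks: int) -> State:
    """Top of the hierarchy, yet it only moves data around in a prescribed way."""
    for _ in range(ticks):
        for update in units:          # archers, horses, weather, rats...
            state = update(state)     # ...all indistinguishable from up here
    return state

# Two of the thousands of sub-processes; structurally identical to the top level.
def weather(state: State) -> State:
    return {**state, "mud": state.get("mud", 0.0) + state.get("rain", 0.0)}

def arrows(state: State) -> State:
    return {**state, "arrows_in_flight": max(0.0, state.get("arrows_in_flight", 0.0) - 1)}

final = central_process([weather, arrows], {"rain": 1.0, "arrows_in_flight": 5.0}, ticks=3)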
  #250  
Old 05-26-2019, 04:05 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,955
Quote:
Originally Posted by Half Man Half Wit View Post
I seriously don't get what your issue is, here. I agree that the CTM holds literally that the mind is a computer. Incidentally, if there's no room for non-computational aspects of the mind, then that means you're not a proponent of CTM:
That makes no sense. If it were true, virtually all proponents of CTM in cognitive science could be dismissed as not really proponents of CTM. To quote Fodor more fully (from the introduction to The Mind Doesn't Work That Way):
[The computational theory of mind] is, in my view, far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion. There are facts about the mind that it accounts for and that we would be utterly at a loss to explain without it; and its central idea -- that intentional processes are syntactic operations defined on mental representations -- is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.

But it hadn't occurred to me that anyone could suppose that it's a very large part of the truth; still less that it's within miles of being the whole story about how the mind works.
Quote:
Originally Posted by Half Man Half Wit View Post
But no matter. My point is that CTM, as such, is wrong, but that the use of computational modeling in cognitive science is independent of its truth.
Again, it has nothing to do with "modeling" metaphors. It should be clear enough from its definition as syntactic operations on symbolic representations that CTM refers to a literal computational paradigm as an explanatory theory of mental processes. Even in fields like computational neuroscience, where computational modeling is used extensively, the models are only useful to the extent that they can be empirically validated through psychological or biological experiments.
Quote:
Originally Posted by Half Man Half Wit View Post
You're conflating evidence and argument. The CTM is a philosophical stance; it's a metaphysical hypothesis, and as such, not (directly) empirical (although of course, necessarily subject to revision once empirical discoveries make a metaphysics in contradiction with it more attractive). It must stand or fall on its own internal logic; and one sound argument against it is all it takes. There's no 'weight of arguments' that needs to be considered, so even if the argument is 'silly', if it's right, the CTM is wrong, and that's that.
That's flat-out wrong, once again. I'm conflating nothing. CTM is not some vague "metaphysical hypothesis"; it's an explanatory theory grounded in experimental evidence. One example is the evidence for the syntactic-representational view of mental imagery as opposed to the spatially displayed or depictive models.
Quote:
Originally Posted by Half Man Half Wit View Post
That isn't the impression I got, say, from quotes such as this one:


There, you seem to be claiming that all the box does, by way of computation, is producing lamp patterns. But no matter, I'll take you on your word that what you meant is 'it computes all the possible functions you can get via interpreting its inputs'.
Geez, you don't have to take my word for it; I explicitly stated it earlier, right here: "... the 'computation' it's doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent."
Quote:
Originally Posted by Half Man Half Wit View Post
This view is also somewhat in tension with the view you articulated earlier:



On that view, my f and f' are different computations, since there are different TMs (over the alphabet of decimal numbers) performing them.
I'm not sure what new sleight-of-hand you're trying out, but I don't understand what you mean by "alphabet of decimal numbers". If what you're trying to imply is that the computations are different because they produce different numerical results, that is a circular argument: it refers back to the very issue we're debating, namely the assignment of semantics to the symbols, and I've already addressed multiple times why the computations are self-evidently exactly the same.

Quote:
Originally Posted by Half Man Half Wit View Post
So before I respond to it in detail, let me try and see if I get the gist of your objection. The idea, it seems to me, is that the box basically just provides a kind of engine, which does the computational work; different users can come to that engine, and use the work it performs to solve distinct problems. So you propose to identify 'computation' with the work the engine does, and, by extension, 'mind' with the same sort of computational work a brain does. Is that an accurate summary?
The first part isn't wrong; it's just a bizarre way of looking at it, since we rarely think of computations as general-purpose "engines" applicable to multiple classes of problem according to the semantics we assign to the symbolic outputs. We don't think of it that way because, aside from your trivially contrived example, it doesn't actually happen in the real world in non-trivial systems.

And I don't really see how that connects with the second part, expressed in the last two sentences. My contention is that CTM is a valuable explicatory theory for many cognitive processes, and that the homunculus argument is a ridiculously frivolous metaphysical objection to one of the most foundational and empirically grounded theories of cognition. Quite frankly, I feel the same way here as I might when trying to discuss the details of a paper on climate change projections with someone who turns out to be a hardline denialist and holds the belief that there's nothing to project because CO2 has absolutely no effect on climate. In fact Fodor put it extremely well, taking it for granted after a career dedicated to the study of cognition that without CTM there is no theory of cognition "that's worth the bother of a serious discussion".

Last edited by wolfpup; 05-26-2019 at 04:09 PM.