#251 - 05-26-2019, 04:39 PM - Voyager
Quote:
Originally Posted by Half Man Half Wit View Post

Sure, but the point was that comparing the input/output behavior suffices.
In a circuit with state, comparing I/O behavior is impractical if not impossible. You can prove equivalence if you can see inside, by partitioning the circuit into memory and non-memory parts.
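To sketch why internal visibility helps (a toy example of mine, not any actual circuit): with the state registers exposed, equivalence reduces to a finite comparison of the purely combinational parts over all (state, input) pairs, whereas black-box testing faces input sequences of unbounded length.
Code:
# Toy sketch: equivalence of two stateful circuits, proved by exposing state.
# Each circuit = combinational next-state / output functions plus a register.

def next_a(state, inp): return state ^ inp        # circuit A: XOR accumulator
def out_a(state):       return state

def next_b(state, inp): return (state + inp) % 2  # circuit B: same behavior,
def out_b(state):       return state              # different implementation

# Partition into memory and non-memory parts, then compare the combinational
# parts over all (state, input) pairs - a finite check:
assert all(next_a(s, i) == next_b(s, i) and out_a(s) == out_b(s)
           for s in (0, 1) for i in (0, 1))
# Pure I/O testing would instead have to cover infinitely many input sequences.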
Quote:
Well, in principle, only after interpretation is there a well-defined output domain. I could just as easily interpret my box's lamps to represent bits of different value, or not bits at all. Then, the codomain of the function being computed varies.
There are two output domains - that of the ALU itself and that of the lamps.
Quote:
But really, the important point here is that the codomain isn't lamp states; the output of the function isn't something like 'on, off, on'---that's its physical state, and again, conflating physical and computational entities just ends up trivializing computationalism. Rather, the codomain is given by what those lamp states represent.
Do you have a different calculator if the lamps are programmed to display Roman numerals? Greek or Hebrew letters? You are combining two different functions here - the mapping from ALU inputs to outputs and the mapping from outputs to lamp states.
Quote:
Of course a functional transformation leads to a different function. Provided a suitable cardinality of the domains, there always exists a function f'' for any two functions f and f', such that f' = f o f'', where the 'o' denotes function composition. That doesn't make f and f' equal; if it did, then again, all functions with the same domain and codomain would be equal. My adding 1 is just such a transformation.
IIRC, transformations are special cases of composition and are not the same as composition in general.
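To check HMHW's composition claim concretely (toy functions of my own, not his exact tables): two unequal functions can always be related by composing with a third.
Code:
# Toy check: f' = g o f for some g, yet f and f' are not the same function.
f = {(a, b): a + b for a in range(4) for b in range(4)}  # addition on 0..3
g = lambda x: x + 1                                      # the 'add 1' transformation
f_prime = {k: g(v) for k, v in f.items()}                # compose on the output side

assert f != f_prime                   # related by composition, not equal
assert f[(1, 2)] == 3 and f_prime[(1, 2)] == 4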

Quote:
I have no idea what 'the lamp is the interpretation' is supposed to mean. The lamp is just a convenient visualization of the output voltage level, because we can't see that with the unaided eye. If you ask your research assistant for the outcome of a given computation, would you be happy with a report of the state of the output register?
Hell yes. People spend millions of bucks on machines which do the trivial-to-you task of taking the output of a computation and making it accessible. What voltage level represents a 1, and what a 0? At what point can you decide what the output voltage is? Even in a chip, the output of a computation must go through a buffer to make it visible to the outside world. The mapping from output voltage to the convenient visualization is much more complex than you seem to think.

Quote:
The lights are just intended as a convenient visualization of the internal state. We can't directly read ALU outputs. But for the purposes of the argument, it's entirely irrelevant if we assume that we can. So now the output is a pattern of high and low voltages. What has been computed? Voltages? Or do you hold that arithmetic has been done? But then, how do the voltages connect with arithmetic?
Let's say I answer arithmetic. I run an addition on my calculator, and see an answer. Then I unplug the lamps, and enter the same inputs and function. I see nothing. Are you saying that arithmetic is not being done any more?
This is a non-trivial issue. Is a brain that has been disconnected from being able to output anything still thinking? If Hawking could no longer even blink, would he still be thinking?
Now, I might not answer arithmetic, since that is presupposing an interpretation, whereas it would be more accurate to just give the mappings of binary numbers, and even more accurate to give voltage values and a timing diagram. The interpretation of voltage levels as just 1s and 0s becomes a bit dangerous when you are talking about high speeds, low voltages and very small feature sizes.
#252 - 05-26-2019, 10:50 PM - RaftPeople
Quote:
You did respond, but it had phrases like "if your..." which sounded like you were exploring hypothetical angles a person might consider, but I'm trying to clearly understand what you are considering a computation.

For example, I was thinking it would be possible to answer the following questions with just a yes or no regarding computation:
1 - A lookup table that maps 0110011 to 0100010 (from your example) - is this an example of a computation? Yes or No

2 - A simple circuit that can only perform the mapping it has been built to perform, and it happens to map 0110011 to 0100010 - is this an example of a computation? Yes or No

3 - My laptop computer that is running a program that maps 0110011 to 0100010 - is this an example of a computation? Yes or No
Re-posting because I'm trying to understand the boundaries of computation from your perspective, wolfpup.

I believe from HMHW's perspective none of those by themselves are computations until there is an interpretation regarding the symbols and the transformation, just trying to get a handle on if you agree or disagree with that, and if there is a difference between those scenarios.
I know I'm a broken record on this one; hoping you will respond, wolfpup.
#253 - 05-26-2019, 11:48 PM - Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
No, it couldn't. Or, to put it more precisely, if the same mapping of inputs to outputs solved all three problem classes simultaneously, then they are all computationally equivalent by definition.
Could you provide the definition you're referring to here? The only definition of computational equivalence I know is the presence of a polynomially-complex reduction between two problems, but that's not something that's gonna help here. It's also at variance with your earlier stance that two computations differ if the TMs that perform them differ (i. e. if their machine tables differ).

Quote:
Originally Posted by wolfpup View Post
That makes no sense. If it were true, virtually all proponents of CTM in cognitive science could be dismissed as not really proponents of CTM. To quote Fodor more fully (from the introduction to The Mind Doesn't Work That Way):
You should maybe expand your reading beyond Fodor. The simple issue is, if the mind includes aspects that aren't computational, then it can't be true that the mind literally is a computing system. Yet, the latter is a claim that's accepted by most (practically all) proponents of computationalism.

Quote:
Again, it has nothing to do with "modeling" metaphors. It should be clear enough from its definition as syntactic operations on symbolic representations that CTM refers to a literal computational paradigm as an explanatory theory of mental processes.
I agree that that's the claim CTM makes, but CTM is false, so why should I be beholden to that claim? You can't seriously hold that CTM must be right, since it's the only game in town; and if there's thus room for CTM to be false, then it must be the case that a future theory replacing it will still have to account for CTM's successes, or more accurately, for the successes of cognitive science achieved while holding CTM to be the dominant paradigm, just as any science succeeding caloric theory still had to account for caloric theory's successes. And on that future paradigm, it won't, of course, be true that the mind literally is a computing system, and hence, it'll have to explain these successes by the fact that non-computational systems can still be computationally modeled. Unless you want to argue that because it's the current dogma, it must be right, you must allow for the possibility that its claim of literal identification of the mind with a computer could come out wrong.

Besides, it's not nearly so clear-cut a case as you (by proxy of Fodor) make it out to be that CTM is 'the only game in town'. I've already pointed to IIT as an example of a 'scientifically respectable' theory on which CTM is straightforwardly false (another one of these points you keep 'missing'), and there are many other approaches, some of which may be compatible with CTM, but none of which are wedded to the claim that the mind is a computer---Friston's free energy minimization and other Bayesian/predictive coding approaches, Edelman's neural Darwinism, Baars' global workspace, higher-order thought theory, and so on are all approaches that may be compatible with a computational brain, but that don't really depend on it; models by Penrose/Hameroff, and Bringsjord/Zenzen, explicitly deny the possibility of a computational mind.

Quote:
That's flat-out wrong, once again. I'm conflating nothing. CTM is not some vague "metaphysical hypothesis", it's an explanatory theory grounded in experimental evidence.
CTM is a hypothesis on the nature of mental states and properties---namely, that they are functional, more accurately, computationally functional, in nature. As such, it stands in conflict with theories on which mental states/properties are non-physical, or physical, but non-functional, or intrinsic, or neutral, and so on---thus, as a claim on the ontology of mental states, it's explicitly metaphysical.

Quote:
I'm not sure what new sleight-of-hand you're trying out, but I don't understand what you mean by "alphabet of decimal numbers".
I'm merely striving to reduce the wriggle room. You did earlier on claim that a different Turing machine means it's a different computation, so I point out that my two functions are realized by different Turing machines (which take numbers in the decimal system as input on their tape, and output numbers in the decimal system---hence, 'over the alphabet of decimal numbers'). You now seem to posit some sort of equivalence between different Turing machines which so far seems to boil down to 'whatever HMHW says is different, in fact isn't'; hence, I'm trying to tease out your actual meaning there.

Quote:
The first part isn't wrong, it's just a strangely bizarre way of looking at it, since we rarely think of computations as general-purpose "engines" applicable to multiple classes of problem according to the semantics we assign to the symbolic outputs. We don't think of it that way because, aside from your trivially contrived example, it doesn't actually happen in the real world in non-trivial systems.
The thing is that I've provided two functions which are manifestly different computations on any formalization of computation I'm familiar with, which, however, are 'self-evidently exactly the same' to you. So all I'm doing is trying to get a grip on what, exactly, the word 'computation' means to you. So maybe start there: what, exactly, are computations, and how are they individuated? When do I know that one computation is different from another?

I won't respond to your attempt at trying to slander me as equivalent to a climate change denialist just because I have the gall of disagreeing with your favorite cognitive science paradigm.

Quote:
Originally Posted by Voyager View Post
In a circuit with state, comparing I/O behavior is impractical if not impossible.
It's indeed impossible: Moore's theorem implies that you can never obtain its exact functioning by mere experimentation on the box.

Quote:
Do you have a different calculator if the lamps are programmed to display Roman numerals? Greek or Hebrew letters? You are combining two different functions here - the mapping from ALU inputs to outputs and the mapping from outputs to lamp states.
You have a page of English text. Is a page of Hebrew text the same, or different? Of course, as such, the question isn't answerable: one could be a translation of the other. But in general, a difference in the alphabet makes a difference in the machine function.

My point is a different one, though. Say you have a page in an unknown alphabet, written in an unknown language: is there a single thing it can mean? That is, is there an effective procedure for deriving its meaning?

Of course, there can't be. You can interpret it various ways, using for instance a one-time pad key to translate it into a language you're familiar with. But which is the right meaning? There simply is no fact of the matter.

It's exactly the same with computations. Sure, there may be an intended meaning, and likewise, an intended computation; but that can't well be a criterion for what computation a system performs (and much less for whether a system instantiates a mind). So I propose that a system computes whatever you can use it to compute; that's reasonably simple, and covers every case of actual computation that is performed. It's just that it doesn't accord with our intuition that there ought to be one thing, and one thing only, that really is computed by a system. But that intuition is the same as that there's one thing, and one thing only, that the word 'gift' really means.

As a child, I always used to wonder why other people bother with foreign languages. I mean, wouldn't they have to translate it into German in their heads to actually understand it, anyway? (Fodor seems to have had a similar intuition, hence, his invention of 'mentalese', a language that brains just understand.)

But of course, that's nonsense. And it's the same nonsense as saying that there must be one language that a device speaks, when it computes. There's only symbols interpreted a certain way.

Quote:
IIRC, transformations are special cases of composition and are not the same as composition in general.
OK, so can you give me any hard-and-fast criterion on when two computations are the same, and when they differ?
#254 - 05-27-2019, 07:34 AM - eburacum45
Quote:
Originally Posted by wolfpup View Post
... if the same mapping of inputs to outputs solved all three problem classes simultaneously, then they are all computationally equivalent by definition. Neither problem is harder than any other, or takes longer to solve, or is different in any other discernible way, because they are (computationally) all exactly the same problem. This is not, however, the kind of fortuitous coincidence one finds in real-world systems of non-trivial complexity.
I've been trying to say this all along. If you take my laptop example, I type FISH and FISH is displayed on the screen. This is all the system does. The very same computations that happen inside the laptop might be capable of being used to translate Chinese, or control traffic; but this does not happen. Only the word FISH appears, because of the way the laptop is designed. The other interpretations are irrelevant, because they do not get displayed.

(Unless by chance there is a word 'FISH' in Chinese which also coincidentally means FISH in English, or the word FISH can also be a traffic control strategy that traffic controllers instantly understand. Flow Implementation in School Holidays, perhaps?)

But this does pose a problem for the 'simulation of consciousness' concept. If all the neurons in the brain are busily performing calculations that are ambiguous, we can't tell what they actually represent until we simulate the entire brain/body system and hook up all the inputs and outputs as well. If a hypothetical simulation of a brain is connected to an artificial voicebox, and starts talking about FISH as expected, that suggests the simulation is working correctly; if it starts talking about traffic control or declensions in Chinese, something's wrong.
#255 - 05-27-2019, 07:44 AM - eburacum45
I would note however that the human brain is very plastic and does seem to be capable of self-correction to a certain extent, so it is robust enough to accommodate a wide range of failure modes. Otherwise electroshock therapy, lobotomy or taking psychotomimetic drugs would simply randomise the data, and cause the whole system to stop working.
#256 - 05-27-2019, 10:47 AM - RaftPeople
Quote:
Originally Posted by Half Man Half Wit View Post
So all I'm doing is trying to get a grip on what, exactly, the word 'computation' means to you. So maybe start there: what, exactly, are computations, and how are they individuated? When do I know that one computation is different from another?
Seconded.

Simple definitions and examples, with agreement between parties, set the foundation for the next level of discussion, to see where there is validity and where there are problems.

For example, when asked about gravity in the simulation, begbert2 clearly defined his position. I haven't gone back to that point yet, but at least I, or anyone else participating in the thread, know exactly where he stands and can respond accordingly.


We should have a clear definition of computation, one that allows any of us to identify whether any specific example provided in this debate is considered a computation or not.
#257 - 05-27-2019, 11:44 AM - RaftPeople
Quote:
Originally Posted by wolfpup View Post
That's flat-out wrong, once again. I'm conflating nothing. CTM is not some vague "metaphysical hypothesis", it's an explanatory theory grounded in experimental evidence. For example, evidence for the syntactic-representational view of mental imagery as opposed to the spatially displayed or depictive models.
If true, then how could researchers accurately (better than chance) identify what the person was imagining just by monitoring the V1 through V3 visual processing areas and comparing to test symbol activation?

http://www.peterkokneurosci.com/My_p...CurBio2013.pdf
(V1-V3 areas)
1 - Similar neural patterns for similar images, whether from perception, working memory, or imagery (grating images)
2 - The image could be predicted from neural activation at better than chance


I challenge you to provide a cite of any research more recent than the year 2005 that shows that the V1-V3 areas are NOT activated during mental imagery. Scientists have learned a lot since the 70's.


Note:
From my perspective, I think it's naive to think the brain only has one way to solve problems. I think that evolution would have naturally made efficient use of existing machinery (visual working area, auditory working area, etc.) to solve some aspects of problems, while also having other approaches to solve other aspects or types of problems (e.g. symbolic, logical, hard-coded circuits, and pretty much every other computing/calculating mechanism that it might stumble upon).
#258 - 05-27-2019, 12:02 PM - wolfpup
Quote:
Originally Posted by RaftPeople View Post
Seconded.

Simple definitions and examples, with agreement between parties, set the foundation for the next level of discussion, to see where there is validity and where there are problems.

For example, when asked about gravity in the simulation, begbert2 clearly defined his position. I haven't gone back to that point yet, but at least I, or anyone else participating in the thread, know exactly where he stands and can respond accordingly.


We should have a clear definition of computation, one that allows any of us to identify whether any specific example provided in this debate is considered a computation or not.
Computation was defined by Alan Turing in the specification of his eponymous machine, which can be simply restated here, for purposes of this discussion, as a series of discrete operations on symbols, defined by a set of rules, that have the effect of deterministically transforming a set of input symbols into a set of output symbols. By extension, computation can be deemed to be performed by a black box whose internal mechanism of operation is unknown, but which is observed to perform that same deterministic mapping for all possible inputs.

Thus, a Turing machine, or an implementation of one using logic gates, which takes as input any two digits, say in the range 0 to 9, and whose output is their product, is obviously performing a computation; but a program which knows nothing about arithmetic, and which implements what back in my day in grade school was a "multiplication table" and generates the answer by table lookup, is also doing computation. Not only is it doing computation, but according to my criterion, it is doing a computation exactly equivalent to the former, because it produces exactly the same mapping for all possible inputs.
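A sketch of that criterion in code (hypothetical, mine): a procedure that actually multiplies and a bare lookup table produce the identical mapping over the whole input domain, and so count as the same computation.
Code:
# Same mapping for all inputs = same computation, per the criterion above.
def multiply(a, b):                  # 'knows' arithmetic
    return a * b

table = {(a, b): a * b               # grade-school multiplication table:
         for a in range(10)          # 'knows nothing about arithmetic',
         for b in range(10)}         # just looks the answer up

assert all(multiply(a, b) == table[(a, b)]
           for a in range(10) for b in range(10))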
Quote:
Originally Posted by Half Man Half Wit View Post
Could you provide the definition you're referring to here? The only definition of computational equivalence I know is the presence of a polynomially-complex reduction between two problems, but that's not something that's gonna help here. It's also at variance with your earlier stance that two computations differ if the TMs that perform them differ (i. e. if their machine tables differ).
I trust that the above clarifies what I mean by computational equivalence. I'm unaware of anything I said previously that this is at variance with.
Quote:
Originally Posted by RaftPeople View Post
If true, then how could researchers accurately (better than chance) identify what the person was imagining just by monitoring the V1 through V3 visual processing areas and comparing to test symbol activation?

http://www.peterkokneurosci.com/My_p...CurBio2013.pdf
(V1-V3 areas)
1 - Similar neural patterns for similar images, whether from perception, working memory, or imagery (grating images)
2 - The image could be predicted from neural activation at better than chance


I challenge you to provide a cite of any research more recent than the year 2005 that shows that the V1-V3 areas are NOT activated during mental imagery. Scientists have learned a lot since the 70's.
This is a controversial topic and it doesn't have to be proven incontrovertibly true for my assertion that it's based on empirical science to be accurate. This paper lays out the case for the computational theory of mental imagery, and that link also includes responses offering counterarguments.
#259 - 05-27-2019, 12:56 PM - Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
Thus, a Turing machine, or an implementation of one using logic gates, which takes as input any two digits, say in the range 0 to 9, and whose output is their product, is obviously performing a computation; but a program which knows nothing about arithmetic, and which implements what back in my day in grade school was a "multiplication table" and generates the answer by table lookup, is also doing computation. Not only is it doing computation, but according to my criterion, it is doing a computation exactly equivalent to the former, because it produces exactly the same mapping for all possible inputs.
OK, but then, how are my functions f and f' not distinct computations? There are two TMs, one of which takes as input a tuple of two numbers (between 0 and 3) and returns their sum, while the other takes the same input but returns the value of f' as given in my table.

Indeed, the fact that both are given by different lookup tables---as explicitly provided---would, on the above definition, suffice to make them different computations. So what am I missing?
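For readers without the earlier posts: f and f' arose there from reading one and the same box under two different bit-significance conventions. A reconstruction under that assumption (the original tables aren't reproduced in this excerpt), showing one physical mapping yielding two functions:
Code:
# Assumed reconstruction: one physical switch->lamp mapping, two readings.
def box(switches):                    # physical behavior: 4 bits in, 3 bits out
    a = switches[0] * 2 + switches[1] # read first switch pair as a binary number
    b = switches[2] * 2 + switches[3]
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)   # lamp states

def read_msb_first(bits):             # interpretation behind f
    return bits[0] * 4 + bits[1] * 2 + bits[2]

def read_lsb_first(bits):             # interpretation behind f'
    return bits[2] * 4 + bits[1] * 2 + bits[0]

lamps = box((0, 1, 1, 1))             # switches encode 1 and 3
print(read_msb_first(lamps))          # f reads 4
print(read_lsb_first(lamps))          # f' reads 1 - same lamps, different value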
#260 - 05-27-2019, 01:16 PM - wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
OK, but then, how are my functions f and f' not distinct computations? There are two TMs, one of which takes as input a tuple of two numbers (between 0 and 3) and returns their sum, while the other takes the same input but returns the value of f' as given in my table.

Indeed, the fact that both are given by different lookup tables---as explicitly provided---would, on the above definition, suffice to make them different computations. So what am I missing?
How on earth can you imagine that the lookup tables would be different?

The output of the box is a set of lights. If the transformation was being accomplished through a lookup table, the tables defining the f and f' mappings would be exactly the same.

There are in fact an infinite number of possible interpretations of the box's output, but this is entirely irrelevant to the nature of the computation.
#261 - 05-27-2019, 01:32 PM - Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
How on earth can you imagine that the lookup tables would be different?
Because... I wrote them down... and they were different...?



Quote:
The output of the box is a set of lights. If the transformation was being accomplished through a lookup table, the tables defining the f and f' mappings would be exactly the same.
Then just humor me. Write down the computation performed by the box.
#262 - 05-27-2019, 01:53 PM - Voyager
Quote:
Originally Posted by Half Man Half Wit View Post

It's indeed impossible: Moore's theorem implies that you can never obtain its exact functioning by mere experimentation on the box.
Got a link? The only Moore's theorem I found is in topology and is not obviously relevant. Though of course you could construct an unlimited number of equivalent circuits with the same I/O behavior - which is kind of my point. However, what I was getting at is that for a state machine with an unknown number of states, you might not be able to construct any equivalent machine from I/O behavior alone.
Quote:
You have a page of English text. Is a page of Hebrew text the same, or different? Of course, as such, the question isn't answerable: one could be a translation of the other. But in general, a difference in the alphabet makes a difference in the machine function.
By Greek and Hebrew letters I was referring to the fact that numerals are expressed as letters in those languages. Just like Roman numerals. Any translator will tell you that there is no 1-1 mapping from English to Hebrew and vice versa, so that point isn't relevant.
Quote:
My point is a different one, though. Say you have a page in an unknown alphabet, written in an unknown language: is there a single thing it can mean? That is, is there an effective procedure for deriving its meaning?

Of course, there can't be. You can interpret it various ways, using for instance a one-time pad key to translate it into a language you're familiar with. But which is the right meaning? There simply is no fact of the matter.
There can be a single thing it means - finding it is a different matter. If it were generated by a code, then decoding it would provide that meaning. But you'd need context to be sure. The hieroglyphics on the Rosetta Stone would never have been translated if the Greek were not available.
On the other hand the hieroglyphics Joseph Smith "translated" were found to be not the true meaning when the original was discovered and translated using our modern knowledge of hieroglyphics.
Quote:
It's exactly the same with computations. Sure, there may be an intended meaning, and likewise, an intended computation; but that can't well be a criterion for what computation a system performs (and much less for whether a system instantiates a mind). So I propose that a system computes whatever you can use it to compute; that's reasonably simple, and covers every case of actual computation that is performed. It's just that it doesn't accord with our intuition that there ought to be one thing, and one thing only, that really is computed by a system. But that intuition is the same as that there's one thing, and one thing only, that the word 'gift' really means.
And back to the difference between computation and interpretation, which you didn't really address above. Writing out "gift" is like the computation, but the interpretation depends on semantics. Look it up in the dictionary and you see the word is overloaded, being a noun and a verb and having differing meanings even as a noun.
Yes, the output of a computation can be interpreted in different ways, but so can our speech and our writing. Say a post-modernist writer uses dice to construct a short story from lists of words. I bet five readers will interpret that story in five different ways, none of them "correct."
Quote:
As a child, I always used to wonder why other people bother with foreign languages. I mean, wouldn't they have to translate it into German in their heads to actually understand it, anyway? (Fodor seems to have had a similar intuition, hence, his invention of 'mentalese', a language that brains just understand.)

But of course, that's nonsense. And it's the same nonsense as saying that there must be one language that a device speaks, when it computes. There's only symbols interpreted a certain way.
My son-in-law is German but can now think in English. I don't think his neurons have changed. Similarly a computer can "think" using ASCII characters or characters with a different coding, where "think" in this sense is symbol manipulation. But the underlying hardware remains the same.
Quote:
OK, so can you give me any hard-and-fast criterion on when two computations are the same, and when they differ?
There can be several computations for the same and equivalent functions. I don't know what you mean by two computations being the same.
Say you repeat running a program which involves dynamic memory allocation. If you look at a detailed machine-language-level trace, the registers and memory locations used for that program's variables may differ between runs. Are these the same or different computations? I'd have to hope that you agree that the computation, however you define it, is computing the same function in this case.
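A quick illustration of that point (my own, hypothetical): two runs of the same code can occupy different memory while computing the same function.
Code:
# Same computation, different physical trace.
def run():
    xs = [i * i for i in range(10)]   # freshly allocated on every run
    return xs, sum(xs)

xs1, r1 = run()
xs2, r2 = run()
print(xs1 is xs2)   # False: the two runs used different memory
print(r1 == r2)     # True: the same function was computed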
#263 - 05-27-2019, 01:55 PM - wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
Because... I wrote them down... and they were different...?
No, that's not what you wrote down. What you wrote down were an arbitrary two of an infinite number of possible interpretations of the switch and light patterns. Your tables are different because they embody the arbitrary semantics of the different interpretations. Your argument is manifestly circular. The tables defining the actual transformations are identical.
Quote:
Originally Posted by Half Man Half Wit View Post
Then just humor me. Write down the computation performed by the box.
It would be a table of all possible switch positions, and the light pattern that is produced by each combination. Note that this table is objective and independent of interpretation, taking into account only the computational properties of the box. Substitute symbols instead of switches and lights, and this can be represented by a classic Turing machine.
#264 - 05-27-2019, 02:11 PM - Half Man Half Wit
Quote:
Originally Posted by wolfpup View Post
No, that's not what you wrote down. What you wrote down were an arbitrary two of an infinite number of possible interpretations of the switch and light patterns.
They are also perfectly sensible and distinct computations (as in, there exist different TMs realizing them) that can be performed using the box---in exactly the same way we use a calculator to perform arithmetic. Arithmetic after all involves operations on numbers, not on the LEDs of its 7-segment display, which are related to the numbers in the same way the lights on my box are.



Quote:
It would be a table of all possible switch positions, and the light pattern that is produced by each combination. Note that this table is objective and independent of interpretation, taking into account only the computational properties of the box.
This is then exactly the physical evolution of the system, and your 'computationalism' collapses onto identity physicalism.

Quote:
Substitute symbols instead of switches and lights, and this can be represented by a classic Turing machine.
Provided one interprets the symbols in the right way, of course...
#265 - 05-27-2019, 02:47 PM - SamuelA
Quote:
Originally Posted by Half Man Half Wit View Post
Consciousness can't be downloaded into a computer, for the simple reason that computation is an act of interpretation, which itself depends on a mind doing the interpreting.

But if minds then have the capacity to interpret things (as they seem to), they have a capacity that can't be realized via computation, and thus are, on the whole, not computational entities.
Ok. I'm an engineer who's just finished a master's in computer science/machine learning. I'm quite impatient with philosophical arguments, as all I really care about is how to use the pieces I know about to do new tasks.

So I will apologize in that I have not read all of your posts in this thread, and I have not fully analyzed what you mean.

But I've got a question for you. A practical, rubber meets the road question. You too, wolfpup.

We use an algorithm called divide and conquer on this little problem of brain emulation.

Specifically, we divide the brain down to the simplest case, a synapse. We know all-or-nothing electrical signals come in, and all-or-nothing electrical signals leave.

We know the information is carried primarily in timing. That is, if a pulse leaves, the exact time it leaves carries information to other sub-components in the system.

We study the system and determine there's a gaussian function of randomness in each real synapse - the output seems to be F1(Rules, Input, State, Noise). There is a second internal output where State_new = F2(Rules, Input, State_previous, Noise).

"Noise" we just use some mathematical function (probably gaussian but I won't be averse to using other functions if they fit better) to replace the thousands of subtle biochemical details that sum to random noise overall, allowing for a simpler (and cheaper) model and thus requiring cheaper computer hardware to run.

The rules we can deduce by studying each synapse in laboratory and living animal models (we genetically modify the animals to use exactly the same type of synapses the human brain uses).

The inputs are the immediate, timed signals. The State is something we can determine by examining a synapse with sufficient resolution.
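A minimal sketch of the synapse abstraction just described (the function names, weights and update rule are placeholders of mine, not an established model):
Code:
# Placeholder sketch of the synapse abstraction; names are hypothetical.
import random

def synapse_step(rules, inputs, state, noise_sigma=0.1):
    """Output = F1(Rules, Input, State, Noise); new state = F2(...)."""
    noise = random.gauss(0.0, noise_sigma)   # stands in for biochemical detail
    drive = sum(rules["weights"].get(i, 0.0) for i in inputs) + state + noise
    spike = drive > rules["threshold"]       # all-or-nothing output pulse
    new_state = 0.9 * state + (0.1 if spike else 0.0)   # toy internal update
    return spike, new_state

rules = {"weights": {"axon1": 0.6, "axon2": 0.5}, "threshold": 1.0}
spike, state = synapse_step(rules, inputs={"axon1", "axon2"}, state=0.0)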

Anyways, a whole brain is just a combination of (trillions) of these subproblems. Each subproblem is just timed electrical pulses. Anything like "consciousness" has to be emergent behavior from higher level systems.

And, who cares how it works? We know that if physical reality follows the same rules inside a brain as it does outside, and you duplicate the subproblems (you solve each subproblem), you solve the overall problem (duplicating the behavior, including complex internal perceived behavior like consciousness, of a complex machine like a brain).

For me to care how it actually works*, you need to prove that I can't subdivide the system into tiny subproblems where the philosophical problems that both you and wolfpup talk about don't matter.

*sure, once you have working, conscious brain emulations in hardware that can be paused, where you can inject and copy digital values from specific areas, and so on, scientists of the far future will surely be able to work out how it all actually works.

#266 - 05-27-2019, 03:03 PM - eschereal
Thought and behavior appear to be more or less computational. I see a bear in the woods and naturally think, "avoid" or, in some circumstances, "kill", and act accordingly. That looks a lot like a conditional calculation based on a complex resolution of stored symbols. Similarly, figuring out how long it will take to cross this ten-mile-wide desert valley, and whether I have the resources (water, fuel, whatever) to do it, is pretty obviously computational. Animals seem to display similar capabilities.

But what is consciousness relative to that? Behavior seems to define our personalities, but is that the same thing as our consciousness? I know that my personality and behavior patterns have changed over the decades, but has my consciousness?

I seem to be the same person, in here, that I was in high school, even though many of my thoughts, responses and actions have changed. Biological computation is adaptive by nature, but consciousness appears to be contiguous and impervious to change.

#267 - 05-27-2019, 04:57 PM - wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
They are also perfectly sensible and distinct computations (as in, there exist different TMs realizing them) that can be performed using the box---in exactly the same way we use a calculator to perform arithmetic. Arithmetic after all involves operations on numbers, not on the LEDs of its 7-segment display, which are related to the numbers in the same way the lights on my box are.
Not only are descriptors like "perfectly sensible" and "distinct" begging the question (in the literal sense of presupposing the conclusion), but "different TMs" is flat-out wrong, because it's obvious that the same symbol manipulations are occurring in both cases, which has been exactly my point all along.

Additionally, in your calculator example, you seem to be implying that if the display is defective, or accidentally mounted upside down (and the user isn't bright enough to turn the calculator the other way) the calculator is performing a fundamentally different computation than one with a normal display. You do see how ridiculous this is, right?
Quote:
Originally Posted by Half Man Half Wit View Post
This is then exactly the physical evolution of the system, and your 'computationalism' collapses onto identity physicalism.
That isn't a "collapse"; in my view, it's a fundamental truth.

Quote:
Originally Posted by Half Man Half Wit View Post
Provided one interprets the symbols in the right way, of course...
Do you see any contradiction with your previous claim that Turing machines don't require interpretation, to wit:
Quote:
Originally Posted by Half Man Half Wit View Post
So why does a Turing machine execute a definite computation?

Simple: a Turing machine is a formally specified, abstract object; its vehicles are themselves abstract objects, like '1' and '0' (the binary digits themselves, rather than the numerals).

But that's no longer true for a physical system. A physical system doesn't manipulate '1' and '0', it manipulates physical properties (say, voltage levels) that we take to stand for or represent '1' or '0'. It's here that the ambiguity comes in.
#268 - 05-28-2019, 12:47 AM - Half Man Half Wit
Quote:
Originally Posted by Voyager View Post
Got a link? The only Moore's theorem I found is in topology and is not obviously relevant. Though of course you could construct an unlimited number of equivalent circuits with the same I/O behavior - which is kind of my point. However, what I was getting at is that for a state machine with an unknown number of states, you might not be able to construct any equivalent machine from I/O behavior alone.
It's from his Gedanken-Experiments on Sequential Machines, which introduced Moore automata, which you're no doubt familiar with. The theorem is (actually, the theorems are) that no experiment (providing inputs and observing outputs) can generally determine what state a given machine was in at the start of the experiment, and furthermore, that for every sequence of experiments on a certain machine, a different machine exists that would have provided the same outcomes.
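A toy version of the second theorem (my illustration): for any bounded set of experiments, there is another machine that matches all of them yet differs afterwards.
Code:
# Two machines that agree on every experiment of length <= N,
# yet are not the same machine.
N = 100

def machine_a(seq):
    return [0 for _ in seq]                  # always outputs 0

def machine_b(seq):
    return [0 if t < N else 1                # diverges after step N
            for t, _ in enumerate(seq)]

probe = [1, 0, 1] * 10                       # any probe of length <= N
assert machine_a(probe) == machine_b(probe)  # indistinguishable by this experiment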

Quote:
By Greek and Hebrew letters I was referring to the fact that numerals are expressed as letters in those languages. Just like Roman numerals. Any translator will tell you that there is no 1-1 mapping from English to Hebrew and vice versa, so that point isn't relevant.
Not on the level of individual words, but on the level of sentences, sure. But no matter: we can imagine a language with a 1:1 mapping to English such that an intelligible text in English to you will be an intelligible text saying something else in that language to a speaker of it.

Quote:
There can be a single thing it means - finding it is a different matter.
There can't be, no. Otherwise, one-time pads could be cracked (by brute force if need be).
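The standard reason, sketched in code (a textbook property of one-time pads): any ciphertext decrypts to any plaintext of the same length under some key, so no brute-force search can single out the meaning.
Code:
# One-time pad ambiguity: the ciphertext 'decrypts' to ANY same-length
# plaintext under a suitably chosen key.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext  = b"ATTACK AT DAWN"
key        = bytes([7, 13, 42, 99, 5, 18, 77, 3, 1, 250, 33, 8, 64, 111])
ciphertext = xor(plaintext, key)

rival     = b"RETREAT AT TEN"          # same length, different meaning
rival_key = xor(ciphertext, rival)     # a key that 'reveals' the rival text
assert xor(ciphertext, rival_key) == rival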

Quote:
Yes, the output of a computation can be interpreted in different ways, but so can our speech and our writing.
That is exactly my point. A computation can be considered the same sort of thing as the meaning of a text---after all, a computation really is just a kind of description, even if perhaps a compressed one (see my above argument against simulation hypotheses).

Quote:
My son-in-law is German but can now think in English. I don't think his neurons have changed.
They surely must have---they must change with everything new that we learn; otherwise, we'd have a failure of the mental to supervene on the physical, and physicalism would be false.

Quote:
Similarly a computer can "think" using ASCII characters or characters with a different coding, where "think" in this sense is symbol manipulation.
That thinking is symbol manipulation is exactly the thesis computationalism seeks to demonstrate, and I believe is false.

Quote:
There can be several computations for the same and equivalent functions. I don't know what you mean by two computations being the same.
As I said above, I just mean that they're the same partial function (equivalently, the same TM).

Quote:
Originally Posted by SamuelA View Post
Anyways, a whole brain is just a combination of (trillions) of these subproblems. Each subproblem is just timed electrical pulses. Anything like "consciousness" has to be emergent behavior from higher level systems.

And, who cares how it works? We know that if physical reality follows the same rules inside a brain as it does outside, and you duplicate the subproblems (you solve each subproblem), you solve the overall problem (duplicating the behavior, including complex internal perceived behavior like consciousness, of a complex machine like a brain).

For me to care how it actually works*, you need to prove that I can't subdivide the system into tiny subproblems where the philosophical problems that both you and wolfpup talk about don't matter.
I don't really have to 'prove' it in the general case, just exhibit a special one where it's wrong. Which is readily done: on integrated information theory (IIT), consciousness is exactly provided by that amount of information about the system you lose if you just consider its individual components. So if you agree that the view is at least possible, slicing up the system into sub-components and considering them independently will exactly lose sight of what's interesting to us.

I don't really think IIT is right, however (although it does make an interesting example against which to test one's views). So let's suppose that what you're saying is true (I believe, ultimately, it is): you can just break down the problem into manageable sub-problems, and solve those. Say you replicate the behavior of individual synapses, neurons, and the like.

The problem is, though, that while that means you can duplicate their behavior, this doesn't straightforwardly entail that you understand how consciousness is generated. While I don't hold that philosophical zombies are metaphysically possible, I do think it's a coherent idea; but then, the mere behavior may tell us nothing about conscious experience.

Make no mistake, I don't think there's any magic sauce to consciousness that can't be reduced to the physical. But I want to know how that reduction goes; and I think to find that out, we need to be honest about the problems involved, rather than hiding them behind vague notions of emergence and complexity and the like.

Quote:
Originally Posted by wolfpup View Post
Not only are descriptors like "perfectly sensible" and "distinct" begging the question (in the literal sense of presupposing the conclusion), but "different TMs" is flat-out wrong, because it's obvious that the same symbol manipulations are occurring in both cases, which has been exactly my point all along.
OK. So where you earlier claimed that it suffices to individuate computations to note that they have different input/output behavior (even for a single case), i. e.:
Quote:
Originally Posted by wolfpup View Post
A Turing machine starts with a tape containing 0110011. When it's done the tape contains 0100010. What computation did it just perform?

My answer is that it's one that transforms 0110011 into 0100010, which is objectively a computation by definition, since it is, after all, a Turing machine exhibiting the determinacy condition -- even if I don't know what the algorithm is.
Now, you claim that TMs that manifestly show different outputs given the same inputs are 'the same', and indeed, that, for example, starting with a tape showing (1, 3) and ending up showing (4) is 'obviously' the same symbol manipulation as ending up showing (5), instead.

I'm sorry, but I can't make heads or tails of that.

Quote:
Originally Posted by wolfpup View Post
Additionally, in your calculator example, you seem to be implying that if the display is defective, or accidentally mounted upside down (and the user isn't bright enough to turn the calculator the other way) the calculator is performing a fundamentally different computation than one with a normal display. You do see how ridiculous this is, right?
This is your claim:

Quote:
Originally Posted by wolfpup View Post
Computation was defined by Alan Turing in the specification of his eponymous machine, which can be simply restated here, for purposes of this discussion, as a series of discrete operations on symbols, defined by a set of rules, that have the effect of deterministically transforming a set of input symbols into a set of output symbols. By extension, computation can be deemed to be performed by a black box whose internal mechanism of operation is unknown, but which is observed to perform that same deterministic mapping for all possible inputs.

Thus, a Turing machine, or an implementation of one using logic gates, which takes as input any two digits, say in the range 0 to 9, and whose output is their product, is obviously performing a computation; but a program which knows nothing about arithmetic, and which implements what back in my day in grade school was a "multiplication table" and generates the answer by table lookup, is also doing computation. Not only is it doing computation, but according to my criterion, it is doing a computation exactly equivalent to the former, because it produces exactly the same mapping for all possible inputs.
Consequently, once the mapping to outputs changes, the computation being performed changes. If my box shows different lights, it's your position that it would implement a different computation---only input/output behavior is relevant, and the output behavior has changed.

On my construal, that's in fact not the case. As computation is interpretational anyway, it's not at all a problem to continue to interpret, say, a 7-segment display as displaying an '8' when it displays
Code:
 __
|__|
|  |,
because the lower LED gave out. On your position, because the mapping now yields a different output---outputs after all just being LED patterns---that's a different computation.
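To make the two readings concrete (a sketch of my own): a strict pattern-to-symbol table treats the degraded pattern as a different output, while a tolerant interpreter still reads it as '8'.
Code:
# 7-segment sketch. Segments: top, top-left, top-right, middle,
# bottom-left, bottom-right, bottom. A digit = the set of lit segments.
EIGHT = {"t", "tl", "tr", "m", "bl", "br", "b"}
NINE  = {"t", "tl", "tr", "m", "br", "b"}
PATTERNS = {"8": EIGHT, "9": NINE}

broken = EIGHT - {"b"}                  # the bottom LED gave out

# Strict reading: the pattern matches no digit, so the output has changed.
print([d for d, p in PATTERNS.items() if p == broken])   # []

# Tolerant interpretation: the closest known digit is still '8'.
print(min(PATTERNS, key=lambda d: len(PATTERNS[d] ^ broken)))  # '8'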

Quote:
Originally Posted by wolfpup View Post
That isn't a "collapse"; in my view, it's a fundamental truth.
That would be somewhat ironic, at least. Let me just quote some relevant passages from Jaegwon Kim's Philosophy of Mind (which I heartily recommend, and which is widely considered one of the best introductory texts on the matter):
Quote:
Originally Posted by Jaegwon Kim
In 1967 Hilary Putnam published a paper of modest length titled “Psychological Predicates.” This paper changed the debate in philosophy of mind in a fundamental way, by doing three remarkable things: First, it quickly brought about the decline and fall of type physicalism, in particular, the psychoneural identity theory. Second, it ushered in functionalism, which has since been a highly influential—arguably the dominant—position on the nature of mind. Third, it was instrumental in installing antireductionism as the orthodoxy on the nature of psychological properties. Psychoneural identity physicalism, which had been promoted as the only view of the mind properly informed by the best contemporary science, turned out to be unexpectedly short-lived, and by the mid-1970s most philosophers had abandoned reductionist physicalism not only as a view about psychology but as a doctrine about all special sciences, sciences other than basic physics. In a rapid shift of fortune, identity physicalism was gone in a matter of a few years, and functionalism was quickly enthroned as the “official” philosophy of the burgeoning cognitive science, a view of psychological and cognitive properties that best fit the projects and practices of the scientists.
All this stemmed from a single idea: the multiple realizability of mental properties.
So, as you see, the collapse of computationalism to identity theory would throw back the philosophy of mind to a position not held by many since the 70s, and certainly not held by the computationalists of today.

Quote:
Originally Posted by wolfpup View Post
Do you see any contradiction with your previous claim that Turing machines don't require interpretation, to wit:
No, of course not. Whatever symbols the Turing machine deals with are certainly not the inputs and outputs of my box, which are switch positions and lamp states. So the TM is perfectly definite as taking, say, 'up, up, down, up' to 'on, off, off', but it is a matter of interpretation---involving, say, English language competence---that the switch and lamp states are captured by this. A switch being up and the word 'up' are two very different things, just like a dog and the word 'dog'.

#269 - 05-28-2019, 11:48 AM - wolfpup
Quote:
Originally Posted by Half Man Half Wit View Post
OK. So where you earlier claimed that it suffices to individuate computations to note that they have different input/output behavior (even for a single case), i. e.:


Now, you claim that TMs that manifestly show different outputs given the same inputs are 'the same', and indeed, that, for example, starting with a tape showing (1, 3) and ending up showing (4) is 'obviously' the same symbol manipulation as ending up showing (5), instead.

I'm sorry, but I can't make heads or tails of that.


This is your claim:


Consequently, once the mapping to outputs changes, the computation being performed changes. If my box shows different lights, it's your position that it would implement a different computation---only input/output behavior is relevant, and the output behavior has changed.

On my construal, that's in fact not the case. As computation is interpretational anyway, it's not at all a problem to continue to interpret, say, a 7-segment display as displaying an '8' when it displays
Code:
 __
|__|
|  |,
because the lower LED gave out. On your position, because the mapping now yields a different output---outputs after all just being LED patterns---that's a different computation.
I can't make heads or tails out of how you reached that conclusion (the claim that computations producing different outputs are the same) from anything I said, either in your quotes or anywhere else. I've been very clear from the beginning that the mapping of input symbols to output symbols is what defines a computation, and I've never said anything different. Any such reading would be a misinterpretation, but I don't see anything in those quotes that could be read that way.

If you're referring to my response to your calculator digression, first of all it seemed to me you were implying that a defective LED display somehow changed the nature of the computation, but on review, I don't think you were, so I withdraw my criticism. The important point here is that I certainly am not saying that, either, and trying to pretend that I am is a particularly egregious argumentative sleight-of-hand.

To expand on that more fully, we need to keep in mind what a "symbol" is; FTR, I defined it here, and let me repeat the key part: a "symbol" is a token -- an abstract unit of information -- that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. It's a logical abstraction, not a physical thing, which takes different forms in different contexts and has corresponding physical instantiations. In a calculator, the input and output symbols are the numerical digits. They are not the segments of a LED. So in my descriptive model any given computation is the same regardless of whether one or more LED segments are defective, because the (abstract) symbols being logically output are the same. Unlike your description, I don't need an "interpreter" to make it so.

Quote:
Originally Posted by Half Man Half Wit View Post
That would be somewhat ironic, at least. Let me just quote some relevant passages from Jaegwon Kim's Philosophy of Mind (which I heartily recommend, and which is widely considered one of the best introductory texts on the matter):

So, as you see, the collapse of computationalism to identity theory would throw back the philosophy of mind to a position not held by many since the 70s, and certainly not held by the computationalists of today.
Speaking of irony, I must point out first of all how deeply ironic it is that you're trying to support your argument with a cite offering boundless praise for the work of Hilary Putnam in establishing the computational theory of mind when you just finished telling us over here that it's a worthless theory that he subsequently "dismantled"!

I think the issue here is disagreement over terms of art, and specifically what you variously refer to as "identity physicalism" and "identity theory". AIUI, Putnam rejected type-identity physicalism which holds that particular types of mental states are categorically correlated with specific brain events, and endorsed instead a sort of token-identity physicalism which implies that different species can experience similar mental states in different physical ways, which led to the important idea of multiple realizability that I mentioned very early on in this discussion. It's important not just as a foundational idea in theories of cognition, but because it offers at least the prospect that the entirety of the mind could be realized on a digital computer.

My concept of physicalism is simply that everything about the mind has a corresponding physical instantiation, including emergent properties like consciousness. There may not be a specific "where" associated with it, but it exists as a holistic quality of the physical brain. I reject Chalmers' notion that its quality either must be visible in the underlying components (which is a vague and ultimately meaningless criterion) or it cannot have a physical basis (which is an absurdity that invokes mysticism). And, obviously, I reject the idea that computation is in any way subjective and in need of interpretation, I reject the silly homunculus fallacy, and my views on that have been completely consistent throughout this discussion.
#270 - 05-28-2019, 03:12 PM - Voyager
Quote:
Originally Posted by Half Man Half Wit View Post
It's from his Gedanken-Experiments on Sequential Machines, which introduced Moore automata, which you're no doubt familiar with. The theorem is (actually, the theorems are) that no experiment (providing inputs and observing outputs) can generally determine what state a given machine was in at the start of the experiment, and furthermore, that for every sequence of experiments on a certain machine, a different machine exists that would have provided the same outcomes.
Again, it is true that you can't always find the state diagram from experiments, but you often can. And the experiments find the minimal state machine - it is true that there are lots of equivalent state machines, and by definition you can't distinguish them from the minimal one.
Quote:
Not on the level of individual words, but on the level of sentences, sure. But no matter: we can imagine a language with a 1:1 mapping to English such that an intelligible text in English to you will be an intelligible text saying something else in that language to a speaker of it.
I'm no translator, but I sincerely doubt there is a 1:1 mapping on sentences, or words, or complete works. Hell, important parts of Christianity are based on improper translations from Hebrew to Greek. I expect computers to be able to do translations, better than today, but I don't expect them to do it perfectly, since people can't today.
Quote:
That is exactly my point. A computation can be considered the same sort of thing as the meaning of a text---after all, a computation really is just a kind of description, even if perhaps a compressed one (see my above argument against simulation hypotheses).
A computation is a process. Unless you say that descriptions are equivalent to processes, they are not the same, and we generally don't equate them; otherwise, how would we know that the process produces the correct result?
Quote:
They surely must have---they must change with everything new that we learn, otherwise, we'd have a failure of the mental to supervene on the physical, and physicalism would be false.
They have changed in the sense that our neurons also change when we remember something. Are we different people after acquiring a new memory? If we are, physicalism is true, since our personalities map onto our physical structure. I don't think we are, though, since a process run on different data, producing different data, is still the same process.
Quote:
That thinking is symbol manipulation is exactly the thesis computationalism seeks to demonstrate, and I believe is false.
I put thinking in quotes to not beg the question. I'm not saying symbol manipulation is thinking, just that whatever thinking is should be the same no matter what symbols are involved.
Quote:
As I said above, I just mean that they're the same partial function (equivalently, the same TM).
That doesn't really answer the question, since you could have two TMs computing the same partial function: TM2 could write and then erase stuff on its tape and still produce the same output as TM1. Are they doing the same computation?
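
A toy illustration of what I mean (my own, and deliberately trivial): both routines below compute the same function, but the second does pointless write-then-erase work along the way, like TM2.

Code:
def add_direct(a, b):
    # TM1: straight to the answer.
    return a + b

def add_with_scratch(a, b):
    # TM2: writes scratch marks on a "tape", then erases them all again.
    tape = [1] * (a + b)
    result = len(tape)
    tape.clear()
    return result

# Same partial function, different intermediate behavior:
assert all(add_direct(a, b) == add_with_scratch(a, b)
           for a in range(10) for b in range(10))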
  #271  
Old 05-28-2019, 04:04 PM
eschereal's Avatar
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 16,371
Quote:
Originally Posted by Voyager View Post
A computation is a process. Unless you say that descriptions are equivalent to processes, they are not the same, and we generally don't equate them; otherwise, how would we know that the process produces the correct result?
How is a description not a process? You take a thing, which may be physical, abstract or emotional, and you convert it to words. The words are then received by another party and reimagined. That sounds exactly like a process to me.
  #272  
Old 05-28-2019, 06:09 PM
SamuelA is offline
Guest
 
Join Date: Feb 2017
Posts: 3,457
Quote:
Originally Posted by Half Man Half Wit View Post
I don't really have to 'prove' it in the general case, just exhibit a special one where it's wrong. Which is readily done: on integrated information theory (IIT), consciousness is exactly provided by that amount of information about the system you lose if you just consider its individual components. So if you agree that the view is at least possible, slicing up the system into sub-components and considering them independently will exactly lose sight of what's interesting to us.

I don't really think IIT is right, however (although it does make an interesting example against which to test one's views). So let's suppose that what you're saying is true (I believe, ultimately, it is): you can just break down the problem into manageable sub-problems, and solve those. Say you replicate the behavior of individual synapses, neurons, and the like.

The problem is, though, that while that means you can duplicate their behavior, this doesn't straightforwardly entail that you understand how consciousness is generated. While I don't hold that philosophical zombies are metaphysically possible, I do think it's a coherent idea; but then, the mere behavior may tell us nothing about conscious experience.

Make no mistake, I don't think there's any magic sauce to consciousness that can't be reduced to the physical. But I want to know how that reduction goes; and I think to find that out, we need to be honest about the problems involved, rather than hiding them behind vague notions of emergence and complexity and the like.
Here's the deal. I've seen a lot of weird problems with electronics in my career. (I've worked in industry 5 years now; the Master's is a part-time thing.) And while I come up with theories as to the root cause, ultimately, about half the time every theory I have ends up being incorrect. What I end up having to do is set up an experiment, really - make the machine print a log at the moment of failure, or set a pin high when it takes a particular code path, or produce some other definitive result - and gradually narrow down where the problem could be.

Eventually I eliminate all the possibilities of what it can't be and I find the smoking gun.
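
In software terms the experiment looks something like this (a toy sketch, obviously nothing like real firmware):

Code:
import logging
logging.basicConfig(level=logging.DEBUG)

def stage_a(x): return x * 2
def stage_b(x): return x - 7   # suppose the bug hides somewhere in here
def stage_c(x): return x // 3

def process(x):
    # Instrument every stage so the log pinpoints where the value goes bad -
    # the software analogue of setting a pin high on a particular code path.
    for name, stage in [("a", stage_a), ("b", stage_b), ("c", stage_c)]:
        x = stage(x)
        logging.debug("after stage %s: %r", name, x)
    return x

process(42)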

My feeling is that with consciousness, neuroscientists may need a lot more tools than they have had access to so far. Kind of like how subatomic particles couldn't really be found until particle accelerators and their high-resolution collision detectors were available to show what happens.

Such as complete digital emulations of sections of human cortex, simulated environments, various machine learning algorithms that use unsupervised learning to find the underlying patterns and explore them.

So it's nice to speculate but trying to figure out consciousness now seems like trying to figure out the Linux operating system (if we didn't have source code) when all we have are incomplete assembly language dumps of the system when it's running, and we can only see a tiny fraction of the address space at any given time.

Last edited by SamuelA; 05-28-2019 at 06:10 PM.
  #273  
Old 05-29-2019, 12:59 AM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,157
Quote:
Originally Posted by eschereal View Post
How is a description not a process? You take a thing, which may be physical, abstract or emotional, and you convert it to words. The words are then received by another party and reimagined. That sounds exactly like a process to me.
Creating a description is a process, but the description itself is static and isn't a process.
Printed source code is a description of a program. It isn't a process until it is compiled and executed. And of course taking the code from a file and sending it to a printer is a process too.
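
A toy Python illustration of the point (my example, nothing more): the string below is a perfectly good description of a computation, and it stays inert until something actually executes it.

Code:
# A description of a computation, sitting inert as static text:
source = "print(6 * 7)"

# Only when something runs it is there a process:
exec(source)   # prints 42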
  #274  
Old 05-29-2019, 01:03 AM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,157
Quote:
Originally Posted by SamuelA View Post

So it's nice to speculate but trying to figure out consciousness now seems like trying to figure out the Linux operating system (if we didn't have source code) when all we have are incomplete assembly language dumps of the system when it's running, and we can only see a tiny fraction of the address space at any given time.
Not to mention we have only a partial description of the architecture of the computer on which it is running, half of which is incorrect.
I predict you are going to have fun debugging hardware. I worked on that for 37 years. You haven't lived until you've gone to a meeting or two a week, for a year, run by a VP, on why our chip was dying mysteriously.
  #275  
Old 05-29-2019, 05:34 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Quote:
Originally Posted by wolfpup View Post
I can't make heads or tails out of how you reached that conclusion (the claim that computations producing different outputs are the same) from anything I said, either in your quotes or anywhere else.
Well, I simply don't know how else to interpret your claim that my functions f and f'---which as I've explicitly written down map the same inputs to different outputs, and which you could encode into machine tables for different TMs---are the same, in any way, sense, or form.

I mean, I feel kinda silly, but here are the functions again:

Code:
x1 |  x2   ||   f(x1, x2)
-----------------------
0  |  0    ||       0
0  |  1    ||       1
0  |  2    ||       2
0  |  3    ||       3
1  |  0    ||       1
1  |  1    ||       2
1  |  2    ||       3
1  |  3    ||       4
2  |  0    ||       2
2  |  1    ||       3
2  |  2    ||       4
2  |  3    ||       5
3  |  0    ||       3
3  |  1    ||       4
3  |  2    ||       5
3  |  3    ||       6
Code:
x1 |  x2   ||  f'(x1, x2)
-----------------------
0 |   0    ||       0
0 |   2    ||       4
0 |   1    ||       2
0 |   3    ||       6
2 |   0    ||       4
2 |   2    ||       2
2 |   1    ||       6
2 |   3    ||       1
1 |   0    ||       2
1 |   2    ||       6
1 |   1    ||       1
1 |   3    ||       5
3 |   0    ||       6
3 |   2    ||       1
3 |   1    ||       5
3 |   3    ||       3

Quote:
I've been very clear from the beginning that the mapping of input symbols to output symbols is what defines a computation, and I've never said anything different. Any such reading would be a misinterpretation, but I don't see anything in those quotes that could be read that way.
So the above functions are manifestly different mappings of input symbols to output symbols; yet, you claim them to be the same.

Quote:
To expand on that more fully, we need to keep in mind what a "symbol" is; FTR, I defined it here, and let me repeat the key part: a "symbol" is a token -- an abstract unit of information -- that in itself bears no relationship to the thing it is supposed to represent, exactly like the bits and bytes in a computer. It's a logical abstraction, not a physical thing, which takes different forms in different contexts and has corresponding physical instantiations. In a calculator, the input and output symbols are the numerical digits. They are not the segments of an LED.
But a change in the LEDs implies a change in the digits. A defect could change a 9 into a 3, for example.

Quote:
So in my descriptive model any given computation is the same regardless of whether one or more LED segments are defective, because the (abstract) symbols being logically output are the same. Unlike your description, I don't need an "interpreter" to make it so.
Without an interpreter, there are no abstract symbols being output, only physical states of the system---i. e. LED patterns.

Quote:
Speaking of irony, I must point out first of all how deeply ironic it is that you're trying to support your argument with a cite offering boundless praise for the work of Hilary Putnam in establishing the computational theory of mind when you just finished telling us over here that it's a worthless theory that he subsequently "dismantled"!
I was also praising him there (in part, for his intellectual honesty in admitting his earlier mistake), so where do you think any 'irony' lurks?

Quote:
I think the issue here is disagreement over terms of art, and specifically what you variously refer to as "identity physicalism" and "identity theory". AIUI, Putnam rejected type-identity physicalism which holds that particular types of mental states are categorically correlated with specific brain events, and endorsed instead a sort of token-identity physicalism which implies that different species can experience similar mental states in different physical ways, which led to the important idea of multiple realizability that I mentioned very early on in this discussion.
No, I don't think this is right. Putnam was very explicitly proposing functionalism, which is distinct from token-identity physicalism (which only came about somewhat later). Besides, token-identity theory doesn't actually help with multiple realizability: a pain is a token of a given type (mental state), and, on multiple realizability, is realized by tokens of distinct types; so since those latter tokens are not identical, neither can the pain-token be to either of them. (I. e. if a pain-token is identical to a neural activation-token---even if the type of neural activation is not identical with the type of pain---, then that pain-token can't be identical to a silicon chip voltage configuration-token, since the silicon chip voltage configuration-token isn't identical to the neural activation-token.)

In any case, token-identity physicalism is still a very different view from computationalist functionalism.

Quote:
Originally Posted by Voyager View Post
A computation is a process. Unless you say that descriptions are equivalent to processes, they are not the same, and we generally don't equate them; otherwise, how would we know that the process produces the correct result?
What about a movie, then? Is that a process? I would say it's a description, or perhaps a depiction---with a written description likewise being a kind of depiction.

You might want to hold that in a movie, the frames aren't logically connected to one another, but, as in my argument above, that ceases to be true once you compress the movie. So do we run the danger of creating a conscious brain by sufficiently compressing the movie taken of a subject's brain activity?

I think that would be an absurd consequence. So I think that computations really are just descriptions, as well---highly efficient descriptions, perhaps description schemes, such that you can use one scheme with different initial data ('key frames') to produce descriptions about different systems; but still, not in any sense more real or fundamentally different than just a written description. And equally unlikely to ever give rise to a conscious mind, or a real universe; and like other descriptions, always subject to interpretation, and only intelligible to those capable of interpreting them.
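
To make the compression point concrete, here's a crude sketch (invented frame data; real codecs are vastly more sophisticated): once a movie is stored as a key frame plus differences, each later frame is literally computed from its predecessor.

Code:
def compress(frames):
    # Store the first frame whole, and only differences thereafter.
    key, deltas = frames[0], []
    for prev, cur in zip(frames, frames[1:]):
        deltas.append([c - p for p, c in zip(prev, cur)])
    return key, deltas

def decompress(key, deltas):
    # Reproduction is now a process: frame = f(previous frame).
    out, cur = [key], key
    for d in deltas:
        cur = [p + dd for p, dd in zip(cur, d)]
        out.append(cur)
    return out

frames = [[0, 0, 1], [0, 1, 1], [1, 1, 1]]   # an invented three-frame "movie"
key, deltas = compress(frames)
assert decompress(key, deltas) == frames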

Quote:
They have changed in the sense that our neurons also change when we remember something. Are we different people after acquiring a new memory? If we are, physicalism is true, since our personalities map onto our physical structure.
It's a minor point, but that mere fact doesn't suffice for physicalism to be true. On things like neutral monism and dual-aspect theories, not to mention panpsychism, you'll still have a one-to-one correspondence between physical states and states of mind, but the consciousness isn't due to the physical facts about a system.

Quote:
That doesn't really answer the question, since you could have two TMs computing the same partial function: TM2 could write and then erase stuff on its tape and still produce the same output as TM1. Are they doing the same computation?
Sure; they'd follow a different method to implement it, but it's really only the result that counts. Again, if I want to compute the square root, then any method that ends with me knowing the square root will perfectly suffice to do that.
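
(A throwaway sketch of what I mean: bisection and Newton's method proceed quite differently, but either one leaves me knowing the square root.)

Code:
def sqrt_bisect(x, eps=1e-9):
    # Halve an interval around the root until it's narrow enough.
    lo, hi = 0.0, max(1.0, x)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < x else (lo, mid)
    return lo

def sqrt_newton(x, eps=1e-9):
    # Refine a guess by averaging it with x / guess.
    guess = max(1.0, x)
    while abs(guess * guess - x) > eps:
        guess = (guess + x / guess) / 2
    return guess

assert abs(sqrt_bisect(2) - sqrt_newton(2)) < 1e-6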

Quote:
Originally Posted by SamuelA View Post
My feeling is that with consciousness, neuroscientists may need a lot more tools than they have had access to so far. Kind of like how subatomic particles couldn't really be found until particle accelerators and their high-resolution collision detectors were available to show what happens.
The difference being, that large stuff is composed of smaller stuff is an idea that's been around for thousands of years, and while the details prove tricky, the basic picture is clear. That's not the case with consciousness: nobody even has a plausible story how consciousness could come about; we don't even know what that kind of explanation would look like. I can't think of any other problem where that's the case.

Quote:
So it's nice to speculate but trying to figure out consciousness now seems like trying to figure out the Linux operating system (if we didn't have source code) when all we have are incomplete assembly language dumps of the system when it's running, and we can only see a tiny fraction of the address space at any given time.
Even in that case, while you don't know the answer, you know what it'll look like, and how to get there. But that's exactly what we're struggling with right now regarding consciousness, so nobody really has any idea whether things like brain emulation and the like will get us anywhere nearer to figuring it out. It's still a good thing to try, of course, but we mustn't kid ourselves that it's anything but a shot in the dark.
  #276  
Old 05-29-2019, 10:23 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Quote:
Originally Posted by Half Man Half Wit View Post
Besides, token-identity theory doesn't actually help with multiple realizability: a pain is a token of a given type (mental state), and, on multiple realizability, is realized by tokens of distinct types; so since those latter tokens are not identical, neither can the pain-token be to either of them. (I. e. if a pain-token is identical to a neural activation-token---even if the type of neural activation is not identical with the type of pain---, then that pain-token can't be identical to a silicon chip voltage configuration-token, since the silicon chip voltage configuration-token isn't identical to the neural activation-token.)
Hmm, no. Token physicalism isn't threatened by multiple realizability, if you don't require reductionism.
  #277  
Old 05-29-2019, 11:39 AM
eschereal's Avatar
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 16,371
Quote:
Originally Posted by Voyager View Post
Creating a description is a process, but the description itself is static and isn't a process.
Printed source code is a description of a program. It isn't a process until it is compiled and executed. And of course taking the code from a file and sending it to a printer is a process too.
Yes, the content of the description itself is static. But as some words, or bytes, it is inert data. Then it is interpreted by a second party (or perhaps by the original creator from their own notebook). That is another process.

In other words, “description” means information transferred between parties - dual processes, in effect. You may not know what it is (what it describes, or even that it is in fact a description) until you yourself process it.
  #278  
Old 05-29-2019, 01:13 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,829
Quote:
Originally Posted by Half Man Half Wit View Post
Well, I simply don't know how else to interpret your claim that my functions f and f'---which as I've explicitly written down map the same inputs to different outputs, and which you could encode into machine tables for different TMs---are the same, in any way, sense, or form.

I mean, I fell kinda silly, but here are the functions again:

...

So the above functions are manifestly different mappings of input symbols to output symbols; yet, you claim them to be the same.
OK, thanks, now I understand what you were referring to, as outlandish as it is. But this is the very thing you said before, and which I already refuted over in #263. At first I thought you hadn't seen it, but you did respond to it in the next post. Your response was to say that the tables you wrote down were different because the computations were different, which begs the question by presupposing the very conclusion at issue, and does absolutely nothing to advance your argument.

In my rebuttal of this circular argument, I said (in #263) that your tables are different because they embody the arbitrary semantics of the different interpretations. That such arbitrary interpretations are possible is not in dispute; what is in dispute is whether they all represent exactly equivalent computations. And that's not a hard question to resolve.

We resolve it by asking how many such arbitrary interpretations there can be, and we observe that it's not just two: by the simple expedient of arbitrary minor tweaks to what each bit is taken to mean, as you did in your f' function, we find that there are in fact an infinite number of possible interpretations. A person might naturally gravitate to the simple binary arithmetic interpretation as the most intuitive one, but as you yourself would point out -- having contrived the f' function -- none of these interpretations is intrinsically any more valid than any other.

So if each interpretation is indeed a distinct computation, this amazing box is in fact performing an infinite number of computations, and it's doing all of them simultaneously! That is an amazing box indeed, and clearly an absurdity.

My position is simply that the box is performing only one computation, and as such, it can be represented by just one TM, or just one table of input-output mappings.

And to be perfectly clear, this is that table:
Code:
 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  0 |  0 |  0
  0  |  1  |  0  |  0   ||  0 |  0 |  1
  1  |  0  |  0  |  0   ||  0 |  1 |  0
  1  |  1  |  0  |  0   ||  0 |  1 |  1
  0  |  0  |  0  |  1   ||  0 |  0 |  1
  0  |  1  |  0  |  1   ||  0 |  1 |  0
  1  |  0  |  0  |  1   ||  0 |  1 |  1
  1  |  1  |  0  |  1   ||  1 |  0 |  0
  0  |  0  |  1  |  0   ||  0 |  1 |  0
  0  |  1  |  1  |  0   ||  0 |  1 |  1
  1  |  0  |  1  |  0   ||  1 |  0 |  0
  1  |  1  |  1  |  0   ||  1 |  0 |  1
  0  |  0  |  1  |  1   ||  0 |  1 |  1
  0  |  1  |  1  |  1   ||  1 |  0 |  0
  1  |  0  |  1  |  1   ||  1 |  0 |  1
  1  |  1  |  1  |  1   ||  1 |  1 |  0
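
If it helps to see it executably, here is that same table as a bare lookup (Python purely for concreteness; the check at the end merely confirms that, under one possible binary reading, the lookup happens to coincide with addition):

Code:
# The box as a bare lookup: switch states in, lamp states out.
BOX = {
    (0, 0, 0, 0): (0, 0, 0),  (0, 1, 0, 0): (0, 0, 1),
    (1, 0, 0, 0): (0, 1, 0),  (1, 1, 0, 0): (0, 1, 1),
    (0, 0, 0, 1): (0, 0, 1),  (0, 1, 0, 1): (0, 1, 0),
    (1, 0, 0, 1): (0, 1, 1),  (1, 1, 0, 1): (1, 0, 0),
    (0, 0, 1, 0): (0, 1, 0),  (0, 1, 1, 0): (0, 1, 1),
    (1, 0, 1, 0): (1, 0, 0),  (1, 1, 1, 0): (1, 0, 1),
    (0, 0, 1, 1): (0, 1, 1),  (0, 1, 1, 1): (1, 0, 0),
    (1, 0, 1, 1): (1, 0, 1),  (1, 1, 1, 1): (1, 1, 0),
}

# Under the usual binary reading this coincides with addition, but the
# lookup above is the computation itself.
for (s11, s12, s21, s22), (l1, l2, l3) in BOX.items():
    assert (2*s11 + s12) + (2*s21 + s22) == 4*l1 + 2*l2 + l3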
Quote:
Originally Posted by Half Man Half Wit View Post
But a change in the LEDs implies a change in the digits. A defect could change a 9 into a 3, for example.


Without an interpreter, there are no abstract symbols being output, only physical states of the system---i. e. LED patterns.
No, it's actually the other way around. You're the one who claimed earlier that a Turing machine requires no interpreter since its symbols are abstractions. So is my computational model of the calculator. It is you who, by requiring an interpreter, potentially changes the nature of the output symbols: according to your own definition (not mine!), if LED segments fail or light up incorrectly and the wrong symbols are perceived, the very nature of the computation changes, because you've made your interpreter an intrinsic part of the computational process!

Quote:
Originally Posted by Half Man Half Wit View Post
No, I don't think this is right. Putnam was very explicitly proposing functionalism, which is distinct from token-identity physicalism (which only came about somewhat later). Besides, token-identity theory doesn't actually help with multiple realizability: a pain is a token of a given type (mental state), and, on multiple realizability, is realized by tokens of distinct types; so since those latter tokens are not identical, neither can the pain-token be to either of them. (I. e. if a pain-token is identical to a neural activation-token---even if the type of neural activation is not identical with the type of pain---, then that pain-token can't be identical to a silicon chip voltage configuration-token, since the silicon chip voltage configuration-token isn't identical to the neural activation-token.)
At this point I think unless you have an answer to the reductio ad absurdum in the first part of my post, this line of argument is futile because if you want to conclude that it supports "identity physicalism", then I say, so be it. If this is a consequence of the conclusion that the nature of the computation resides entirely within the physical box, then it's a consequence we have to deal with.

Last edited by wolfpup; 05-29-2019 at 01:15 PM.
  #279  
Old 05-29-2019, 03:25 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Quote:
Originally Posted by wolfpup View Post
My position is simply that the box is performing only one computation, and as such, it can be represented by just one TM, or just one table of input-output mappings.

And to be perfectly clear, this is that table:
...
It sounds like you are saying that computation is the transformation of inputs to outputs regardless of the interpretation or meaning of any of the symbols, correct? (note: this is in line with what I have seen presented in other places)
  #280  
Old 05-29-2019, 03:30 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Assuming you agree with that, onto the next question:
Given that computations can be interpreted infinitely many ways, does consciousness arise only with some of those interpretations? Or are all interpretations conscious, including the tornado simulation?
  #281  
Old 05-29-2019, 06:12 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,157
Quote:
Originally Posted by Half Man Half Wit View Post


What about a movie, then? Is that a process? I would say it's a description, or perhaps a depiction---with a written description likewise being a kind of depiction.

You might want to hold that in a movie, the frames aren't logically connected to one another, but, as in my argument above, that ceases to be true once you compress the movie. So do we run the danger of creating a conscious brain by sufficiently compressing the movie taken of a subject's brain activity?

I think that would be an absurd consequence. So I think that computations really are just descriptions, as well---highly efficient descriptions, perhaps description schemes, such that you can use one scheme with different initial data ('key frames') to produce descriptions about different systems; but still, not in any sense more real or fundamentally different than just a written description. And equally unlikely to ever give rise to a conscious mind, or a real universe; and like other descriptions, always subject to interpretation, and only intelligible to those capable of interpreting them.
No, a movie is not a process. To get closer to the area of discussion, we have ways of making a movie of the bits flowing through the data paths in an integrated circuit. That movie of the process the computer is running is not itself a process. I'll agree with you that this description is never going to lead to consciousness or intelligence, any more than an animated thing will ever come to life.
  #282  
Old 05-29-2019, 06:15 PM
Voyager's Avatar
Voyager is offline
Charter Member
 
Join Date: Aug 2002
Location: Deep Space
Posts: 46,157
Quote:
Originally Posted by eschereal View Post

In other words, “description” means information transferred between parties - dual processes, in effect. You may not know what it is (what it describes, or even that it is in fact a description) until you yourself process it.
Why does a description have to be transferred? I described a system I built in a document. Is it not a description because no one read it? (Which is closer to reality than I like to say.)

Writing and reading the description are of course processes, so is emailing it.
  #283  
Old 05-29-2019, 06:54 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Ah, that was a nice extended weekend. Hmm, the thread has run far in my absence; I'll just make a few notes:
Quote:
Originally Posted by Half Man Half Wit View Post
I suppose I can take some solace in the fact that the both of you at least seem to realize that the straightforward 'vanilla' version of computationalism---a physical system executes a program CM which produces a mind M---isn't going to work.
Take no solace, for this is wrong. I think it's entirely possible to create, by running a computer program, a consciousness that perceives itself the same way that we perceive ourselves.

In fact I believe that any physicalist model of the universe (that is, any that doesn't involve ghosts) requires that emulation of consciousness be possible. Physical reality follows regular rules and can be modeled. If the brain and everything else that creates a mind is contained in physical reality, then it's a simple fact that the physical processes that create the mind can be reproduced, with all of their side effects (including a 'seat of consciousness'), in a sufficiently detailed and accurate simulation.

Quote:
Originally Posted by Half Man Half Wit View Post
Because that example trades on mistaking the form of the symbol, rather than interpreting its content differently. There is a syntactical difference between MOM and WOW, such that the same system, being given either one, may react differently; the point I'm trying to make is, however, related to semantic differences---see my earlier example of the word 'gift'.

For the box, your MOM/WOW example would be analogous to re-wiring the switches to the various boxes---thus, changing the way it reacts to inputs. But that's not what this is about.
I hate to have to point out something this obvious, but "WOW" is just "MOM" upside-down. By standing on the other side of the table the paper is lying on - by looking at the output from a different point of view - the output is interpreted differently.

If the notion of flipping the paper upside down is too complicated for you, consider a piece of paper with the following printed on it: 101. What does it mean? A hundred and one? Five, in binary? Two hundred fifty seven, in hexadecimal? Who knows? It's a matter of interpretation, just exactly like and completely analogous to your box.

So the paper, just like your box's output, is entirely static, but multiple interpretations are possible. The difference is only in the mind of the observer interpreting things. Same as your box example. The box itself is doing a single computation/calculation/whatever and its output is the same for a given input. There is no ambiguity in the computation/calculation/whatever that the box is doing - the same way there is no ambiguity in which areas of the WOW/MOM or 101 papers have ink on them and which ones don't. The paper is directly analogous to your calculating box - in both cases there is only ambiguity in the eyes and mind of the observer/interpreter.

And in my opinion the ambiguity in the eyes and mind of the observer/interpreter has absolutely squat to do with the behavior of the box. Including the observer's interpretations in your definition of "computation" is nonsense that I'm not playing along with, and your claims that the operation of the mind would be circular if done by a calculating machine rely on using that nonsensical definition of "calculation" and are thus also nonsense. The very structure of your example highlights your error by clearly separating the calculation from anything uncertain - and making it so that I can accurately reconstruct your argument around a piece of printed paper and then prove that the interpretation ambiguities in your argument have nothing to do with the calculation occurring inside the box.

Now, if we wanted to get away from your stupid argument and stop separating the interpretation from the calculation, we can see that interpretation is part of many calculations, but there is no ambiguity introduced by that, because the calculation process is not going to be fluctuating and changing how it interprets things midstream. That was the point of asking you if your wiring was deterministic or schroedingerian, which you confusedly interpreted as me thinking you needed to lay out the wiring precisely - the point is that because there's no ambiguity in the calculation there is also no ambiguity in the way the calculation interprets things.

As I've noted, I'm a computer programmer. It's extremely common for me to store encoded values. 0=invalid and 1=valid. 0=valid and 1=invalid. That sort of thing. Now, look at those two value mappings - they're entirely contradictory. If you were using one to examine data that was stored the other way, you'd get everything wrong. But that doesn't happen because the calculation knows which interpretation it's using. There's nowhere in the closed system of the calculation for the meanings to get lost because it's a closed system, and a specific interpretation is correct because that's the interpretation that the system happens to be using.

Which is to say that the determination of the correct interpretation isn't circular; it's arbitrary. There is a difference.
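
A trivial sketch of those opposite encodings (toy names, obviously):

Code:
MODULE_A = {"VALID": 1, "INVALID": 0}   # here, 1 means valid
MODULE_B = {"VALID": 0, "INVALID": 1}   # here, 1 means invalid

def a_is_valid(flag):
    return flag == MODULE_A["VALID"]

def b_is_valid(flag):
    return flag == MODULE_B["VALID"]

# Each module decodes its own data correctly; the convention is arbitrary
# but fixed, so inside each closed system there is no ambiguity at all.
assert a_is_valid(1) and not a_is_valid(0)
assert b_is_valid(0) and not b_is_valid(1)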
  #284  
Old 05-29-2019, 07:02 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Quote:
Originally Posted by begbert2 View Post
Which is to say that the determination of the correct interpretation isn't circular; it's arbitrary. There is a difference.
If you have a computer program that can be equally said to simulate a brain or a tornado (because the interpretation of the computation is up to the external agent), does consciousness exist even when we interpret it as simulating a tornado?
  #285  
Old 05-29-2019, 07:10 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by RaftPeople View Post
If you have a computer program that can be equally said to simulate a brain or a tornado (because the interpretation of the computation is up to the external agent), does consciousness exist even when we interpret it as simulating a tornado?
In a materialist system, if consciousness exists it exists within the point of view of the thing housing the consciousness. Which is to say, the consciousness has a 'seat of consciousness' and is independently aware of its existence and any surroundings its containing object gives it the senses to perceive.

If this is happening in your simulation, then it's happening. It doesn't matter if some outside observer is aware of it or not. You could interpret it as a brain, a tornado, a blizzard of 1s and 0s, and that won't affect what the contents of the simulator are aware of.

If I look at a person and fail to recognize that they're self-aware (perhaps due to their extremely convincing tornado costume), that doesn't mean they're not self-aware. The same goes for any self-aware simulations you might run across.
  #286  
Old 05-29-2019, 07:25 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Quote:
Originally Posted by begbert2 View Post
In a materialist system, if consciousness exists it exists within the point of view of the thing housing the consciousness. Which is to say, the consciousness has a 'seat of consciousness' and is independently aware of its existence and any surroundings its containing object gives it the senses to perceive.

If this is happening in your simulation, then it's happening. It doesn't matter if some outside observer is aware of it or not. You could interpret it as a brain, a tornado, a blizzard of 1s and 0s, and that won't affect what the contents of the simulator are aware of.

If I look at a person and fail to recognize that they're self-aware (perhaps due to their extremely convincing tornado costume), that doesn't mean they're not self-aware. The same goes for any self-aware simulations you might run across.
It seems like you agree with these two statements:
1 - The 1's and 0's of the system require an interpretation to decide whether it's a brain simulation or a tornado simulation
2 - The interpretation does not impact whether consciousness has arisen or not

Which leads to this question:
How do we decide which sequence of 1's and 0's can create consciousness?
Is there any value in modeling it after the brain? It might be easier to just create a tornado simulation, or, even better, randomly generated code.

If randomly generated code does not seem like a good approach, then what is it exactly about the randomly generated code that is any worse than any other program when we are trying to create consciousness?
  #287  
Old 05-30-2019, 03:47 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,818
Quote:
Originally Posted by wolfpup View Post
OK, thanks, now I understand what you were referring to, as outlandish as it is. But this is the very thing you said before, and which I already refuted over in #263. At first I thought you hadn't seen it, but you did respond to it in the next post. Your response was to say that the tables you wrote down were different because the computations were different, which begs the question by presupposing the very conclusion at issue, and does absolutely nothing to advance your argument.
Well, the thing is, a function like f is something that we routinely take ourselves to have computed. We say that a calculator adds numbers; this refers to computing f to the exclusion of any other computation. We don't say that a calculator takes numerals to numerals; we take a calculator that adds 3 to 4 and obtains 7 to have done fundamentally the same thing as one that takes III and IV and returns VII---both have added the same numbers, just expressed differently.

On your construal of computation, I just don't see how that claim could ever come out to be true. Or are you claiming that it never is true? That we're using some 'folk notion' of computation when we're making such claims, which on closer analysis is seen to be false? And furthermore, that the 'true' notion of computation is just basically over the symbols sans interpretation?

But then, how do we ever get to meaningful symbols? How do we get, say, from pictures on a screen to the planets in the sky? No computer ever outputs planets, yet, we seem to be doing alright simulating the solar system.

What is the process by which we take what you claim are just manipulations on symbols with arbitrary semantics to concrete physical objects like planets, or abstract objects like numbers? Is it computational? If so, then why doesn't the computer just take care of it by itself? And if it isn't computational, then, of course, minds must be capable of doing something that's not computational. So what gives? How come I can take a system and compute addition, or simulate the movements of planets, if the system itself can never do anything but shuffle meaningless symbols around, but I don't do anything the system couldn't do itself, as well?

Quote:
We resolve it by asking how many such arbitrary interpretations there can be, and we observe that it's not just two: by the simple expedient of arbitrary minor tweaks to what each bit is taken to mean, as you did in your f' function, we find that there are in fact an infinite number of possible interpretations.
Not quite, though. We have 2^4 = 16 different input states, and 2^3 = 8 different output states, so the number of different functions between them is 8^16 ≈ 2.8 * 10^14, which is perhaps somewhat large-ish, but not really infinite. You're right that one might relax this somewhat, but, in my experience, people are really resistant to 'arbitrary' interpretations (most would not agree that one can take one switch being 'up' to mean 1, while another's 'up' may be 0, even though there's no intrinsic problem with that).

Quote:
So if each interpretation is indeed a distinct computation, this amazing box is in fact performing an infinite number of computations, and it's doing all of them simultaneously! That is an amazing box indeed, and clearly an absurdity.
It really isn't, though. It's a well-worn stance in philosophy, known as unrestricted pancomputationalism.

As an analogy, take the word 'dog'. It means something like a small four-legged furry domesticated animal, with various additional qualifiers to pin down the meaning more accurately. But there's nothing about 'dog' (the word) that makes it mean that. It could equally well mean cat, or bird, or rock, or any of the infinitely many things that there are in the world: the association between a symbol and its meaning is arbitrary.

The analogy of your stance would then be that that's clearly absurd, that 'dog' can't mean infinitely many things. But there's simply no reason to think so. In any given usage, it means whatever we take it to mean---in the same sense, a computer computes whatever we take it to compute. There are no infinitely many computations lurking in the shadows, anymore than there are infinitely many meanings of 'dog'. It's just an instance of convention, of interpretation. All I'm claiming is that what holds for the symbols of natural language, likewise holds for the symbols of physically instantiated computation.

Quote:
My position is simply that the box is performing only one computation, and as such, it can be represented by just one TM, or just one table of input-output mappings.

And to be perfectly clear, this is that table:
Code:
 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  0 |  0 |  0
  0  |  1  |  0  |  0   ||  0 |  0 |  1
  1  |  0  |  0  |  0   ||  0 |  1 |  0
  1  |  1  |  0  |  0   ||  0 |  1 |  1
  0  |  0  |  0  |  1   ||  0 |  0 |  1
  0  |  1  |  0  |  1   ||  0 |  1 |  0
  1  |  0  |  0  |  1   ||  0 |  1 |  1
  1  |  1  |  0  |  1   ||  1 |  0 |  0
  0  |  0  |  1  |  0   ||  0 |  1 |  0
  0  |  1  |  1  |  0   ||  0 |  1 |  1
  1  |  0  |  1  |  0   ||  1 |  0 |  0
  1  |  1  |  1  |  0   ||  1 |  0 |  1
  0  |  0  |  1  |  1   ||  0 |  1 |  1
  0  |  1  |  1  |  1   ||  1 |  0 |  0
  1  |  0  |  1  |  1   ||  1 |  0 |  1
  1  |  1  |  1  |  1   ||  1 |  1 |  0
Why that table, though? Why not this one:

Code:
 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  1 |  1 |  1
  0  |  1  |  0  |  0   ||  1 |  1 |  0
  1  |  0  |  0  |  0   ||  1 |  0 |  1
  1  |  1  |  0  |  0   ||  1 |  0 |  0
  0  |  0  |  0  |  1   ||  1 |  1 |  0
  0  |  1  |  0  |  1   ||  1 |  0 |  1
  1  |  0  |  0  |  1   ||  1 |  0 |  0
  1  |  1  |  0  |  1   ||  0 |  1 |  1
  0  |  0  |  1  |  0   ||  1 |  0 |  1
  0  |  1  |  1  |  0   ||  1 |  0 |  0
  1  |  0  |  1  |  0   ||  0 |  1 |  1
  1  |  1  |  1  |  0   ||  0 |  1 |  0
  0  |  0  |  1  |  1   ||  1 |  0 |  0
  0  |  1  |  1  |  1   ||  0 |  1 |  1
  1  |  0  |  1  |  1   ||  0 |  1 |  0
  1  |  1  |  1  |  1   ||  0 |  0 |  1
Why not any of the other possibilities?
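
(In executable terms - my toy rendering: the second table is just the first with every lamp read 'active-low', i.e. lit = 0, and nothing about the lamps themselves privileges either convention.)

Code:
from itertools import product

# The first table, generated programmatically (2-bit addition under the
# usual reading): switch bits in, lamp bits out.
table_1 = {(a1, a0, b1, b0):
               tuple(int(c) for c in format((2*a1 + a0) + (2*b1 + b0), '03b'))
           for a1, a0, b1, b0 in product((0, 1), repeat=4)}

# The second table: the very same box, with every lamp read active-low.
table_2 = {sw: tuple(1 - b for b in lamps) for sw, lamps in table_1.items()}

assert table_2[(1, 1, 1, 1)] == (0, 0, 1)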

Quote:
No, it's actually the other way around. You're the one who claimed earlier that a Turing machine requires no interpreter since its symbols are abstractions. So is my computational model of the calculator. It is you who, by requiring an interpreter, potentially changes the nature of the output symbols: according to your own definition (not mine!), if LED segments fail or light up incorrectly and the wrong symbols are perceived, the very nature of the computation changes, because you've made your interpreter an intrinsic part of the computational process!
On my interpretation, the change in symbols may change the computation, because there's a further fact of the matter (given by interpretation) regarding the question what is being computed. On your interpretation, changing the output symbols must change the computation, since the output symbols are all that matters for individuating a computation.

Quote:
At this point I think unless you have an answer to the reductio ad absurdum in the first part of my post
Reductio ad absurdum doesn't mean pointing out a consequence you think is uncomfortable, but rather pointing out an inconsistency. While it's maybe strange, there's nothing inconsistent about a system (potentially) implementing any computation whatsoever. Besides, I don't actually think they do: there are constraints given by the structure of the system. But that's an argument that's still a ways down the road from where we are right now, I'm afraid.

Quote:
Originally Posted by Voyager View Post
No, a movie is not a process. To get closer to the area of discussion, we have ways of making a movie of the bits flowing through the data paths in an integrated circuit. That movie of the process the computer is running is not itself a process. I'll agree with you that this description is never going to lead to consciousness or intelligence, any more than an animated thing will ever come to life.
You neglected the meaty part of my argument, though---once I compress a movie, its reproduction becomes a process. Do therefore the things shown in the movie gain reality?

Quote:
Originally Posted by begbert2 View Post
Ah, that was a nice extended weekend. Hmm, the thread has run far in my absence; I'll just make a few notes:
Take no solace, for this is wrong. I think it's entirely possible to create, by running a computer program, a consciousness that perceives itself the same way that we perceive ourselves.
So, are you no longer claiming, then, that "All that reality "computes" is particles moving around"? Because that's diametrically opposed to there being a program such that it produces consciousness, and needs a claim (generally thought to be false, and certainly immensely problematic) that consciousness is just particles moving around, rather than, say, the functional properties of those particles.

Quote:
In fact I believe that any physicalist model of the universe (that is, any that doesn't involve ghosts) requires that emulation of consciousness be possible. Physical reality follows regular rules and can be modeled. If the brain and everything else that creates a mind is contained in physical reality, then it's a simple fact that the physical processes that create the mind can be reproduced, with all of their side effects (including a 'seat of consciousness'), in a sufficiently detailed and accurate simulation.
There are several claims that are problematic, here. (I mean, problematic for the people who study this sort of thing, obviously; you seem to have the ability to just see what's true and what's not, so this is mainly for the benefit of those who, you know, rely on arguments and that sort of thing.)

For one, there are conceptions of physicalism on which it's not the case that emulation entails realization (say, of mental properties). That's for instance the case on the identity theory I've mentioned: if mental properties are identical to (say) neuronal properties, that provides no grounds to believe that they could be instantiated by simulation. (And again, of course, IIT forms an explicit counterexample to this claim.)

Further, the idea that a model instantiates all the properties of the thing it models is problematic. We agree (I presume) that a description doesn't actually require, or even cause, the reality of the thing described. But it's not clear where the terms 'model' and 'description' diverge---take my earlier example of just successively compressing a movie, until it basically becomes a simulation of the thing shown (this sort of thing can actually be done). So from this point of view, there is simply no reason at all to believe that a simulation of a brain would be conscious, or a simulation of a universe would itself be a universe, any more than to believe that a description of same (even an unbelievably, hugely detailed description) would instantiate the requisite properties.

So the sort of conclusion you want to draw simply doesn't follow: there are counterexamples, and no actual reason to believe your claims.

Quote:
I hate to have to point out something this obvious, but "WOW" is just "MOM" upside-down. By standing on the other side of the table the paper is lying on - by looking at the output from a different point of view - the output is interpreted differently.
The point is that if I read it as 'MOM', it has different syntactic properties from when I read it as 'WOW'. For one, if I were to just read it aloud, I would make different sounds, and my production of these sounds could be entirely described as being directly causally related to the way I take the piece of paper to be oriented.

Think about a box with a matrix of switches, say a 100 x 100 square. If I press them in a 'WOW' pattern, the box is likely to do something else than if I press them in a 'MOM' pattern. Likewise, the retinal and consequent neuronal activity of me seeing 'MOM' is different from me seeing 'WOW'.

Quote:
If the notion of flipping the paper upside down is too complicated for you, consider a piece of paper with the following printed on it: 101. What does it mean? A hundred and one? Five, in binary? Two hundred fifty seven, in hexadecimal? Who knows? It's a matter of interpretation, just exactly like and completely analogous to your box.
On this, I agree (it's my 'gift' example from earlier, or the 'dog' example I've given multiple times now). The reason is that 101 stays in every case syntactically the same while differing in its semantics. That's the core issue here.

Quote:
So the paper, just like your box's output, is entirely static, but multiple interpretations are possible. The difference is only in the mind of the observer interpreting things. Same as your box example.
This is true.

Quote:
The box itself is doing a single computation/calculation/whatever and its output is the same for a given input. There is no ambiguity in the computation/calculation/whatever that the box is doing - the same way there is no ambiguity in which areas of the WOW/MOM or 101 papers have ink on them and which ones don't. The paper is directly analogous to your calculating box - in both cases there is only ambiguity in the eyes and mind of the observer/interpreter.
This isn't. Because, again, we don't take ourselves to be computing symbol patterns; we take ourselves to be computing sums, say. But it's only a sum once you fix an interpretation of the symbols.

Since you're bumping up against the same confusion, let me just repeat what I've asked wolfpup above:
Quote:
Originally Posted by Yours Truly
We say that a calculator adds numbers; this refers to computing f to the exclusion of any other computation. We don't say that a calculator takes numerals to numerals; we take a calculator that adds 3 to 4 and obtains 7 to have done fundamentally the same thing as one that takes III and IV and returns VII---both have added the same numbers, just expressed differently.

On your construal of computation, I just don't see how that claim could ever come out to be true. Or are you claiming that it never is true? That we're using some 'folk notion' of computation when we're making such claims, which on closer analysis is seen to be false? And furthermore, that the 'true' notion of computation is just basically over the symbols sans interpretation?

But then, how do we ever get to meaningful symbols? How do we get, say, from pictures on a screen to the planets in the sky? No computer ever outputs planets, yet, we seem to be doing alright simulating the solar system.

What is the process by which we take what you claim are just manipulations on symbols with arbitrary semantics to concrete physical objects like planets, or abstract objects like numbers? Is it computational? If so, then why doesn't the computer just take care of it by itself? And if it isn't computational, then, of course, minds must be capable of doing something that's not computational. So what gives? How come I can take a system and compute addition, or simulate the movements of planets, if the system itself can never do anything but shuffle meaningless symbols around, but I don't do anything the system couldn't do itself, as well?
Quote:
Originally Posted by begbert2 View Post
Including the observer's interpretations in your definition of "computation" is nonsense that I'm not playing along with, and your claims that the operation of the mind would be circular if done by a calculating machine rely on using that nonsensical definition of "calculation" and are thus also nonsense.
So how does anybody ever compute a sum? How does anybody ever compute a square root, or a rocket's orbit? Your notion of computation would mean that all a computation ever produces is blinking lights. But blinking lights for me may be something entirely different from blinking lights for you.

Quote:
That was the point of asking you if your wiring was deterministic or schroedingerian, which you confusedly interpreted as me thinking you needed to lay out the wiring precisely - the point is that because there's no ambiguity in the calculation there is also no ambiguity in the way the calculation interprets things.
That's a nice attempt at a bit of good old revisionism, but I'll just note that this:
Quote:
Originally Posted by Yours Truly
The internal wiring is wholly inconsequential; all it needs to fulfill is to make the right lights light up if the switches are flipped. There are various ways to do so, if you feel it's important, just choose any one of them.
Was something you explicitly disagreed with.

Quote:
Originally Posted by begbert2 View Post
As I've noted, I'm a computer programmer.
Yes, but no need to apologize, I'm not holding it against you.

Quote:
It's extremely common for me to store encoded values. 0=invalid and 1=valid. 0=valid and 1=invalid. That sort of thing. Now, look at those two value mappings - they're entirely contradictory. If you were using one to examine data that was stored the other way, you'd get everything wrong. But that doesn't happen because the calculation knows which interpretation it's using.
The calculation reacts, of course, ultimately to voltage values, and it reacts the way you've told it to; and that it reacts differently to different voltages is really no surprise (but then that's again the distinction between the 'WOW'/'MOM' example and the 'gift' example you seem to have trouble with). It's you who's interpreting these voltage values as 0 and 1.

---------------------------------------------------------------------------

Anyway. I'll be leaving for vacation today, so I probably won't be back to this thread for a while. But I think that, by now, we're pretty clear on what the fundamental problem is. The general stance is, against my claims, that the box I've proposed really only implements one computation, which is given by its physical evolution, and that my functions f and f' are somehow irrelevant or hallucinatory embellishments of that physical evolution.

Of course, this is hugely problematic as a foundation for computationalism as a distinct stance in the philosophy of mind, but no matter. I think there's quite another way to see the issues with this stance. Because typically, we think that we compute things like addition---like my function f. Additionally, f is a sensible computation on any formalization of computation.

So those that claim that computations can be uniquely instantiated physically, have a chance, here, to prove that claim: simply describe a machine that successfully computes f, in the same way that my box computes whatever you take it to compute. If nothing else, that will at least suffice to clarify the notion of implementation you hold to be the right one.

So that's my challenge (in particular to begbert2 and wolfpup, but everyone can play): describe a machine to instantiate f. Else, if you can't, try to explain how and why we take ourselves to compute f, if we don't have such a machine. In that case, you're stuck having to explain how we do so: either by computation---then, why can't that computation be done by the box?---or not---in which case, computationalism is false anyway.

After all, it's you who's making a claim---that minds can be computationally instantiated---so it ought to be you substantiating it.

Now, of course, I have a pretty good idea of how this will play out: you can't describe such a machine (after all, my box is exactly an example of a machine that one would consider to implement f), so you'll either waffle, or refuse to play. So what I'm really interested in is to see the sorts of justifications you come up with to not have to meet my challenge. But, then again, maybe I'll be surprised---who knows!

Last edited by Half Man Half Wit; 05-30-2019 at 03:51 AM.
  #288  
Old 05-30-2019, 11:15 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by RaftPeople View Post
It seems like you agree with these two statements:
1 - The 1's and 0's of the system require an interpretation to decide whether it's a brain simulation or a tornado simulation
2 - The interpretation does not impact whether consciousness has arisen or not
Technically speaking, if the system has consciousness it doesn't require an outside observer to make decisions about whether it's a consciousness or not, because it can do that itself! It will have its own opinion about whether it's self-aware and it probably doesn't care what you think, unless your opinion about its sentience is the only thing preventing you from incinerating it in a pot of Beezle-Nut oil. ("We are here! We are here! We are here!")

But yeah, if I am walking along and see a human wearing one of those fake cardboard tree costumes you see in elementary school plays, and I glance at the costumed human and interpret him as being a real tree, then I indeed will have failed to recognize the human as being a conscious entity. And you're correct; I'm of the opinion that my misidentification of things I observe doesn't transform them into real trees. Misinterpretations by observers do not transform the observed things.

Quote:
Originally Posted by RaftPeople View Post
Which leads to this question:
How do we decide which sequence of 1's and 0's can create consciousness?
Is there any value to modeling it after the brain? It might be easier to just create a tornado simulation, or even better, randomly generated code.

If randomly generated code does not seem like a good approach, then what is it exactly about the randomly generated code that is any worse than any other program when we are trying to create consciousness?
This is sort of like asking whether, when you're attempting to bake a delicious cake, it's useful to model it on cakes you know about, or better to just throw a bunch of random stuff in a pot and start stirring. Yes, it's possible that tossing together the collection of knickknacks on top of your desk and stirring them will make a delicious cake, but it might not be the surest approach. (I'm actually being serious here; I'm not much of a cook and I don't know what you have on your desk. So who knows? Maybe your desk is a cake in the making.)

If *I* wanted to make a delicious cake, I'd say that a way to be sure you've created a delicious cake would be to take an existing delicious cake and copy it at the submolecular level. (As one does.) In this way I don't have to either invent or stumble onto a working framework for deliciousness; I'm copying something that already works. And as a computer programmer, copying something that already works is a way more certain way of getting what you want than figuring it out yourself.
  #289  
Old 05-30-2019, 12:13 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Quote:
Originally Posted by begbert2 View Post
If *I* wanted to make a delicious cake, I'd say that a way to be sure you've created a delicious cake would be to take an existing delicious cake and copy it at the submolecular level. (As one does.) In this way I don't have to either invent or stumble onto a working framework for deliciousness; I'm copying something that already works. And as a computer programmer, copying something that already works is a way more certain way of getting what you want than figuring it out yourself.
And how exactly do you do that when you are trying to create consciousness by using a different physical medium than the original?

That is the key point: which aspects of the brain's transformations cause consciousness, and how, or even whether, that can be mapped into 1's and 0's?


The problem is very different from the cake example, which duplicates the thing in the same medium with similarity down to a very low level.
  #290  
Old 05-30-2019, 12:40 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by Half Man Half Wit View Post
So, are you no longer claiming, then, that "All that reality "computes" is particles moving around"? Because that's diametrically opposed to there being a program such that it produces consciousness, and needs a claim (generally thought to be false, and certainly immensely problematic) that consciousness is just particles moving around, rather than, say, the functional properties of those particles.
That comment was lampooning your argument, which is to say it *is* your argument. You're stating that if somebody were to look at the universe (or, say, a living human) and announce "I'm going to just think of you as a pile of particles", then the universe (or person) would cease to be anything other than a pile of particles. Or maybe the fact that there's a person out there deciding to interpret reality funny triggers some sort of infinite regression and causes all of reality to disappear in a puff of logic.

If your argument can do it to a calculation, it can do it to reality - there's literally nothing about your argument as presented that prevents the magic of variable interpretation from being applied to things other than calculations. Honestly it's a miracle we've made it this far without somebody making a different interpretation of something and destroying all reality.

Quote:
Originally Posted by Half Man Half Wit View Post
For one, there are conceptions of physicalism on which it's not the case that emulation entails realization (say, of mental properties). That's for instance the case on the identity theory I've mentioned: if mental properties are identical to (say) neuronal properties, that provides no grounds to believe that they could be instantiated by simulation. (And again, of course, IIT forms an explicit counterexample to this claim.)
Yep, we've already discussed IIT, I believe. That's the one where physical matter has souls, right?

Look, I get that people can declare that they refuse to believe that computers emulate consciousness, or deliciousness, or emotion. It's a faith thing. I just don't think that their baseless declarations will make a whit of difference to any simulated entities that we happen to create as they eat their delicious cakes and enjoy every bite.

Quote:
Originally Posted by Half Man Half Wit View Post
Further, the idea that a model instantiates all the properties of the thing it models is problematic. We agree (I presume) that a description doesn't actually require, or even cause, the reality of the thing described. But it's not clear where the terms 'model' and 'description' diverge---take my earlier example of just successively compressing a movie, until it basically becomes a simulation of the thing shown (this sort of thing can actually be done). So from this point of view, there is simply no reason at all to believe that a simulation of a brain would be conscious, or a simulation of a universe would itself be a universe, any more than to believe that a description of same (even an unbelievably, hugely detailed description) would instantiate the requisite properties.

So the sort of conclusion you want to draw simply doesn't follow: there are counterexamples, and no actual reason to believe your claims.
I'm super-not interested in explaining to you what a simulation is, since you're clearly having a problem with that. But I suppose I should make a token effort.

Simulations attempt to replicate behavior. Which behaviors and properties they emulate depend on what effects they're trying to replicate - a 3D renderer attempts to replicate the behavior of light but not heat, mass, or gravity. When you emulate things you get emergent behavior of the things you're emulating, like how you get shadows and reflected images as a side effect of emulating how light bounces, is blocked, and is absorbed. You don't get emergent behavior of behaviors and properties you're not emulating - the rendered image doesn't show the things tumbling to the floor.

In your film example it of course doesn't become "basically a simulation" of the thing shown - that's transparently stupid for numerous obvious reasons, the first being that the movie doesn't even capture a full image of the thing (like from behind), and certainly doesn't capture things like mass and innards.

Stupid counterexamples don't support your case.

Quote:
Originally Posted by Half Man Half Wit View Post
The reason is that 101 stays in every case syntactically the same while differing in its semantics. That's the core issue here.
Yeah, the core issue is that your argument relies on merging the computation and the interpretation into a thing you (for some inexplicable reason) call a computation, and then thinking that by confusing the terms you can prove things that you most certainly can't.

Quote:
Originally Posted by Half Man Half Wit View Post
This isn't. Because, again, we don't take ourselves as computing symbol patterns, we take ourselves to be computing sums, say. But it's only a sum once you fix an interpretation of the symbols.
Tell you what - I'm just going to call "computation + interpretation" "computation*". Without the asterisk it means the deterministic operations going on inside the box, which truck on completely unaffected by observation. With the asterisk it means those operations plus an observer's interpretation of the box's output.

By clearly distinguishing which (re)definition of the word we're talking about, we can hopefully cut down a bit on any bait and switch fallacies and sophistry.
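In code terms, the split looks something like this (a toy sketch, all names mine): computation is the box's raw state transformation, computation* is that transformation composed with a decoder some observer supplies.

Code:
def computation(state):
    # The box: a fixed transformation on raw symbols, no meaning attached.
    return tuple(reversed(state))

def computation_star(state, interpret):
    # computation* = the box's computation plus an observer's interpretation.
    return interpret(computation(state))

# Two observers decode the same lamp states in opposite ways.
as_binary   = lambda s: int(''.join('1' if v == 'H' else '0' for v in s), 2)
as_inverted = lambda s: int(''.join('0' if v == 'H' else '1' for v in s), 2)

print(computation_star(('H', 'L', 'L'), as_binary))    # one observer reads 1
print(computation_star(('H', 'L', 'L'), as_inverted))  # another reads 6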

Quote:
Originally Posted by Half Man Half Wit View Post
That's a nice attempt at a bit of good old revisionism, but I'll just note that this:

Was something you explicitly disagreed with.
Yep, that's the part you got confused about. The internal wiring is utterly critical to which computation is taking place. It determines it! Sure, there are other ways to wire it that result in the same output mapping, but there are others that don't, and there are others that do but don't have the same internal properties.

I mean, we are still talking about consciousness, right? If you talk to somebody through an intercom, or if we play back a recording of what you said later to a different person, both you and the recording produce the same output, but the internal behavior is different. Contrary to stupid counterexamples, the recording doesn't become sentient just because it sounds the same as you for a little while.

Quote:
Originally Posted by Half Man Half Wit View Post
The calculation reacts, of course, ultimately to voltage values, and it reacts the way you've told it to; and that it reacts differently to different voltages is really no surprise (but then that's again the distinction between the 'WOW'/'MOM' example and the 'gift' example you seem to have trouble with). It's you who's interpreting these voltage values as 0 and 1.
Nonsense - the program explicitly recognizes that 1 (or 0) means "invalid" and reacts differently, carrying out its error handling routine. It's very explicitly the error handling routine, and it remains the error handling routine even if somebody loftily says "I refuse to recognize that the computer exists as anything other than a pile of unrelated particles, and thus refuse to recognize that the error handler, or the program, or the computer it's running on, even exist". Lofty person can say anything they like, but the program knows differently and doesn't give a crap what they think.

To put things in the terms we're using, the program is the computation, and its interpretation of its 1s and 0s is its computation* about its previous computations. The lofty person's silly interpretation is a different computation*. The lofty person's computation* doesn't change, disprove, or disintegrate-in-a-puff-of-logic the computation* carried out by the computation.

Quote:
Originally Posted by Half Man Half Wit View Post
Anyway. I'll be leaving for vacation today, so I probably won't be back to this thread for a while. But I think that, by now, we're pretty clear on what the fundamental problem is. The general stance is, against my claims, that the box I've proposed really only implements one computation, which is given by its physical evolution, and that my functions f and f' are somehow irrelevant or hallucinatory embellishments of that physical evolution.
The box only implements one computation. Computation*s are produced by the observer on their own time when they look at the box, and there could be a different one for every observer.

Quote:
Originally Posted by Half Man Half Wit View Post
So those who claim that computations can be uniquely instantiated physically have a chance, here, to prove that claim: simply describe a machine that successfully computes f, in the same way that my box computes whatever you take it to compute. If nothing else, that will at least suffice to clarify the notion of implementation you hold to be the right one.

So that's my challenge (in particular to begbert2 and wolfpup, but everyone can play): describe a machine to instantiate f. Else, if you can't, try to explain how and why we take ourselves to compute f, if we don't have such a machine. In that case, you're stuck having to explain how we do so: either by computation---then, why can't that computation be done by the box?---or not---in which case, computationalism is false anyway.
Just to be clear, every time you say "computation" or "compute" in the above, you're talking about computation*, not computation. Which is to say you're talking about the behavior of the box plus the interpretation of that behavior by an observer.

Obviously, computation* depends on the observer, so it's impossible to create a box that does the job of the observer. That's like creating a delicious cake that makes somebody happy about its deliciousness when there's nobody around.

...unless the cake eats itself.
...unless the box observes itself.

I'm not going to bother scrolling up and rereading your specific function f and figuring out what you want f to specifically mean, but I vaguely remember it involved the observer interpreting the lights as integers. Integers, of course, can be even or odd.

So suppose your box went through its computation to light up the lights - and then didn't stop there. It then took which lights it had lit up and did another computation on them to determine whether the number was odd or even, and stored that result in an internal log (with 1=even and 0=odd, because I'm an ass). This computation would of course rely on the box interpreting its own output in a particular way - and the way it uses to do that interpretation is f. I believe that this interpretation of the outputs meets your cockeyed definition of a computation*, so this is a box that produces/employs computation* f.
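Here's a quick Python sketch of that self-observing box (the internals are invented, since I didn't scroll up): it lights the lamps, applies its own fixed lamps-to-integer interpretation f, and logs the parity with 1=even, 0=odd.

Code:
def light_lamps(x, y):
    # Stand-in for the box's internal computation: add the inputs, then
    # light three lamps with the bits of the sum, most significant first.
    total = x + y
    return [(total >> i) & 1 for i in (2, 1, 0)]

def f(lamps):
    # The box's *own* interpretation of its lamps as a binary integer.
    value = 0
    for bit in lamps:
        value = value * 2 + bit
    return value

log = []
lamps = light_lamps(2, 3)
log.append(1 if f(lamps) % 2 == 0 else 0)  # 1=even, 0=odd
print(lamps, f(lamps), log)                # [1, 0, 1] 5 [0]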

Now, is this going to stop somebody else from wandering by and interpreting the lights differently? Nope! But it doesn't really matter. Them doing so isn't going to cause the box to disappear in a puff of logic.


Oh, and have a good vacation! I like vacations. Vacations are awesome. So enjoy your vacation!
  #291  
Old 05-30-2019, 12:59 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by RaftPeople View Post
And how exactly do you do that when you are trying to create consciousness by using a different physical medium than the original?

That is the key point: which aspects of the brain's transformations cause consciousness, and how, or even whether, that can be mapped into 1's and 0's?
When a person dies their cognition stops, right? That means that cognition isn't an inherent property of the mass of the brain simply existing and being in that configuration, because a person can die and leave the brain intact. This means that cognition, like life, is something a thing does, not something it is.

Which means the medium doesn't matter. Only the behavior matters - and simulations can perfectly emulate behavior.

As for which aspects of the brain's behaviors cause consciousness, the whole point of emulating the whole freaking brain at the submolecular level is so we don't have to know that. Just as one speaks of throwing out the baby with the bathwater, whole-brain emulation is recreating the entire bath just to make sure you get the baby. If we actually knew which operations created consciousness we could do it way easier and more efficiently; replicating the whole brain's behavior is the brute force approach.


Oh, and do you wanna hear my silly theory of the day? Execution loops cause consciousness. All execution loops cause consciousness. Every computer program you've ever run creates and destroys one, dozens, or millions of threads of consciousness. It's a slaughter!

Of course, most of these threads of consciousness aren't given access to much in the way of memory, inputs, internal state - memories, senses, or thoughts. Terminating such a consciousness would be terminating something less than a bug, and the termination of course wouldn't inspire anything analogous to pain either.

Of course there are people who think there are moral implications to terminating consciousness. But I had a slice of ham with dinner last night, so I clearly have no problems with consciousnesses being created and subsequently terminated entirely for my own personal benefit. So running and terminating computer programs is no problem for me!

Quote:
Originally Posted by RaftPeople View Post
The problem is very different from the cake example, which duplicates the thing in the same medium with similarity down to a very low level.
Yeah - simulating is probably easier than physically assembling a real thing at the submolecular level.
  #292  
Old 05-30-2019, 05:33 PM
neutro is offline
Guest
 
Join Date: Apr 2019
Location: Redmond, WA
Posts: 109
Quote:
Originally Posted by begbert2 View Post
Oh, and do you wanna hear my silly theory of the day? Execution loops cause consciousness. All execution loops cause consciousness. Every computer program you've ever run creates and destroys one, dozens, or millions of threads of consciousness. It's a slaughter!
Isn't this what Chalmers' theories boil down to in the end?

I still haven't seen a convincing argument that uploading a brain is impossible. I also haven't seen anything convincing me it will happen anytime soon as we really seem to suck at this stuff so far.
  #293  
Old 05-30-2019, 05:53 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by neutro View Post
Isn't this what Chalmers' theories boil down to in the end?
I haven't read David Chalmers, but per his wiki page he seems to be a proponent of philosophical zombies, which would seem to be antithetical to the idea. It's not even slightly conceivable that a zombie could function without continuously processing and reacting to its input, which would require an execution loop by definition. If literally every execution loop causes consciousness, then by definition there can be no such thing as a philosophical zombie.

Also, as a side note, the wiki page claims that his argument for the possibility of philosophical zombies is that because they're conceivable they must be logically possible, which may be the stupidest thing I've ever read. I do hope he's being misrepresented because that makes him sound like an idiot.

Quote:
Originally Posted by neutro View Post
I still haven't seen a convincing argument that uploading a brain is impossible. I also haven't seen anything convincing me it will happen anytime soon as we really seem to suck at this stuff so far.
Most of the counterarguments seem to boil down to beliefs that cognition is literally magic, and thus isn't a behavior that can be replicated.
  #294  
Old 05-30-2019, 08:18 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Quote:
Originally Posted by begbert2 View Post
You're stating that if somebody were to look at the universe (or, say, a living human) and announce "I'm going to just think of you as a pile of particles", then the universe (or person) would cease to be anything other than a pile of particles.
That's not his argument at all.

His argument is that there is no clear link that can be made between computation and consciousness because computation is the manipulation of symbols independent of their interpretation or meaning, and there can be multiple interpretations for any computation.

If there is no clear link, then you can't state (or prove) that computation alone is sufficient to create consciousness.


He shows this with a box example that requires an interpreter to determine which function is being performed. He extends it to the human brain by pointing out that the thing doing the interpretation for consciousness must be external to the computation itself (per the definition of computation), which means it's not really just the computation that is responsible for consciousness - it also requires something to do some interpretation.
  #295  
Old 05-30-2019, 11:05 PM
eschereal's Avatar
eschereal is offline
Guest
 
Join Date: Aug 2012
Location: Frogstar World B
Posts: 16,371
Quote:
Originally Posted by begbert2 View Post
In a materialist system, if consciousness exists it exists within the point of view of the thing housing the consciousness. Which is to say, the consciousness has a 'seat of consciousness' and is independently aware of its existence and any surroundings its containing object gives it the senses to perceive.
In terms of the materialist position, though, would localization be a requirement? I mean, that is the nature of consciousness with which we are intimately familiar, but, if there is the possibility of machine consciousness, why should we assume that it would take a familiar form? Given that computing machines have rather different, perhaps more efficient methods of communication, one might guess that self-awareness could emerge, assuming emergence is how it develops, as a more diffuse, less singular property.

What if it were to emerge but we were not equipped to recognize it? And if machine consciousness is only compatible with a non-localized presence, would that ultimately make it impossible for us to perform transfers of our own consciousness to and from storage due to compatibility issues?
  #296  
Old 05-31-2019, 10:53 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by RaftPeople View Post
That's not his argument at all.

His argument is that there is no clear link that can be made between computation and consciousness because computation is the manipulation of symbols independent of their interpretation or meaning, and there can be multiple interpretations for any computation.

If there is no clear link, then you can't state (or prove) that computation alone is sufficient to create consciousness.


He shows this with a box example that requires an interpreter to determine which function is being performed. He extends it to the human brain by pointing out that the thing doing the interpretation for consciousness must be external to the computation itself (per the definition of computation), which means it's not really just the computation that is responsible for consciousness - it also requires something to do some interpretation.
His argument is indeed that by separating the computation device from the interpreter you can introduce ambiguity. From this ambiguity (introduced by a separate observer) you can somehow produce an internal contradiction somewhere that disproves something - according to his argument.

In actual fact, of course, when you have a computation observing itself (which is what we're talking about with brains), you do certainly have parts of the system interpreting the output of other parts. This is present in literally every computer program ever. The part where his argument collapses into rank, obvious stupidity is where he thinks that this need to establish interpretations causes a form of circularity that's any kind of logical problem.

I mean, yes, from one point of view it's 'circular' - the error checker expects 1 to mean 'error' because that's what the function outputs, and the function outputs 1s for errors because it expects error handlers to look for 1s and treat them as errors. However, this form of circularity isn't "turtles all the way down" circularity, it's "somebody has to pick something, we don't care who does, and everyone else will go along with it" 'circularity'.

It's actually pretty analogous to when people get together to hang out and start playing the "What do you want to do?" "I don't know, what do you want to do?" "I don't know, what do you want to do?" game. Eventually somebody picks something and everyone goes forward, and if everyone really doesn't care, the selection will be arbitrary - and then afterwards will be consistent for all the persons involved. The selection of interpretations to use about symbols in a system is arbitrary, but will be consistent thereafter. This allows meaning to be transferred.
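A toy version of that "somebody picks, everybody follows" setup (the flag value and the names are mine, and arbitrary - which is the point):

Code:
ERROR = 1  # could just as well have been 0; nothing forces the choice

def producer(ok):
    # Writes the flag under the agreed convention.
    return 0 if ok else ERROR

def consumer(flag):
    # Reads it back under the same convention - no regress required.
    return 'handle error' if flag == ERROR else 'proceed'

print(consumer(producer(ok=False)))  # 'handle error'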

If his argument held water, then the universe would implode or something when people played the "I don't know, what do you want to do" game.

Or when two people chose to speak to one another in English as opposed to Spanish.

Or when a dog owner trained their dog that the spoken phrase "sit" is an instruction for the dog to sit.

Or when he ran any computer program ever, including whichever one he posts his comments through.

Honestly, the most annoying part about his argument is the way the disproofs of it are ubiquitous and yet he hews to it so strongly. Well, that and how he seems to misunderstand half of what I say.
  #297  
Old 05-31-2019, 11:11 AM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by eschereal View Post
In terms of the materialist position, though, would localization be a requirement? I mean, that is the nature of consciousness with which we are intimately familiar, but, if there is the possibility of machine consciousness, why should we assume that it would take a familiar form? Given that computing machines have rather different, perhaps more efficient methods of communication, one might guess that self-awareness could emerge, assuming emergence is how it develops, as a more diffuse, less singular property.
I don't see why localization would be a requirement - though all the disparate parts are going to have to be in near-continuous, real-time communication with one another to keep doing their jobs as parts of the consciousness, creators of thoughts, holders of emotions, triggerers of reactions, and so on.

Quote:
Originally Posted by eschereal View Post
What if it were to emerge but we were not equipped to recognize it? And if machine consciousness is only compatible with a non-localized presence, would that ultimately make it impossible for us to perform transfers of our own consciousness to and from storage due to compatibility issues?
Well, my silly theory of the (yester)day was that consciousness was emerging all over the place and we're not recognizing it, so yeah. Also it seems highly probable that after true machine consciousness is demonstrated there will be a cadre of theists insisting they're all philosophical zombies because theism souls magic special. So yeah - identifying these puppies and convincing people that they shouldn't be murdered for fun ("I have to stop playing Halo?") is going to be a challenge.

However, I don't think it follows that a consciousness could only be created as a distributed entity - in all cases you could theoretically take all the disparate processors and memory stores and put them all in one room and call it non-distributed. (Unless the notion is that the hardware and memory requirements are just too big to put in one place, which I find unlikely. We have some pretty big server farms.)

As for compatibility issues, if all our machine consciousnesses have come into existence via random emergence it's less compatibility issues and more that we still don't know how to make a machine consciousness intentionally. Uploading minds definitely requires us to know how to do it on purpose.

Beyond that, though, I find it difficult to accept that there might be aspects to the human experience that are impossible to closely replicate in a machine mind once we've figured out how to make machine minds. Because that's all these things are, of course - close approximations. They're copies, not relocations of the original, and it's not actually necessary for them to function the same 'under the hood' as humans do (though that's the most brute-force way to do it). If you created a 'The Sims' character that had access to the full suite of memories, knowledge, opinions, and tendencies, and was also self aware and believed themself to be me, that's basically what you're going to get out of brain uploading. The Sim would see a whole new simulated world (full of deadly ladderless swimming pools), but they would remember being me and believe they were me.
  #298  
Old 05-31-2019, 12:55 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,688
Quote:
Originally Posted by begbert2 View Post
His argument is indeed that by separating the computation device from the interpreter you can introduce ambiguity. From this ambiguity (introduced by a separate observer) you can somehow produce an internal contradiction somewhere that disproves something - according to his argument.
He didn't separate the computation from the interpreter; it's the very basis of the definition. Even wolfpup's favorite guy Fodor says the same thing: computation is symbol manipulation independent of the interpretation or semantic properties of the symbols involved.

In other words, if you have a system that has symbols (for example 1's and 0's), and some machinery that performs a sequence of operations on them (like a Turing machine), that is computation.

From the perspective of the academics that spend their time working on these ideas, "computation" is just that symbol manipulation process without any regard to what specifically us humans might consider to be the meaning.

Here's an example:
Input: (1,0)

Steps to process the input:
If input=(0,0) then output=(1,0)
If input=(0,1) then output=(0,0)
If input=(1,0) then output=(0,0)
If input=(1,1) then output=(1,1)

Output: (0,0)

That is an example of a computation. Nowhere in that example did I explain why I wrote that computation, what the symbols represent, what is the meaning (to humans) of the computation and how it relates to anything. It's just pure computation, devoid of interpretation or meaning (like computation is by definition).
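In fact, that example drops straight into code; here's a direct Python transcription (nothing added beyond the table itself):

Code:
# Pure symbol manipulation: the table above, with no interpretation attached.
STEP = {(0, 0): (1, 0),
        (0, 1): (0, 0),
        (1, 0): (0, 0),
        (1, 1): (1, 1)}

print(STEP[(1, 0)])  # -> (0, 0), as in the example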



The problem that HMHW is pointing out is that when someone says "ya, sure, we can easily create consciousness on a computer, all we need to do is do the RIGHT computations."

If your computer is based on symbols of 1's and 0's and has some memory and a processor, then that sentence is effectively saying "there are certain sequences of 1's and 0's in the machine's memory or over time that create consciousness."

Which results in the pretty obvious responses:
1 - Which sequences?

2 - How do you figure out which sets of 1's and 0's cause consciousness and which ones don't?

3 - Given that any sequence of 1's and 0's could be interpreted in many different ways, are they all conscious or only the sets of 1's and 0's that we are currently looking at and stating to ourselves "I interpret this sequence as a brain simulation and not the simulation of a tornado"
  #299  
Old 05-31-2019, 01:31 PM
begbert2 is offline
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 12,869
Quote:
Originally Posted by RaftPeople View Post
He didn't separate the computation from the interpreter; it's the very basis of the definition. Even wolfpup's favorite guy Fodor says the same thing: computation is symbol manipulation independent of the interpretation or semantic properties of the symbols involved.

In other words, if you have a system that has symbols (for example 1's and 0's), and some machinery that performs a sequence of operations on them (like a Turing machine), that is computation.

From the perspective of the academics that spend their time working on these ideas, "computation" is just that symbol manipulation process without any regard to what specifically us humans might consider to be the meaning.
From the standpoint of a dude who actually works with these "calculations", you don't do a calculation for no reason. They're not abstract works of art; they're implemented with a purpose. And yes indeedy, that purpose means that there is an intent to their output. An intended interpretation, if you will.

This is especially true when you talk about a calculation that's part of a system that actually is purported to do something, like a brain or an artificial intelligence.

Sounds to me that this argument is sort of like saying "You can't actually be sure that cars have wheels on them, because the wheels are removable as demonstrated by cars up on blocks in people's yards. Therefore when you look at the cars driving down the highway there is no solid reason to believe that wheels are present."

Quote:
Originally Posted by RaftPeople View Post
The problem that HMHW is pointing out is that when someone says "ya, sure, we can easily create consciousness on a computer, all we need to do is do the RIGHT computations."

If your computer is based on symbols of 1's and 0's and has some memory and a processor, then that sentence is effectively saying "there are certain sequences of 1's and 0's in the machine's memory or over time that create consciousness."

Which results in the pretty obvious responses:
1 - Which sequences?

2 - How do you figure out which sets of 1's and 0's cause consciousness and which ones don't?
I feel it should be pointed out that "We don't know how to do it" is not a proof of "It's impossible to do".

And, as I've obliquely hinted at once or twice, the whole reason to brute force the problem by emulating the whole frikking brain is because if we do that we don't have to figure out how it works. Physical brains work; if you exactly duplicate their functionality with no errors or omissions then your copy will work too. The only reason your copy could possibly fail to work is if you failed to perfectly replicate some part of the functionality. In the discussion at hand that claimed missing part appears to be a theorized magical soul imparted by matter itself. For some odd reason I'm not buying that.

Quote:
Originally Posted by RaftPeople View Post
3 - Given that any sequence of 1's and 0's could be interpreted in many different ways, are they all conscious or only the sets of 1's and 0's that we are currently looking at and stating to ourselves "I interpret this sequence as a brain simulation and not the simulation of a tornado"
The thing to remember is that if one actually did have a simulated consciousness, then it's going to be doing the interpretation of its own internal data the way it wants to. It doesn't matter in the slightest if persons on the outside are unable to figure out how to read the data and follow what the consciousness is thinking, or even if they're unable to determine that consciousness is going on, because the consciousness itself is carrying on and doesn't care about outside opinions.

This is exactly equivalent to how if you were to get a printout of your computer's memory while it was running your browser you'd be hard pressed to deduce that it even was running a browser. And yet the browser runs just fine, because while all the 1s and 0s are meaningless to you, they're not meaningless to it - the program code is meaningful to the processor, and the stack and stored data are meaningful to the program code. The fact it's meaningless to you doesn't matter.

The only things that have to recognize the workings of a mind as being workings of a mind are the other workings within the same mind. Everyone else's opinions and interpretations are utterly irrelevant.
  #300  
Old 05-31-2019, 01:51 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,829
Quote:
Originally Posted by RaftPeople View Post
If your computer is based on symbols of 1's and 0's and has some memory and a processor, then that sentence is effectively saying "there are certain sequences of 1's and 0's in the machine's memory or over time that create consciousness."

Which results in the pretty obvious responses:
1 - Which sequences?

2 - How do you figure out which sets of 1's and 0's cause consciousness and which ones don't?

3 - Given that any sequence of 1's and 0's could be interpreted in many different ways, are they all conscious or only the sets of 1's and 0's that we are currently looking at and stating to ourselves "I interpret this sequence as a brain simulation and not the simulation of a tornado"
I have some equivalent questions.

1. Which sequences of 1s and 0s caused Watson to answer Jeopardy questions better than Ken Jennings or Brad Rutter?

2. How do you figure out which sets of 1s and 0s produced the best answers?

3. Given that any sequence of 1s and 0s could be interpreted in many different ways, are they all striving to produce really good Jeopardy answers, or only the sets of 1s and 0s that we are currently looking at and stating to ourselves "I interpret this sequence as a very good question-answerer and not the simulation of a tornado"

The thing is, you would never be able to locate such a sequence of 1s and 0s in the Watson hardware, and neither would anyone else. This is in part because of how massively distributed and complex it is, physically running on 2,880 POWER7 processor threads and 16 terabytes of RAM, and logically composed of a dozen or so major logical functions, each comprising thousands of distinct software components. It's also in part because none of those software components, nor any distinctly identifiable hardware component, is the "location" of this skill. It's the synergistic result of all of them working together, sometimes in sequence, sometimes in massive parallelism. And this is still a very simple system compared to the brain.

The moral of the story is that qualitative changes arise from computational complexity, otherwise known as emergent properties, and those properties are neither necessarily localized nor necessarily identifiable in the lower-level components -- they may exist only in the aggregate of the states and connections of the integrated system.