  #51  
Old 05-16-2019, 04:28 PM
begbert2 is online now
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,120
Quote:
Originally Posted by Half Man Half Wit View Post
You have the whole thing backwards. It is, ultimately, only the ability to interpret symbolic vessels that makes it possible to use any physical system to compute, or simulate, anything. So when you say that a system can be simulated, you're already appealing to mental capacities.

Let's leave out the middleman of computation, and talk about imagination instead. I can imagine robots, unicorns, trees, and even minds. Physical systems following physical rules---no problem there. By your argument, for a sufficiently powerful imagination, it should be possible to imagine an actual mind into existence.

So, well, imagination gives rise to minds! That's that, then. Except of course nobody's going to buy that: after all, I have just used a transparently mental capacity to explain the mental. That's of course a no-go; but that's exactly what computationalism does. That's the point of my above example that's being so studiously ignored.
(I probably shouldn't tell him what it feels like as a writer to have your characters refuse to go along with the plot you've planned out for them.)

It's patently obvious that that which happens in the imagination stays in the imagination. Similarly, your simulated person will have to have a simulated environment to run around in, or it will have nothing to interact with and probably go insane. I recommend simulating Vegas. What happens in simulated Vegas will stay in simulated Vegas - but that's real enough for the simulated Vegans within it.
  #52  
Old 05-16-2019, 04:30 PM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
You still haven't described why a human brain (which is a computer) can reflect upon itself, whereas a non-biological computer cannot.
Even if it turns out that electronic computers cannot support subjectivity (which is an unsupported assertion on your part) it should be possible to construct biological computers which can do it. But I doubt very much that will be necessary.

Please note as well that I do not actually think that mind uploading is a desirable thing- it could lead to a reduction in mental diversity if we could make copies of human minds, however imperfect.
  #53  
Old 05-16-2019, 04:30 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by Half Man Half Wit View Post
Well, I gave an argument demonstrating that computation is subjective, and hence, only fixed by interpreting a certain system as computing a certain function. If whatever does this interpreting is itself computational, then its computation needs another interpretive agency to be fixed, and so on, in an infinite regress; hence, whatever fixes computation can't itself be computational.

And there's no need for souls, or anything like that; anything non-material or non-physical. Computation is really concerned with structural properties: we can simulate something because we can instantiate the right sort of structural relationships within a computer. But relations imply something to bear them, something that actually stands in these relations; but that doesn't carry over to the simulation. After all, that's what makes simulations so useful: if they replicated every property of the thing simulated, they'd just be copies. A simulated tree and a tree aren't the same thing, and neither is a simulated mind and a mind.
I'm wondering if you're familiar with the computational theory of mind and the fact that it's currently considered to be a major foundation of modern cognitive science, although it has its detractors. If not, you might find the link interesting reading, or if you are, perhaps you can elaborate on your apparent view that it isn't possible.

Or perhaps I'm misunderstanding what you mean by "computational", but it's fairly well defined in theories of cognition. In the briefest possible nutshell, CTM proposes that many or most (though not necessarily all) of our cognitive processes are computational in the sense that they are syntactic operations on symbolic mental representations. As such, these processes would have the important property of multiple realizability -- the logical operations could be just as well realized on a digital computer.
  #54  
Old 05-16-2019, 04:32 PM
begbert2 is online now
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,120
Quote:
Originally Posted by eburacum45 View Post
You still haven't described why a human brain (which is a computer) can reflect upon itself, whereas a non-biological computer cannot.
Non-biological computers "reflect on themselves" all the time. It just doesn't seem interesting since their self-diagnostic and operating systems are pretty simplistic, straightforward, and not prone to flights of fancy because they're not designed that way.

(And designing them some other way would probably be pretty complicated.)
  #55  
Old 05-16-2019, 04:36 PM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
By your argument, for a sufficiently powerful imagination, it should be possible to imagine an actual mind into existence.
Why do you think this is impossible? Human minds have many subpersonalities; it seems entirely possible to imagine a fully-rounded alternate personality that shares your head but is quite distinct. They call it Dissociative Identity Disorder nowadays.
https://en.wikipedia.org/wiki/Dissoc...ntity_disorder
  #56  
Old 05-16-2019, 04:41 PM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Originally Posted by begbert2 View Post
Non-biological computers "reflect on themselves" all the time. It just doesn't seem interesting since their self-diagnostic and operating systems are pretty simplistic, straightforward, and not prone to flights of fancy because they're not designed that way.
(And designing them some other way would probably be pretty complicated.)
That's why I don't expect that artificially sentient computers will become practical for a very long time. In fact they might not ever be built, since there are probably much more useful systems that will be built first.
  #57  
Old 05-16-2019, 04:44 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Ok, so is my post above, where I gave an example of how computation is subject to interpretation, of how there's no sense in claiming 'system x computes function y' in an objective sense, just invisible to everybody?
  #58  
Old 05-16-2019, 04:49 PM
begbert2 is online now
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,120
Quote:
Originally Posted by Half Man Half Wit View Post
Ok, so is my post above, where I gave an example of how computation is subject to interpretation, of how there's no sense in claiming 'system x computes function y' in an objective sense, just invisible to everybody?
I read it. It made little to no sense. I read it again. It continued to make little to no sense. I read your post responding to me which seemed to be trying to explain it again, and it still made little sense - though I responded to what vague sort of sense I seemed to detect within it.

Why don't you pretend I'm stupid and restate your notion of "computational" in really simple, clear, and straightforward terms? We can worry about how it relates to the brain later; just get clarity on your definition of the term first.
  #59  
Old 05-16-2019, 04:52 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by wolfpup View Post
I'm wondering if you're familiar with the computational theory of mind and the fact that it's currently considered to be a major foundation of modern cognitive science, although it has its detractors. If not, you might find the link interesting reading, or if you are, perhaps you can elaborate on your apparent view that it isn't possible.



Or perhaps I'm misunderstanding what you mean by "computational", but it's fairly well defined in theories of cognition. In the briefest possible nutshell, CTM proposes that many or most (though not necessarily all) of our cognitive processes are computational in the sense that they are syntactic operations on symbolic mental representations. As such, these processes would have the important property of multiple realizability -- the logical operations could be just as well realized on a digital computer.
The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it's just that the rest of the world is a bit slower to catch up with the second one.

Last edited by Half Man Half Wit; 05-16-2019 at 04:53 PM.
  #60  
Old 05-16-2019, 04:52 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by begbert2 View Post
What happens in simulated Vegas will stay in simulated Vegas - but that's real enough for the simulated Vegans within it.
What is it about the simulated residents of Vegas that makes them so averse to eating simulated animal products?
  #61  
Old 05-16-2019, 04:55 PM
begbert2 is online now
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,120
Quote:
Originally Posted by wolfpup View Post
What is it about the simulated residents of Vegas that makes them so averse to eating simulated animal products?
It's because they don't have simulated animal products in the simulated vicinity of the simulated star Vega, from whence they hail. It's a simulated unfamiliarity thing.

ETA: It's perhaps relevant to note that the unfamiliarity is only simulated in the sense that it's real unfamiliarity felt by simulated beings. Simulated things can have real properties - a picture of a red barn is really red; it's not some kind of inferior simulated red.

Last edited by begbert2; 05-16-2019 at 04:59 PM.
  #62  
Old 05-16-2019, 05:06 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by Half Man Half Wit View Post
The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it's just that the rest of the world is a bit slower to catch up with the second one.
I'm sorry, but that's just not correct. Yes, Putnam did a complete about-face on the question of functionalism as the basis of CTM, but that was a long time ago, and since then many cognitive science researchers (the late Jerry Fodor among the more prominent) have made great strides in establishing CTM as a foundational basis for understanding cognition. To be clear, Fodor never thought it was a complete explanation for all cognitive phenomena -- and conflicting evidence persists about their computational basis -- but he correctly thought it would become an important one.

ETA: And Putnam wasn't really a founder of CTM, he was just a very influential early proponent. Many others carried it forward, then and now.

Last edited by wolfpup; 05-16-2019 at 05:09 PM.
  #63  
Old 05-16-2019, 05:20 PM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Originally Posted by Half Man Half Wit View Post
The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts; it's just that the rest of the world is a bit slower to catch up with the second one.
Yeah, Putnam has been refuted by Chalmers, and Chalmers has been refuted by Dennett, and so on; these concepts are still in their infancy, so don't declare the computational theory dead yet.
  #64  
Old 05-16-2019, 05:42 PM
Chronos's Avatar
Chronos is offline
Charter Member
Moderator
 
Join Date: Jan 2000
Location: The Land of Cleves
Posts: 84,754
Quote:
Quoth Half Man Half Wit:

By your argument, for a sufficiently powerful imagination, it should be possible to imagine an actual mind into existence.
Well, yes, of course. But it would take a more powerful mind than the one simulated to contain that imagination. Just like my phone, a relatively powerful computer, can "imagine into existence" an HP48 calculator, a much simpler computer, by running an emulator for it.
  #65  
Old 05-16-2019, 05:56 PM
begbert2 is online now
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,120
Quote:
Originally Posted by Chronos View Post
Well, yes, of course. But it would take a more powerful mind than the one simulated to contain that imagination. Just like my phone, a relatively powerful computer, can "imagine into existence" an HP48 calculator, a much simpler computer, by running an emulator for it.
And modern PCs can run CCS64, which emulates an old Commodore 64 computer to extremely high precision - a precision which is achieved in part because it includes emulation of some of the inner workings of the physical chips. The money quote is "99.9% VIC 6566/6567/6569. All imaginable graphics modes and effect should work. The emulation of VIC is pixel exact and considers all strange effects, both known and unknown, as it emulates the inner workings of the VIC chip." Bolding mine - if you emulate a physical system closely enough you get all side effects of it for free, even if you don't know what they are or how they work.
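To illustrate the point with a toy example of my own (nothing to do with CCS64 or the VIC specifically): if you model the mechanism rather than the spec, the quirks show up automatically.

Code:
# A minimal sketch: a "datasheet" model of an add instruction vs. a
# mechanism-level model of the 8-bit register that actually holds the result.
def spec_add(a, b):
    # what the documentation says: the chip adds two numbers
    return a + b

def emulated_add(a, b):
    # what emulating the 8-bit register does: any carry out of bit 7 is lost
    return (a + b) & 0xFF

print(spec_add(200, 100))      # 300
print(emulated_add(200, 100))  # 44 -- the wraparound quirk comes "for free"

Nobody has to tell the emulator about the wraparound behavior; it falls out of modeling the register itself.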

If the human brain is a physical system, then it can in theory be emulated well enough to produce all its behaviors and effects, including consciousness and the mind. Which quite obviously means that the human mind is simulatable, under the physicalist model.
  #66  
Old 05-16-2019, 07:28 PM
Kent Clark's Avatar
Kent Clark is offline
Charter Member
 
Join Date: Apr 1999
Posts: 26,548
I believe we have some kind of life force (call it a soul if you want.)

Whether that force is mortal or immortal, we can't download it. Maybe some day, in some future, we'll figure out how to do it, but I'm not optimistic about that.

Last edited by Kent Clark; 05-16-2019 at 07:29 PM.
  #67  
Old 05-16-2019, 09:35 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by Half Man Half Wit View Post
Ok, so is my post above, where I gave an example of how computation is subject to interpretation, of how there's no sense in claiming 'system x computes function y' in an objective sense, just invisible to everybody?
One thought is that the same mental state/conscious state maps to many sets of input (sensory+previous state).

First, let's make sure I understood your post:
It's possible to imagine a brain in the exact same physical state but due to an entirely different set of external conditions. There could be an alien on planet X where everything is purple and the wind is always blowing, but the internal brain state that maps to his current sensory inputs (and previous mental state) just so happens to be the exact same state as my brain as I type this message.

You conclude that conscious states can't be due to the computation (state) because my typing this message must feel different than the alien on purple planet X where the wind is blowing, but are we sure they must feel different?


If a creature from just one of those environments compared how it felt to be in the two different environments, it would detect differences; but if we compared the internal state of each relative to its respective environment, the result could be the same absolute state but a different relative state.
  #68  
Old 05-16-2019, 09:57 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by wolfpup View Post
I'm wondering if you're familiar with the computational theory of mind and the fact that it's currently considered to be a major foundation of modern cognitive science, although it has its detractors. If not, you might find the link interesting reading, or if you are, perhaps you can elaborate on your apparent view that it isn't possible.

Or perhaps I'm misunderstanding what you mean by "computational", but it's fairly well defined in theories of cognition. In the briefest possible nutshell, CTM proposes that many or most (though not necessarily all) of our cognitive processes are computational in the sense that they are syntactic operations on symbolic mental representations. As such, these processes would have the important property of multiple realizability -- the logical operations could be just as well realized on a digital computer.

If you read the CTM page you linked, you will see that the writer uses the term computation much more broadly than you do (e.g. "and neural network computation"). I don't think it helps the conversation to insist on a narrow definition tied to syntactic operations on symbolic representations. And as that page points out, the term symbol isn't even well defined.
  #69  
Old 05-16-2019, 10:31 PM
smiling bandit is offline
Guest
 
Join Date: Nov 2001
Posts: 16,953
I've thought about this particular sci-fi notion quite a bit, and I consider it utter bunkum. There are several interrelated reasons for this, but they all come out to one result. You may be able to create an AI, quite possibly even one that is programmed to believe it is/was a human. But that basically has a null value; it doesn't mean anything, and what you have won't behave or react as that person would have.

A human being's consciousness is embodied in the vibrant, if often frail, flesh. The human is all of that flesh, including the brain but not limited to it. Its nerves, muscles, stomach and so forth are all an integral part of the greater whole being. Sometimes, humans being humans, we sacrifice one part for the rest, but we are diminished thereby. But, ignoring that, I am deeply skeptical that the human brain can be simply replicated in a binary format. Hypothetically, an extremely powerful computational device could store all the data necessary for the human at a given point in time, though I find even that questionable.

However, even giving all of that, you would not, in fact, have the person there. The machine, however good or accurate, is not the human being. Its existence would be completely separable from the actual human life. Whether it is a "good" or "bad" thing wouldn't be precisely relevant here; it just wouldn't be the same thing as the human being. It would be as if I had a real gold bar placed on a desk, and a perfect digital image of that gold bar in a computer running in Second Life or whatever. The image might be good, or it might be bad, or it might be indifferent. It is not, however, an actual gold bar. It isn't a gold bar even if someone in the game values it exactly as much as a real gold bar. The two things are qualitatively different.

Or, to put it another way, I see no moral or philosophical difference between that and, say, cloning. You could clone yourself, creating a genetically identical being. Then you could, say, employ a team of psychologists, acting coaches, and educators to try and give it identical mental characteristics to yourself. However, the clone isn't you; his or her life is qualitatively different. The clone isn't necessarily good or bad per se, but you're going to a lot of trouble to try and arbitrarily force it to be the same as you. But the real living creature is naturally something quite different, even though it might share the same code.
  #70  
Old 05-16-2019, 10:38 PM
Mijin's Avatar
Mijin is offline
Guest
 
Join Date: Feb 2006
Location: Shanghai
Posts: 9,064
Quote:
Originally Posted by eburacum45 View Post
This is an example of Mijin's statement that both sides accuse each other of believing in souls. If there is anything non-computational in the human mind, what is that something? A soul? Something else? Perhaps we could call it wibble. So a human brain is a computer with wibble. How do you know that we can't make wibble and add it to the uploaded computational representation of a human mind?
Two things here.

Firstly, there is a distinction between a machine capable of information-processing, and a computer. All computers are machines but not all machines are computers (or not only computers).
You can be a 100% Physicalist yet believe that the mind cannot be duplicated in software and/or that such a mind would not be conscious.

But secondly, and more importantly, we just don't have a good model of what consciousness is yet. That's the real answer to the OP.
Knowing that the mind is a property of the brain, and mental states correspond to physical states is great and all, but still leaves us a long way short of the kind of model that could answer questions like the OP's directly.

Personally my WAG is that subjective experience will become a huge area of science someday. And an expert in "Subjective Mechanics" will laugh at how crude our understanding was, and that we could only see the two possibilities: consciousness is copied or moved.
But it's just my personal feeling. Regardless, in the meantime the answer is that we don't know.

Last edited by Mijin; 05-16-2019 at 10:40 PM.
  #71  
Old 05-16-2019, 11:22 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by wolfpup View Post
I'm sorry, but that's just not correct. Yes, Putnam did a complete about-face on the question of functionalism as the basis of CTM, but that was a long time ago, and since then many cognitive science researchers (the late Jerry Fodor among the more prominent) have made great strides in establishing CTM as a foundational basis for understanding cognition. To be clear, Fodor never thought it was a complete explanation for all cognitive phenomena -- and conflicting evidence persists about their computational basis -- but he correctly thought it would become an important one.

ETA: And Putnam wasn't really a founder of CTM, he was just a very influential early proponent. Many others carried it forward, then and now.
Quote:
Originally Posted by eburacum45 View Post
Yeah, Putnam has been refuted by Chalmers, and Chalmers has been refuted by Dennett, and so on; these concepts are still in their infancy, so don't declare the computational theory dead yet.
Neither Fodor's semantic account nor Chalmers' counterfactuals really succeeds in dispelling the issue raised by Putnam, though. Fodor, at least to my reading, was always somewhat cagey regarding precisely how it is that the symbolic vehicles manipulated in computation acquire their semantic content, but even if there is such an account, I don't see how it could result in one computation being the 'correct' one to associate with a physical system, given that it's perfectly possible to use that same system for different computations.

So the conclusion as originally posed by Putnam was too strong---not every physical system implements every finite state automaton, but if you can use a physical system to implement one computation, you can use it on the same basis to implement another. That's fatal to a computational theory of mind; if what computation a system implements is not an objective fact about that system, then what mind a brain implements is not an objective fact about that brain. But then, who or what 'uses' my brain to compute my mind (and only my mind)?

Quote:
Originally Posted by begbert2 View Post
I read it. It made little to no sense. I read it again. It continued to make little to no sense. I read your post responding to me which seemed to be trying to explain it again, and it still made little sense - though I responded to what vague sort of sense I seemed to detect within it.

Why don't you pretend I'm stupid and restate your notion of "computational" in really simple, clear, and straightforward terms? We can worry about how it relates to the brain later, just get clarity on your definition of the term first.
Computation is nothing but using a physical system to implement a computable (partial recursive) function. That is, I have an input x, and want to know the value of some f(x) for a computable f, and use manipulations on a physical system (entering x, pushing 'start', say) to obtain knowledge about f(x).

This is equivalent (assuming a weak form of Church-Turing) to a definition using Turing machines, or lambda calculus, or algorithms. What's more, we can limit ourselves to computation over finite binary strings, since that's all a modern computer does. In this case, it's straightforward to show that the same physical system can be used to implement different computations (see below).

Quote:
Originally Posted by RaftPeople View Post
First, let's make sure I understood your post:
It's possible to imagine a brain in the exact same physical state but due to an entirely different set of external conditions. There could be an alien on planet X where everything is purple and the wind is always blowing, but the internal brain state that maps to his current sensory inputs (and previous mental state) just so happens to be the exact same state as my brain as I type this message.
No, the argument is that a given physical system S can't be said to exclusively implement some computation C, because while an agent A could use S to compute C, an agent B could use S to implement a different C'. Hence, my example: A uses S to implement binary addition, while another agent may use it to implement the function you get when you flip all the bit values, and yet another may interpret the value of the input and/or output bits differently, and so on.
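To make that concrete, here's a rough Python sketch (the device, the lamp encodings, and the particular second function are my own toy stand-ins, chosen purely for illustration):

Code:
# A toy "physical device": a fixed mapping from two input lamp rows to an
# output lamp row. It is constructed so that, read as lit=1 / unlit=0, it
# behaves as a 2-bit adder. (Everything here is an illustrative stand-in.)
def device(lamps_a, lamps_b):
    # the lambda is only used to construct the fixed physical mapping
    lit_as_1 = lambda lamps: int(''.join('1' if l else '0' for l in lamps), 2)
    total = lit_as_1(lamps_a) + lit_as_1(lamps_b)
    return tuple(bit == '1' for bit in format(total, '03b'))

# Interpretation A: a lit lamp means 1.
def read_A(lamps):
    return int(''.join('1' if l else '0' for l in lamps), 2)

# Interpretation B: a lit lamp means 0 (all bit values flipped).
def read_B(lamps):
    return int(''.join('0' if l else '1' for l in lamps), 2)

for a in range(4):
    for b in range(4):
        la = tuple(bit == '1' for bit in format(a, '02b'))
        lb = tuple(bit == '1' for bit in format(b, '02b'))
        out = device(la, lb)
        # One and the same physical run, two different computed functions:
        assert read_A(out) == read_A(la) + read_A(lb)      # "it's an adder"
        assert read_B(out) == read_B(la) + read_B(lb) + 1  # "no, it's x + y + 1"

Same lamps, same physics, same run; whether the device 'computes addition' or 'computes x + y + 1' depends entirely on the reading brought to it.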

This is a completely general conclusion. 'Binary addition' may be taken as a stand-in for the computation that generates a mind; thus, while one might hold that a certain device implements a mind, this is, in fact, dependent on how the system is interpreted. But if whether a device implements a mind is interpretation-dependent, then the CTM doesn't work: either, the process of interpretation is itself computational---then, it needs to be interpreted further, leading to an infinite regress. Or, the process is not computable: then, the CTM obviously doesn't capture everything about the mind, as it is capable of interpreting physical systems as computing, as executing operations on symbolic representations, and hence, possesses a non-computational capacity.

The CTM is very intuitively seductive: it seems to be capable of building a bridge between the physical (the computer) and the abstract (the computation), and it's a fair bet that some such bridge is needed to explain the mind. The mistake, however, is to assume that the way this bridge is built is in any way easier to explain than how it's built for the mind; indeed, the only way that holds up to simple examples of associating distinct computations with one and the same physical system (even concurrently) is to involve mind, and more accurately, interpretation. Thus, things turn out the other way around: mind is needed to explain how physical systems connect to the abstract computations; but then, computation can't be what underlies mind.
  #72  
Old 05-17-2019, 12:01 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by Half Man Half Wit View Post
No, the argument is that a given physical system S can't be said to exclusively implement some computation C, because while an agent A could use S to compute C, an agent B could use S to implement a different C'. Hence, my example: A uses S to implement binary addition, while another agent may use it to implement the function you get when you flip all the bit values, and yet another may interpret the value of the input and/or output bits differently, and so on.
Hmm. Maybe I'm not picturing it correctly, but that sounds the same.

In your example it's an electronic circuit whose states simultaneously support multiple different function results.

In my example it's two brains whose states simultaneously support multiple (apparently) different conscious states.

What am I misunderstanding?
  #73  
Old 05-17-2019, 02:33 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Originally Posted by Mijin View Post
Personally my WAG is that subjective experience will become a huge area of science someday. And an expert in "Subjective Mechanics" will laugh at how crude our understanding was, and that we could only see the two possibilities: consciousness is copied or moved.
But it's just my personal feeling. Regardless, in the meantime the answer is that we don't know.
I agree with this entirely. 'Subjective Mechanics' or 'Sentience Wrangling' is likely to become a major field of study in the centuries to come, and will produce results we can barely imagine. There will be 'wibble' in our cars, airplanes and spacecraft, not to mention our smartphones or two-way wrist radios, or whatever. And 'wibble' will come in a myriad of types and flavours.

But even if this all comes to pass, I still wouldn't guarantee that the uploading of consciousness will ever be viable or desirable.
  #74  
Old 05-17-2019, 06:42 AM
Mijin's Avatar
Mijin is offline
Guest
 
Join Date: Feb 2006
Location: Shanghai
Posts: 9,064
Quote:
Originally Posted by eburacum45 View Post
But even if this all comes to pass, I still wouldn't guarantee that the uploading of consciousness will ever be viable or desirable.
I agree with this.
There's no reason to assume at this point that minds can be transferred to another substrate, nor any telling what exactly that might entail.

Quote:
Originally Posted by eburacum45 View Post
I agree with this entirely. 'Subjective Mechanics' or 'Sentience Wrangling' is likely to become a major field of study in the centuries to come, and will produce results we can barely imagine. There will be 'wibble' in our cars, airplanes and spacecraft, not to mention our smartphones or two-way wrist radios, or whatever. And 'wibble' will come in a myriad of types and flavours.
I can't tell whether this part is sarcasm. But I have not used 'wibble' nor suggested that consciousness is some kind of app.

I come from a neuroscience background, and all I am saying is that I think that at some point we will have a descriptive model of consciousness sufficient to unambiguously answer questions about what subjective experience is and how it arises.
And my gut feeling is that this model will require some kind of conceptual jump; that the problem will only become tractable when we frame it in a new way. You can absolutely disagree with this feeling; it's not based on anything other than the observation that questions on consciousness seem like very different questions to the kind that science has so far managed to tackle well.
But no, I'm not positing a soul, or magic.

Last edited by Mijin; 05-17-2019 at 06:47 AM.
  #75  
Old 05-17-2019, 08:23 AM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by Half Man Half Wit View Post
Neither Fodor's semantic account nor Chalmers' counterfactuals really succeeds in dispelling the issue raised by Putnam, though. Fodor, at least to my reading, was always somewhat cagey regarding precisely how it is that the symbolic vehicles manipulated in computation acquire their semantic content, but even if there is such an account, I don't see how it could result in one computation being the 'correct' one to associate with a physical system, given that it's perfectly possible to use that same system for different computations.

So the conclusion as originally posed by Putnam was too strong---not every physical system implements every finite state automaton, but if you can use a physical system to implement one computation, you can use it on the same basis to implement another. That's fatal to a computational theory of mind; if what computation a system implements is not an objective fact about that system, then what mind a brain implements is not an objective fact about that brain. But then, who or what 'uses' my brain to compute my mind (and only my mind)?
Two points.

1. Fodor was hardly being "cagey". That symbolic operands possess semantic qualities is the very essence of what computation **is**. The semantic attributes of symbolic representations are endowed by the very processes that manipulate them. For example, a computer doing image processing endows visual semantics to generic symbols that are otherwise just meaningless ordinary bits and bytes. This is neither mysterious nor magical.
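A toy illustration (my own made-up example, not any real image pipeline): the same bytes take on different semantics purely from the operations applied to them.

Code:
# The same four bytes, given different semantics by different processes.
data = bytes([72, 105, 33, 10])

as_text = data.decode('ascii')              # text semantics: "Hi!\n"
as_number = int.from_bytes(data, 'little')  # numeric semantics: one integer
as_pixels = [b / 255 for b in data]         # image semantics: grayscale values

print(as_text, as_number, as_pixels)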

2. The fact that "if you can use a physical system to implement one computation, you can use it on the same basis to implement another" is in no way "fatal" to the computational theory of mind. In fact, it's intrinsic to it. It's closely related to the central CTM principle of multiple realizability; in the same way that a computational system can implement multiple kinds of computation, the computations in one such physical system can be identically realized in another.

Somewhat related to #2 is the silicon chip replacement thought experiment, sort of a cognitive-science version of the Ship of Theseus identity problem. We should in theory be able to replace an individual neuron in a human brain with a silicon microchip that replicates all its functions. If the prosthetic works as intended, the individual would experience no change in perception or consciousness. Now continue the process until more and more neurons are replaced with microchips, until the entire brain is comprised solely of silicon microchips. At what point, if any, does it stop being an actual brain? At what point would the individual perceive any difference?
  #76  
Old 05-17-2019, 11:26 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by RaftPeople View Post
Hmm. Maybe I'm not picturing it correctly, but that sounds the same.

In your example it's an electronic circuit whose states simultaneously support multiple different function results.

In my example it's two brains whose states simultaneously support multiple (apparently) different conscious states.

What am I misunderstanding?
It may be that I'm misunderstanding you, but the situation you described was one in which the same physical state (of, presumably, identical brains, otherwise one could hardly speak of the same physical state) is produced by different causal factors, presumably leading to the same mental states---which doesn't seem plausible, to me: if the alien's brain is the same kind as ours, it should also react to the same kind of causal influences in the same way, and thus, in an environment in which it is subject to stimuli that would put our brain into a state of perceiving purple things and howling winds, should likewise be in a state of perceiving purple things and howling winds.

Or do you mean that the alien's sensory apparatus is such that the signals it sends are transduced such as to be equivalent, in the case of being subject to purple-stuff-and-howling-wind stimuli, to the signals our senses send in the case of being subject to composing-posts-on-message-boards stimuli?

That, I'd say, is a different question, roughly analogous to the case of a brain in a vat.

What I mean is, rather, a system that's in the same physical state, while supporting different semantic interpretations. That is, a light that's on being both interpretable as signaling '1' and '0', and thus, yielding to different computations being implemented depending on how it is, in fact, interpreted.

The interpretation being a key component here: nobody interprets brains as implementing minds (the notion would lead to circularity).

Quote:
Originally Posted by wolfpup View Post
Two points.

1. Fodor was hardly being "cagey". That symbolic operands possess semantic qualities is the very essence of what computation **is**. The semantic attributes of symbolic representations are endowed by the very processes that manipulate them. For example, a computer doing image processing endows visual semantics to generic symbols that are otherwise just meaningless ordinary bits and bytes. This is neither mysterious nor magical.
It's at least mysterious in so far as nobody knows how physical systems can come to represent even bits and bytes, much less visual semantics. This is something usually glossed over by proponents of symbolic approaches to computation, but it's in fact the key question.

That a computer does image processing is not a fact about the computer, i. e. the physical system, but rather, about how its symbolic vehicles are implemented. That's shown by the fact that you can interpret them differently---if you were to claim, for instance, that the system I've proposed 'endows arithmetic semantics to generic symbols' by implementing binary addition, I can point you to an interpretation that's as justified as yours, and yet, doesn't have anything to do with addition.

Quote:
2. The fact that "if you can use a physical system to implement one computation, you can use it on the same basis to implement another" is in no way "fatal" to the computational theory of mind. In fact, it's intrinsic to it. It's closely related to the central CTM principle of multiple realizability; in the same way that a computational system can implement multiple kinds of computation, the computations in one such physical system can be identically realized in another.
It's the opposite of multiple realizability, in fact (related to Newman's objection to Russell's causal theory of perception). The problem isn't that there's no unique computation that can be associated to a system, but rather, that associating any computation whatever to a physical system requires an act of interpretation and thus, the exercise of a mental capacity. Thus, the attempt to explain mind in terms of computation simply collapses in on itself, as one has to appeal to mind to explain computation, first.

Quote:
Somewhat related to #2 is the silicon chip replacement thought experiment, sort of a cognitive-science version of the Ship of Theseus identity problem. We should in theory be able to replace an individual neuron in a human brain with a silicon microchip that replicates all its functions. If the prosthetic works as intended, the individual would experience no change in perception or consciousness. Now continue the process until more and more neurons are replaced with microchips, until the entire brain is comprised solely of silicon microchips. At what point, if any, does it stop being an actual brain? At what point would the individual perceive any difference?
I have no objections to a silicon brain being conscious. This isn't in tension with the fact that consciousness isn't computational.
  #77  
Old 05-17-2019, 11:28 AM
thorny locust's Avatar
thorny locust is offline
Guest
 
Join Date: Apr 2019
Location: Upstate New York
Posts: 1,163
Quote:
Originally Posted by Mijin View Post
[. . . ]what normally happens is half the responses in a thread like this will be "Obviously the mind has been downloaded and anyone that thinks otherwise must think there is some magical soul or something" and the other half will be sure that "Obviously minds cannot be "moved", and anyone that thinks otherwise must think there is some magical soul or something".
Quote:
Originally Posted by Alessan View Post
Not true - plenty of people will say, "Prove there isn't a magical soul or something."
At least two people have posted in this thread (posts 46 and 69) arguing that the conscious mind is not the whole of the self, on a purely physical basis with no need for any reference to a "magical soul".
  #78  
Old 05-17-2019, 11:35 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by Half Man Half Wit View Post
That a computer does image processing is not a fact about the computer, i. e. the physical system, but rather, about how its symbolic vehicles are implemented.
That should've been 'interpreted'.

Last edited by Half Man Half Wit; 05-17-2019 at 11:35 AM.
  #79  
Old 05-17-2019, 11:43 AM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by Half Man Half Wit View Post
It may be that I'm misunderstanding you, but the situation you described was one in which the same physical state (of, presumably, identical brains, otherwise one could hardly speak of the same physical state) is produced by different causal factors, presumably leading to the same mental states---which doesn't seem plausible, to me: if the alien's brain is the same kind as ours, it should also react to the same kind of causal influences in the same way, and thus, in an environment in which it is subject to stimuli that would put our brain into a state of perceiving purple things and howling winds, should likewise be in a state of perceiving purple things and howling winds.
You are understanding what I was thinking. It seems like the only difference between your circuit example and my brain example is one of scale.

The detection of light signals may be different at step 1, but the signal forwarded from that step loses any connection to the color (as you note in your point, the interpretation is relative). Thus, it could be that many different environments result in the same internal set of signals (beyond the initial detection).

Last edited by RaftPeople; 05-17-2019 at 11:43 AM.
  #80  
Old 05-17-2019, 11:49 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Originally Posted by Half Man Half Wit View Post
The problem isn't that there's no unique computation that can be associated to a system, but rather, that associating any computation whatever to a physical system requires an act of interpretation and thus, the exercise of a mental capacity.
I'm not interested in all the other interpretations, thank you very much; just the one that makes the pixels light up on my screen. Or are you suggesting that my laptop is conscious? It certainly doesn't rely on my consciousness to 'interpret' which computation to choose - the design does that.
  #81  
Old 05-17-2019, 11:58 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by eburacum45 View Post
I'm not interested in all the other interpretations, thank you very much; just the one that makes the pixels light up on my screen. Or are you suggesting that my laptop is conscious? It certainly doesn't rely on my consciousness to 'interpret' which computation to choose - the design does that.
The computation is how the pixels on your screen are interpreted. Saying that an interpretation makes them light up is nonsensical. They're the lights on the device I proposed, which, if they're interpreted differently, lead to different computations being performed.
  #82  
Old 05-17-2019, 12:23 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by RaftPeople View Post
You are understanding what I was thinking. It seems like the only difference between your circuit example and my brain example is one of scale.

The detection of light signals may be different at step 1, but the signal forwarded from that step loses any connection to the color (as you note in your point, the interpretation is relative). Thus, it could be that many different environments result in the same internal set of signals (beyond the initial detection).
I don't think I get it, I'm afraid. In my example, different interpreters will be in different brain states, as they will have different beliefs---say, one believing that 'light on means 1', and another, 'light on means 0'. So I don't see the connection.
  #83  
Old 05-17-2019, 03:23 PM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Quote:
Originally Posted by Half Man Half Wit View Post
The computation is how the pixels on your screen are interpreted. Saying that an interpretation makes them light up is nonsensical. They're the lights on the device I proposed, which, if they're interpreted differently, lead to different computations being performed.
But the same lights still light up, so the other interpretations are irrelevant.
Putnam said 'every ordinary open system realizes every abstract finite automaton'. I pointed out that this is irrelevant in a laptop computer, because only one set of symbols lights up. By extension it is irrelevant inside your head too.
  #84  
Old 05-17-2019, 03:27 PM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by eburacum45 View Post
But the same lights still light up, so the other interpretations are irrelevant.

Putnam said 'every ordinary open system realizes every abstract finite automaton'. I pointed out that this is irrelevant in a laptop computer, because only one set of symbols lights up. By extension it is irrelevant inside your head too.
It's not a matter of which lights light up, but of what those lights are interpreted as. If you interpret a light being lit as meaning 1 vs. meaning 0, what's being computed will differ.
  #85  
Old 05-17-2019, 03:50 PM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
This is true - the human brain/mind interface is remarkably unreliable - that's why people see so many UFOs.
Dennett's interpretation seems to describe this best - a leaky sieve that somehow produces works of genius.
  #86  
Old 05-17-2019, 03:51 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by Half Man Half Wit View Post
It's at least mysterious in so far as nobody knows how physical systems can come to represent even bits and bytes, much less visual systems. This is something usually glossed over by proponents of symbolic approaches to computation, but it's in fact the key question.

That a computer does image processing is not a fact about the computer, i. e. the physical system, but rather, about how its symbolic vehicles are implemented. That's shown by the fact that you can interpret them differently---if you were to claim, for instance, that the system I've proposed 'endows arithmetic semantics to generic symbols' by implementing binary addition, I can point you to an interpretation that's as justified as yours, and yet, doesn't have anything to do with addition.
No, proponents of CTM aren't "glossing over" anything. The computational proposition is solely about whether or not cognitive processes are essentially symbolic and hence subject to multiple realizability, say, on digital computers. For example, the essence of the debate about how the mind processes mental images is whether it's symbolic-representational in just this way, or whether it's something that must involve the visual cortex -- say, producing some kind of analog signal that is then reprocessed through the visual cortex. There is lots of experimental evidence for the symbolic-representational interpretation which supports the computational model, primarily based on very significant empirically observed fundamental differences between mental images and visual ones.

FTR, there are also papers reporting conflicting results that have led to a continuing debate. Thus there are researchers who argue against the symbolic-representational model of mental image processing, Stephen Kosslyn being among the more prominent. My fair and balanced position on this matter is that these individuals are morons.

Quote:
Originally Posted by Half Man Half Wit View Post
It's the opposite of multiple realizability, in fact (related to Newman's objection to Russell's causal theory of perception). The problem isn't that there's no unique computation that can be associated to a system, but rather, that associating any computation whatever to a physical system requires an act of interpretation and thus, the exercise of a mental capacity. Thus, the attempt to explain mind in terms of computation simply collapses in on itself, as one has to appeal to mind to explain computation, first.
This seems to me rather incoherent, but perhaps I'm not understanding it. It sounds a lot like the homunculus fallacy.

Quote:
Originally Posted by Half Man Half Wit View Post
I have no objections to a silicon brain being conscious. This isn't in tension with the fact that consciousness isn't computational.
No, the whole intent of the silicon chip replacement thought experiment is that the brain ultimately becomes comprised of nothing but computational components. If your argument is that something more profound has happened, well, I would agree that something very profound has happened that is not present in the individual computational modules, but that "something" is called "emergent properties of complexity".
  #87  
Old 05-17-2019, 05:23 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by wolfpup View Post
No, proponents of CTM aren't "glossing over" anything. The computational proposition is solely about whether or not cognitive processes are essentially symbolic and hence subject to multiple realizability, say, on digital computers.
Which cognitive processes?

What is a symbol?

(note: your linked page reminds us that the terms computation and symbol are not well defined).
  #88  
Old 05-17-2019, 05:57 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by RaftPeople View Post
Which cognitive processes?

What is a symbol?

(note: your linked page reminds us that the terms computation and symbol are not well defined).
A "symbol" is a token -- an abstract unit of information -- that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things -- the semantics -- is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.

"Which cognitive processes?" Probably many, but perhaps not all. Mental image processing is a frequently cited basis of discussion. Here the distinction is whether we remember images in the visual manner of a Polaroid photograph, where it has to be processed through the visual cortex, or whether we render them symbolically, like a JPEG file, and subsequently process them via what Fodor has called "the language of thought".

This excerpt from Fodor's The Mind Doesn't Work That Way might be of interest:
The cognitive science that started fifty years or so ago more or less explicitly had as its defining project to examine the theory—largely owing to Turing—that cognitive mental processes are operations defined on syntactically structured mental representations that are much like sentences. The proposal was to use the hypothesis that mental representations are language-like to explain certain pervasive and characteristic properties of cognitive states and processes; for example, that the former are productive and systematic, and that the latter are, by and large, truth preserving. Roughly, the systematicity and productivity of thought were supposed to trace back to the compositionality of mental representations, which in turn depends on the constituent structure of their syntax. The tendency of mental processes to preserve truth was to be explained by the hypothesis that they are computations, where, by stipulation a computation is a causal process that is syntactically driven.

I think that the attempt to explain the productivity and systematicity of mental states by appealing to the compositionality of mental representations has been something like an unmitigated success ...
  #89  
Old 05-17-2019, 06:02 PM
begbert2 is online now
Guest
 
Join Date: Jan 2003
Location: Idaho
Posts: 13,120
Quote:
Originally Posted by Half Man Half Wit View Post
It's not a matter of which lights light up, but of what those lights are interpreted as. If you interpret a light being lit as meaning 1 vs. meaning 0, what's being computed will differ.
So what? For a given brain with a given physical state, the way that the lights are interpreted is fixed - it's determined based on the cognitive and physical state of the brain and how all the dominoes in there are hitting each other. The fact that a different brain or the same brain in a different cognitive state might interpret things differently doesn't in the slightest imply that computation can't take place - and it doesn't imply that the computation/cognition/whateveryoucallit can't be copied or duplicated.

Seriously, I'm a computer programmer and calculations are context-sensitive all the time. When you click your mouse button that's the same action, the same event, but the way the computer reacts to that event varies wildly depending on the computer's state - where it thinks the mouse is, what programs it's running, how its internal state maps those programs' 'windows' to the clickable area, how the programs choose to react to mouse clicks. It's all wildly variable and all entirely programmable.
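A bare-bones sketch of what I mean (toy window names and handlers, no real GUI toolkit):

Code:
# The "same" click event, dispatched differently depending on program state.
windows = {
    'editor':  {'area': range(0, 50),   'on_click': lambda x: 'place cursor'},
    'browser': {'area': range(50, 100), 'on_click': lambda x: 'follow link'},
}

def handle_click(x):
    # which window currently owns that position decides how the click is handled
    for name, win in windows.items():
        if x in win['area']:
            return name, win['on_click'](x)
    return None, 'ignored'

print(handle_click(10))   # ('editor', 'place cursor')
print(handle_click(75))   # ('browser', 'follow link')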
  #90  
Old 05-17-2019, 06:31 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by wolfpup View Post
A "symbol" is a token -- an abstract unit of information -- that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things -- the semantics -- is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.
Because it's the theory of mind (which is based on the brain), let's try to make it more concrete: are these symbols like a bit in a computer?

A neuron firing once?
The rate a neuron fires over some time period?
The modification of synaptic activity by glial cells?

If so, is every component of the brain a symbol?

Last edited by RaftPeople; 05-17-2019 at 06:31 PM.
  #91  
Old 05-17-2019, 06:50 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
I think you're missing the point here. Cognitive science strives to provide a functional -- or, in computer science terms, an "architectural" -- rather than a neurophysiological account of cognition. The underlying biological minutiae are obviously important in many respects, but completely irrelevant at this level.
  #92  
Old 05-17-2019, 07:10 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
How would you know if the theory is correct or has any value if you don't connect it to reality?

How can you confirm whether the brain uses symbolic processing for a specific function if you can't map the elements of the theory to the brain?
  #93  
Old 05-18-2019, 02:46 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by wolfpup View Post
No, proponents of CTM aren't "glossing over" anything. The computational proposition is solely about whether or not cognitive processes are essentially symbolic and hence subject to multiple realizability, say, on digital computers.
Multiple realizability has nothing to do with symbols, but with functional properties (or states, or events). The functional property of 'being a pump' is multiply realizable---say, in a mechanical device, or a heart. The important notion is that the behavior, the ways the states of a system connect, must be invariant between different realizations.

Quote:
There is lots of experimental evidence for the symbolic-representational interpretation which supports the computational model, primarily based on very significant empirically observed fundamental differences between mental images and visual ones.
That you can use computation to model aspects of the brain's behavior doesn't entail that what the brain does is computation any more than that you can tell a story about how a brain does what it does entails that the brain's function is story-telling. It's a confusion of the map for the territory, like saying that because we can draw maps of some terrain, the terrain itself must just be some huge, extremely detailed map. But it's not: we can merely use one as a model of the other.

Quote:
This seems to me rather incoherent, but perhaps I'm not understanding it. It sounds a lot like the homunculus fallacy.
That's not a bad intuition. It exposes a similar problem for the computationalist view as the homunculus exposes for naive representationalist theories of vision---namely, a vicious regress, where you'd have to complete an infinite tower of interpretational agencies in order to fix what the system at the 'base level' computes.

Quote:
No, the whole intent of the silicon chip replacement thought experiment is that the brain ultimately becomes comprised of nothing but computational components.
I agree that that's the point the thought experiment seeks to build intuition for; it's just that it fails: you don't replace the neurons with computations, you replace them with machines. Again, think about the map/territory analogy: say you have two territories described by the same map; then you can replace bits of one by bits of the other and still have an isomorphic map describing the resulting mashup. But that doesn't tell you that maps therefore tell you all there is to know about the territory.

Quote:
If your argument is that something more profound has happened, well, I would agree that something very profound has happened that is not present in the individual computational modules, but that "something" is called "emergent properties of complexity".
Ah yes, here comes the usual gambit: we can't actually tell what happens, but we're pretty sure that if you smoosh just enough of it together, consciousness will just spark up somehow.

Quote:
Originally Posted by wolfpup View Post
A "symbol" is a token -- an abstract unit of information -- that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things -- the semantics -- is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.
Wouldn't it be nice if that were actually possible! But of course, syntax necessarily underdetermines semantics (and radically so). All that syntax gives us is a set of relations between symbols---rules of replacing them, and so on. But (as pointed out by Newman) a set of relations can't fix anything about the things standing in those relations other than how many there (minimally) need to be.
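Here's a toy Python illustration of just how badly syntax underdetermines semantics (my own made-up example, nothing more): one and the same set of symbol-shuffling rules is perfectly consistent with entirely different subject matters.

Code:
# Pure syntax: a lookup table of rewrite rules over two meaningless tokens.
rules = {("A", "A"): "A", ("A", "B"): "B",
         ("B", "A"): "B", ("B", "B"): "A"}

def run(x, y):
    return rules[(x, y)]

# Two incompatible readings, both perfectly consistent with the syntax:
as_parity = {"A": 0, "B": 1}     # read the tokens as parities: the rule is addition mod 2
as_sign = {"A": +1, "B": -1}     # read the tokens as signs: the rule is multiplication

x, y = "B", "B"
print(as_parity[run(x, y)])   # 0  -- odd plus odd is even
print(as_sign[run(x, y)])     # 1  -- minus times minus is plus

The relations between the tokens are identical under both readings; what the tokens are about is left completely open.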

However, you still haven't really engaged with the argument I made. I'll give a more fully worked out version below, and I'd appreciate it if you could tell me what you consider to be wrong with it. It's somewhat disconcerting to have proposed an argument for a position, and then, for nearly sixty posts, get told how wrong you are without anybody even bothering to consider the argument.

Quote:
Originally Posted by begbert2 View Post
So what? For a given brain with a given physical state, the way that the lights are interpreted is fixed - it's determined based on the cognitive and physical state of the brain and how all the dominoes in there are hitting each other. The fact that a different brain or the same brain in a different cognitive state might interpret things differently doesn't in the slightest imply that computation can't take place - and it doesn't imply that the computation/cognition/whateveryoucallit can't be copied or duplicated.
Well, that's not quite what I claimed (but I try to make the argument more clearly below). However, you're already conceding the most important element of my stance---that you need an external agency to fix what a system computes. Let's investigate where that leads.

Either the way the external agency fixes the computation is itself computational (say, taking as input a state of a physical system, and producing as output some abstract object corresponding to a computational state), or it's not. In the latter case, computationalism is patently false, so we can ignore that.

So suppose that a computation is performed in order to decide what the original system computes. Call that computation M. But then, as we had surmised, computations rely on some further agency fixing them to be definite. So, in order to ensure that (say) a brain computes M, which ensures that the original object computes whatever the owner of the brain considers it to compute, there must be some agency itself fixing that the brain computes M. Again, it can do so computationally, or not. Again, only the first case is of interest.

So suppose the further agency performs some computation M' in order to fix the brain's computing of M. But then, we need some further agency to fix that it does, in fact, compute M'. And, I hope, you now see the issue: if a computation depends on external facts to be fixed, these facts either have to be non-computational themselves, or we are led into an infinite regress. In either case, computationalism is false.

But I think there's still some confusion about the original argument I made (if there weren't, you'd think somebody among those convinced it's false would have pointed out its flaws in the sixty posts since).

So suppose you have a device, D, consisting of a box that has, on its front, four switches in a square array, and three lights. Picture it like this:

Code:
 -----------------------------
|                             |
|  (S11)(S12)                 |
|                (L1)(L2)(L3) |
|  (S21)(S22)                 |
|                             |
 -----------------------------
Here, S11 - S22 are the four switches, and L1-L3 are the three lights.

The switches can either be in the state 'up' or 'down', and the lights either be 'on' or 'off'. If you flip the switches, the lights change.

How do you figure out what the system computes? Well, you'd have to make a guess: say, you guess that 'up' means '1', 'down' means '0', 'on' means '1', and 'off' means '0'. Furthermore, you suppose that each of the rows of switches, as well as the row of lights, represents a binary number (S11 being the 2^1-valued bit, and S12 the 2^0-valued bit, and analogously for the others). Call the number represented by (S11, S12) x1, and the number represented by (S21, S22) x2. You then set out to discover what function f(x1, x2) is implemented by your device. So, you note down the behavior:

Code:
x1   x2   |   f(x1, x2)
-----------------------
0    0    |       0
0    1    |       1
0    2    |       2
0    3    |       3
1    0    |       1
1    1    |       2
1    2    |       3
1    3    |       4
2    0    |       2
2    1    |       3
2    2    |       4
2    3    |       5
3    0    |       3
3    1    |       4
3    2    |       5
3    3    |       6
Thus, you conclude that the system performs binary addition. You're justified in that, of course: if you didn't know what, say, the sum of 2 and 3 is, you could use the device to find out. This is exactly how we use computers to compute anything.

But of course, your interpretation is quite arbitrary. So now I tell you, no, you got it wrong: what it actually computes is the following:

Code:
x1   x2   |  f'(x1, x2)
-----------------------
0    0    |       0
0    2    |       4
0    1    |       2
0    3    |       6
2    0    |       4
2    2    |       2
2    1    |       6
2    3    |       1
1    0    |       2
1    2    |       6
1    1    |       1
1    3    |       5
3    0    |       6
3    2    |       1
3    1    |       5
3    3    |       3
Now, how on Earth do I reach that conclusion? Well, simple: I kept up the identification of 'up' and 'on' to mean '1' (and so on), but simply took the rightmost bit to represent the highest value (i.e., (L3) now represents 2^2, and likewise for the others). So, for instance, the switch state (S11 = 'up', S12 = 'down') is interpreted as (1, 0), which however represents 1*2^0 + 0*2^1 = 1, instead of 1*2^1 + 0*2^0 = 2.

I haven't changed anything about the device; merely how it's interpreted. That's sufficient: I can use the system to compute f'(x1, x2) just as well as you can use it to compute f(x1, x2).
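If it helps, here's the whole thing as a small Python sketch (illustrative only; I've simply hard-wired the device's switch-to-light behavior so that it matches the tables above). The physical behavior is one fixed mapping from switch states to light states; only the decoding differs:

Code:
# The device's physics: four switches in, three lights out. Fixed once and for all.
def device(s11, s12, s21, s22):
    total = (s11 * 2 + s12) + (s21 * 2 + s22)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)   # (L1, L2, L3)

# Interpretation 1: leftmost switch/light is the most significant bit.
def f(x1, x2):
    s11, s12 = (x1 >> 1) & 1, x1 & 1
    s21, s22 = (x2 >> 1) & 1, x2 & 1
    l1, l2, l3 = device(s11, s12, s21, s22)
    return l1 * 4 + l2 * 2 + l3

# Interpretation 2: rightmost switch/light is the most significant bit -- same device!
def f_prime(x1, x2):
    s11, s12 = x1 & 1, (x1 >> 1) & 1
    s21, s22 = x2 & 1, (x2 >> 1) & 1
    l1, l2, l3 = device(s11, s12, s21, s22)
    return l1 + l2 * 2 + l3 * 4

print(f(2, 3))        # 5 -- read as binary addition, per the first table
print(f_prime(2, 3))  # 1 -- same switches, same lights, per the second table

Nothing in device() changes between the two readings; only the conventions for encoding the inputs and decoding the lights do.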

This is a completely general conclusion: I can introduce changes of interpretation for any computational system you claim computes some function f to use it to compute a different f' in just the same manner.

Consequently, what a system computes isn't inherent to the system, but is only fixed upon interpreting it---taking certain of its physical states to have symbolic value, and fixing what the symbols mean.

If, thus, mind is due to computation, brains would have to be interpreted in the right way to produce minds. What mind a brain implements, and whether it implements a mind at all, is then not decided by the properties of the brain alone; it would have to be a relational property, not an intrinsic one.

That's a bullet I can see somebody bite, but it gets worse from there: for if how we fix the computation of the device D is itself computational, say, realized by some computation M, then our brains would have to be interpreted in the right way to compute M. But then, we are already knee deep in a vicious regress that never bottoms out.

Consequently, the only coherent way to fix computations is via non-computational means. (I should point out that I don't mean hypercomputation or the like, here: the same argument can be applied in such a case.) But then, the computational theory is right out of the window.
  #94  
Old 05-18-2019, 05:05 AM
eburacum45 is offline
Guest
 
Join Date: Feb 2003
Location: Old York
Posts: 2,868
Here's a page from the Stanford Encyclopedia about this problem.
https://plato.stanford.edu/entries/c...ysicalsystems/
Fascinating stuff, but as you can see, the debate has moved on a long way from Putnam's ideas. Certain computations are ontologically privileged, and they are the only ones we should be interested in.

And there is more.

If I have two laptops, and they both show the same symbols on the screen as a result of pressing the same keys on the keyboard, I don't care what route the computation has taken, so long as the answers are consistent. Perhaps some of the more complex routes to the end result cause more waste heat to be emitted by the processor. But if the process can be made more efficient, that is a benefit.
Similarly, an analog of a human mind on a computer might use a simpler method of computation than the biological substrate, but if the end result (the symbols on the laptop screen) is the same, then the mind can be said to be successfully modelled. The important factor is the end result of the computation, not the computation itself. If the simulation behaves in the same way as the original, then the simulation is successful.

Ah, you might say, only another human could determine whether the simulation was accurate or realistic; I don't think that is the case. A sufficiently well-programmed computer could observe the behaviours (the outputs) of the simulated mind, and compare those behaviours with the behaviours of the original. If the computerised monitoring system were sufficiently well programmed, it could detect differences in the simulation's behaviour much better than a human could. Researchers are already developing programs that do this sort of thing, to detect criminal and terrorist behaviour, for instance.
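(Purely as a sketch of what I mean, with made-up placeholder 'behaviours' standing in for the real thing -- a genuine comparison of minds would obviously be vastly more involved, but no human needs to sit in the loop:)

Code:
import random

def original(stimulus):
    return stimulus * 2 + 1      # placeholder behaviour of the 'original'

def simulation(stimulus):
    return stimulus * 2 + 1      # placeholder behaviour of the re-implementation

def behaviours_match(a, b, trials=1000):
    """Automated check: compare the two black boxes' outputs over sampled inputs."""
    for _ in range(trials):
        stimulus = random.randint(-10**6, 10**6)
        if a(stimulus) != b(stimulus):
            return False
    return True

print(behaviours_match(original, simulation))  # True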

In short, you don't need a human consciousness to observe the results of a computation in order to discriminate between a good simulation and a bad one, so I do not see the necessity for a non-computational element at any stage in the process. Given a few thousand years of technological development, we could all exist as programs monitoring each other's behaviour for signs of inconsistency. Another reason not to opt for downloading/uploading.
  #95  
Old 05-18-2019, 06:39 AM
Half Man Half Wit's Avatar
Half Man Half Wit is online now
Guest
 
Join Date: Jun 2007
Posts: 6,831
Quote:
Originally Posted by eburacum45 View Post
Here's a page from the Stanford Encyclopedia about this problem.
https://plato.stanford.edu/entries/c...ysicalsystems/
Fascinating stuff, but as you can see, the debate has moved on a long way from Putnam's ideas.
The SEP is a good first resource to get an overview about an unfamiliar topic. If you want to dive a little deeper into the matter, I'd suggest reading the review article by Godfrey-Smith about 'Triviality arguments against functionalism', which considers Putnam's early attack and more modern developments.

The entire section 3 of the SEP-article, by the way, is occupied with worries such as the one I'm presenting, so it's very much a current issue.

Quote:
Certain computations are ontologically privileged, and they are the only ones we should be interested in.
OK, so what makes a computation ontologically privileged? And which of the two I presented above is the right one? Or is it any of the many others that can be obtained in a similar manner?

Quote:
If I have two laptops, and they both show the same symbols on the screen as a result of pressing the same keys on the keyboard, I don't care what route the computation has taken, so long as the answers are consistent.
That's not the issue at all, though. The two laptops will show exactly the same symbols; the question is how these symbols are interpreted. Only there does what has been computed get fixed.

Really, you should try to work through the example I gave above, that will make it more clear.
  #96  
Old 05-18-2019, 12:58 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by Half Man Half Wit View Post
That's not the issue at all, though. The two laptops will show exactly the same symbols; the question is how these symbols are interpreted. Only there does what has been computed get fixed.
Although I believe I understand your example, that the same system can be said to compute multiple functions, I'm not certain about the conclusion.

Thoughts:
Your box example (and my brain example) are at their core just input-to-output mappings. We want to attach names to the set of mappings (e.g. binary addition), which introduces the issue of an external agent being required to choose the specific name for what is being computed.

This is where I'm not sure about your conclusion. I believe your conclusion is that consciousness can't be said to be created by a specific computation, because that very computation is also the computation used for function XYZ, just like your box computes multiple functions simultaneously because they share a mapping of input to output (if you map your problem's input to the input and output to the output correctly).

I believe this is the same argument you mentioned one other time, that if we map inputs and outputs properly then a rock can perform any computation. But in reality the computation just got pushed into the input and output mappings.
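To make that concrete with the box example (a toy Python sketch, my illustration only): the 'other' function f' is just addition wrapped in a bit-reversal re-encoding on the way in and on the way out, so the extra computation really does live in the mappings.

Code:
def reverse2(x):
    """Re-encode a 2-bit number when the bit order is flipped."""
    return ((x & 1) << 1) | ((x >> 1) & 1)

def reverse3(y):
    """Same idea for the 3-bit light value."""
    return ((y & 1) << 2) | (y & 2) | ((y >> 2) & 1)

def add(x1, x2):
    return x1 + x2           # what the box does under the first reading

def f_prime(x1, x2):
    # the 'second' function = the first one, plus re-encodings at the boundary
    return reverse3(add(reverse2(x1), reverse2(x2)))

print(f_prime(2, 3))  # 1, matching the second table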


So, in summary:
1 - The mapping from the function's input to the machinery's input, and from the machinery's output back to the function's output, has computation embedded in it that is external to your machine. You would need to consider the entire system. Or, if no mappings were required, then we just happen to have given multiple names to the same function.

2 - You state that the interpretation requires an external agent, but that is only to provide the additional computations embedded in the mappings into and out of your machine system. If we consider the entire system/function, is there still a need for an external agent? Isn't the external agent just giving a name to the function or validating that it's working?

3 - Even if we consider the entire system, there are still common computations that can serve many different purposes. In a beetle there could be a function X that takes 8 inputs and spits out 3 outputs and serves some larger process, and in a fish that exact same mapping could be applied in a different area of the brain serving a different larger purpose. Is it really a problem if the same conscious state can arise in many different environments (this is my alien purple world example)? The beetle and the fish share some mappings; why is consciousness so special that the mappings can't be shared in different environments?
  #97  
Old 05-18-2019, 01:46 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
Originally Posted by wolfpup View Post
For example, the essence of the debate about how the mind processes mental images is whether it's symbolic-representational in just this way, or whether it's something that must involve the visual cortex -- say, producing some kind of analog signal that is then reprocessed through the visual cortex. There is lots of experimental evidence for the symbolic-representational interpretation which supports the computational model, primarily based on very significant empirically observed fundamental differences between mental images and visual ones.
Point #1:
There has been a lot of research on this topic in the last 20 years. If you are truly interested in understanding how the brain handles mental imagery, you should read this research.

The evidence is pretty clear that the visual processing areas are activated for mental imagery in the same way that sensory images activate those areas. The stronger the mental image (as measured by tasks) the stronger the activation. In addition, tasks related to processing mental imagery have shown that the content of novel constructed images is used for higher level processing.

If you understand what the neuroscientists are finding out about the brain in general, it does make sense: the brain seems to process sensory input forward while also sending signals about expected future state backwards. It makes sense (not proven, but it seems logical) that the mental imagery function would piggyback on that same mechanism, thus efficiently making use of machinery that is already in place.

This is not to say that other forms of processing aren't also in use during mental imagery tasks, but your statement (and previous statements on this topic) don't reflect current research from many different researchers.



Point #2:
If you can't state how a symbol is represented in the brain and how it maps to neural processing, how can you state with 100% certainty (e.g. that people who disagree with your position are morons) that imagery is being processed symbolically?
  #98  
Old 05-18-2019, 02:12 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by Half Man Half Wit View Post
So suppose the further agency performs some computation M' in oder to fix the brain's computing of M. But then, we need some further agency to fix that it does, in fact, compute M'. And, I hope, you now see the issue: if a computation depends on external facts to be fixed, these facts either have to be non-computational themselves, or we are led to an infinite regression. In either case, computationalism is false.
Well, no, that conclusion is true only if you assume the need for the aforementioned agency, or interpreter, as a prerequisite for computation, a notion that I rejected from the beginning -- a notion that, if true, would undermine pretty much the whole of CTM and most of modern cognitive science along with it.

I read your example but I don't see it as supporting that notion in any way, let alone being a "completely general" conclusion. The problem with your example is that it's a kind of sleight-of-hand where you sneakily change the implied definition of the "computation" that the box is supposed to be performing. The box has only switches and lights. It knows nothing about numbers. So the "computation" it's doing is accurately described either by your first account (binary addition) or the second one, or any other that is consistent with the same switch and light patterns. It makes no difference. They are all exactly equivalent. The fact remains that the fundamental thing that the box is doing doesn't require an observer to interpret, and neither does any computational system. The difference with real computational systems, including the brain, is that there is a very rich set of semantics associated with their inputs and outputs which makes it essentially impossible to play the little game of switcheroo that you were engaging in.

Quote:
Originally Posted by Half Man Half Wit View Post
Quote:
If your argument is that something more profound has happened, well, I would agree that something very profound has happened that is not present in the individual computational modules, but that "something" is called "emergent properties of complexity".
Ah yes, here comes the usual gambit: we can't actually tell what happens, but we're pretty sure that if you smoosh just enough of it together, consciousness will just spark up somehow.
FTR, I don't claim to have solved the problem of consciousness. However, as you well know, emergent properties are a real thing, and if one is hesitant to say "that's why we have consciousness", we can at least say that emergent properties are a very good candidate explanation for attributes like this, which appear to exist on a continuum across different intelligent species to an extent that empirically appears related to the level of intelligence. They are a particularly good candidate in view of the fact that there is not even remotely any other plausible explanation, other than "mystical soul" or "magic".
  #99  
Old 05-18-2019, 02:26 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 10,956
Quote:
Originally Posted by RaftPeople View Post
Point #1:
There has been a lot of research on this topic in the last 20 years. If you are truly interested in understanding how the brain handles mental imagery, you should read this research.
I have, and that's why I hold the position I do. But thanks for the suggestion.
Quote:
Originally Posted by RaftPeople View Post
Point #2:
If you can't state how a symbol is represented in the brain and how it maps to neural processing, how can you state with such 100% certainty (e.g. people that disagree with your position are morons) that imagery is being processed symbolically?
Because that level of biological minutiae is irrelevant to a functional description of cognition, as I already said. I can be 100% certain of what my computer will do when I type a particular command, without having any understanding of how its logic gates are implemented.

BTW, the "moron" comment was intended to be tongue-in-cheek. I thought it was obvious.
  #100  
Old 05-18-2019, 02:49 PM
RaftPeople is offline
Guest
 
Join Date: Jan 2003
Location: 7-Eleven
Posts: 6,705
Quote:
BTW, the "moron" comment was intended to be tongue-in-cheek. I thought it was obvious.
Understood, but it still indicates the level of certainty you have in your position; that was my point, not concern about the word itself.


Quote:
Originally Posted by wolfpup View Post
Because that level of biological minutiae is irrelevant to a functional description of cognition, as I already said. I can be 100% certain of what my computer will do when I type a particular command, without having any understanding of how its logic gates are implemented.
Then how did you previously link your functional description of cognition to what is NOT going on inside the biology (not the visual area, no analog signals)? You seem to want to make statements sometimes about what is going on in the biology, and then at other times you want to state that it's irrelevant.

Help me understand your position, when is the biology relevant for understanding human cognition and when is it not relevant?

Isn't it putting the cart before the horse to decide in advance that the machinery is doing X without understanding how the machine works?

How can CTM ever make a prediction about the system if you can't ground it in reality?

What predictions does CTM make that can be used to determine if mental imagery is symbolic or not?

Again, how can you have a theory that doesn't even have a concrete understanding of what is a symbol and what isn't a symbol?