#101
Quote:
CTM is absolutely grounded in reality based on extensive experimental results. You can't make blanket statements like "understanding how the machine works" when there is a very deep hierarchy of different levels of implementation. Regarding CTM and mental imagery, CTM makes a whole host of predictions that have turned out to be accurate, such as with perceptions of optical illusions, or rotating objects in mental images, all of which provide evidence that the image is being reconstructed in the manner of a computer rather than "projected" and viewed by an internal mind's eye. I gave you a good definition of what is meant by a "symbol" in the context of computational theory and even a quote from Fodor's book on the subject. |
#102
Quote:
I forgot to mention in the first sentence of that post, for the benefit of other readers, that HMHW was basing his no-CTM argument on the homunculus fallacy: the position that if a computational system requires an interpreting observer to be meaningful, then if the mind were computational it would need such an observer, too, and so would that observer, and so on in an infinite regress. But this is not an issue since the premise is false, which is a good thing, because otherwise it would overturn much of cognitive science.
#103
Quote:
Quote:
Quote:
So, symbolically, if f is binary addition, and f' is the other function defined above, we can define two further computations I and O such that f'(x) = O(f(I(x))). You see, you haven't made any headway on the problem of implementation---much the opposite: where before, you couldn't decide whether f or f' is the computation performed by D, now you also have to decide whether a suitable system implements O and I! But of course, for any system you claim does that, I can cook up an interpretation such that it computes some other function. Quote:
For what could the 'computation' be, such that it can equally well be regarded as f and f'? After all, both are completely different, if considered as (partial recursive) functions. Writing down algorithms computing either, they would likewise come out totally different. They're implemented by different Turing machines, and so on. On any of the usual notions of computation, thus, they'd come out squarely different, and similar only inasmuch as they have the same domain and codomain.

In fact, we're seeing an echo of Newman's famous objection here: if we're willing to consider these two functions to be the same 'computation', then a 'computation' is just a specification of a domain and a codomain, as we can transform each of the functions defined over them into one another by means of an example such as the one I gave above. So 'computation' would simply be a specification of possible inputs and outputs, without any further notion of which inputs get mapped to what outputs---which of course goes contrary to every notion of what a computation is, as it's exactly which inputs map to what outputs that usually interests us. But that's really getting ahead of ourselves a bit.

To satisfy your contention, we'd have to find what's left over once we remove any mapping to inputs and outputs---what remains of the computation once we stipulate that f and f' should be 'the same'. The answer is, of course, not much: just flipped switches and blinking lights. Because if we strip away what individuates the two computations, all that we're left with---all that we can be left with---is just the physical state of the system. But if that's the case, then what we call 'computation' is just the same as the system's physical evolution, i.e. the set of states it traverses. Then, of course, nothing remains of computationalism (that distinguishes it from identity physicalism). Then, you'd have to say that a particular pattern of switches and lights is identical to a 'computation', and, by extension, a mental state.

So, if you want f and f' to just be 'different names for the same computation', computationalism loses everything that makes it a distinct theory of the mind, and collapses onto identity physicalism, whose central (and, IMO, untenable) claim is just this: that a given pattern of switches and lights just is a mental experience. Quote:
And of course, the answer is no: whatever that extra bit of brain tissue does, all I can know of it is some activation pattern; and all I can do with that is, again, interpret it in some way. And different interpretations will give rise to different computations.

It wouldn't even help to involve the whole agent in the computation. Because even if that were to give a definite computation as a result, the original conclusion would still hold: we need to appeal to an external agent to fix a computation, and hence, can't use computation to explain the agent's capabilities. But moreover, what would happen in such a case? I give the agent inputs, perhaps printed on a card, and receive outputs likewise. The agent consults the device, interpreting its inputs and outputs for me. So now the thing just implements whatever function the agent takes it to implement, right? But that's only true for the agent. Me, I send in symbols, and receive symbols; but it's not a given that I interpret them the same way the agent does. I give them a card onto which a circle is printed, which they interpret as '0', but by which I, using a different language or alphabet, meant '1'. So this strategy is twice hopeless. Quote:
If so, then again, what you're proposing isn't computationalism, but perhaps some variant of behaviorism or, again, identity physicalism; or maybe an epiphenomenalist notion, where consciousness isn't causally relevant to our behavior, but is just 'along for the ride'. Neither of them sits well with computationalist ideas---either, we again have a collapse of the notion of computation onto the mere behavior of a system, or what's being computed simply doesn't matter. Quote:
Also, none of this threatens the possibility or utility of computational modeling. This is again just confusing the map for the territory. That you can use an orrery to model the solar system doesn't in any way either imply or require that the solar system is made of wires and gears, and likewise, that you can model (aspects of) the brain computationally doesn't imply that the brain is a computer. Quote:
So if you're willing to go that far to 'defend' computationalism, you end up losing what makes it distinct as a theory of the mind. Quote:
Quote:
Last edited by Half Man Half Wit; 05-19-2019 at 05:04 AM. |
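For readers who want the f/f' point in runnable form, here is a minimal Python sketch. The table below is only a hypothetical stand-in for the box (the actual device is specified earlier in the thread); the point is just that one and the same physical table yields different functions under different labellings, and that the two readings are related exactly by f'(x) = O(f(I(x))):

Code:
from itertools import product

UP, DOWN, ON, OFF = "UP", "DOWN", "ON", "OFF"

# Fixed 'physics': two switches in, two lamps out (a half adder in disguise).
PHYSICS = {
    (DOWN, DOWN): (OFF, OFF),
    (DOWN, UP):   (OFF, ON),
    (UP,   DOWN): (OFF, ON),
    (UP,   UP):   (ON,  OFF),
}

def reading(table, switch_means, lamp_means):
    """The function the table computes under one labelling of its states."""
    def computed(x, y):
        lamps = table[(switch_means[x], switch_means[y])]
        return tuple(lamp_means[lamp] for lamp in lamps)
    return computed

# Reading 1: UP means 1, ON means 1 -> the table computes f, addition of the
# two input bits, output as (carry, sum).
f = reading(PHYSICS, {0: DOWN, 1: UP}, {OFF: 0, ON: 1})
# Reading 2: UP means 0, ON means 0 -> the very same table computes f'.
f_prime = reading(PHYSICS, {0: UP, 1: DOWN}, {OFF: 1, ON: 0})

for x, y in product((0, 1), repeat=2):
    print((x, y), "f:", f(x, y), "f':", f_prime(x, y))

# The two readings differ only by the interpretation maps I and O:
I = lambda x, y: (1 - x, 1 - y)          # re-encode the inputs
O = lambda c, s: (1 - c, 1 - s)          # re-decode the outputs
assert all(f_prime(x, y) == O(*f(*I(x, y))) for x, y in product((0, 1), repeat=2))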
#104
Quote:
All the other hypothetical computations are just noise, nonsense. Hidden in the digits of pi are digits that describe exactly the content of next week's Game of Thrones; that doesn't mean I am going to stare at a circle to try to find this data. (Perhaps I might be better off, the way things are going). Nor am I going to stare at Searle's Wall to find the computation going on inside my own head, or yours. |
#105
They all produce the same pattern of pixels. That's the point.
#106
Quote:
It's actually a bit of a hole in mathematics: the only numbers we know to be normal are ones that were specifically constructed to be normal. We don't have a good method for telling whether a given irrational number, such as pi, is normal.
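To make the gap concrete: you can tally digit frequencies of pi out to any finite depth and they will look uniform, but no finite tally ever amounts to a proof of normality. A minimal sketch, assuming Python with the mpmath library available (the digit count N is arbitrary):

Code:
from collections import Counter
from mpmath import mp

# Tally the first ~N decimal digits of pi. Uniform-looking counts are
# consistent with base-10 normality but prove nothing: normality is a claim
# about every block of digits, in every base, in the infinite limit.
N = 10_000
mp.dps = N                      # working precision, in decimal digits
digits = str(mp.pi)[2:N]        # drop the leading "3." and keep the rest
counts = Counter(digits)

for d in "0123456789":
    print(d, counts[d])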
#107
Sigh! I think we're now talking past each other, but still, I forge ahead ...
ETA: I've included my original replies for the sake of clarity. Quote:
As for "identity physicalism", if your intent is to disparage the mind-brain identity proposition, you may as well remove the "just" in front of it. This is, in my view, one of the central tenets of the computational theory of mind. It's a feature, not a bug! Quote:
The utility of such a system, of course, relies on a user (an observer), and moreover, an observer who speaks English. But that in no way changes the objective nature of the computation itself, nor is it in any way relevant to its computational specification. Quote:
Last edited by wolfpup; 05-19-2019 at 10:04 AM. |
#108
Quote:
Let's pretend there is an alternate world with two differences:

1 - Light from the sun has a different mix of intensities at different wavelengths, so to us things would look different.
2 - The rods, cones and retinal ganglion cells (RGCs) in the alien all have shifted sensitivities, so that their activation under various conditions matches our cells under comparable conditions (sunny day, cloudy day, etc.).

If we assume everything else about the environment and our alien is the same, then, despite differences in the external environment, the internal brain states would be the same. Assuming you agree with the hypothetical, would there be any difference in the conscious experience? It seems like the answer must be no, because there is no other signal available to create a different conscious experience.
#109
If you assume all elements of a hypothetical are identical then the elements are hypothetically identical. Or, at least they will be until you assume them to be otherwise.
#110
Quote:
#111
Quote:
When you state it would overturn much of cognitive science, that sounds like something has actually been conclusively figured out. Last edited by RaftPeople; 05-19-2019 at 11:51 AM. |
#112
Quote:
#113
Quote:
I believe HMHW was stating objections to the idea that computation can give rise to consciousness. You seem to object to that, but at the same time you agree that CTM hasn't made any concrete progress in describing consciousness (nor has any other theory). I don't want to put words in HMHW's mouth, but I don't believe he was rejecting the idea that our brains probably perform symbolic processing in some cases, but rather that it's problematic to try to describe how those same processes can give rise to consciousness. |
#114
Quote:
After all, the important question is merely: are you able to use my device to compute the sum of two inputs? I hope you'll agree that the answer is yes. And furthermore, are you able to use my device to compute f'? And again, the answer is yes. So just as that, as actually computing stuff, it's perfectly clear that the device can be interpreted as computing those functions. Any notion of computation that, for instance, claims that one doesn't compute the sum of two numbers with it, is just a bad one, and really, can only be arrived at by highly motivated reasoning. The device computes the sum (and f') in the same way your pocket calculator does, in the same way your PC does; and that way is all there is to computation.

You're trying to throw more computation at this problem to make it go away, but this can only compound it. I've given the demonstration above: no matter what further computations you add, you run into the same problem, multiplied. The only thing you lose is clarity, which allows you to imagine that maybe, once you can't really quite clearly conceive of everything the system does anymore, something's just gonna happen to fix everything. But the above constitutes proof that this isn't so. If you just pipe the output of one computation into another, that doesn't do anything to fix its interpretation. Adding another won't help, either. And so on. While it's easy to eventually get to systems too unwieldy to clearly imagine, it follows from the simple example, by induction, that no further complications are going to help at all---indeed, they can only make matters worse, by introducing more degrees of interpretational freedom.

You consider this to be a trivial example, but that's its main virtue: it shows the problem clearly, unlike the case where you just end up appealing to complexity, claiming that something we can't think of will come along to fix everything. But it's completely clear from the example that any system that can be considered to implement one computation can, on equal grounds, be considered to implement many others. This is fatal to computationalism, and your refusal to engage the actual argument isn't going to change that. Anyway, there's one way to demonstrate that you're right: show which computation the device D implements without any need for interpretation. Quote:
Quote:
To answer this objection, the idea was developed that mental states are identical not to physical, but to functional properties, and, on computationalism, particularly computational functional properties. If computationalism thus collapses onto identity physicalism---which it does, if you strip away the distinction between f and f'---computationalism fails to save physicalism from the threat of multiple realizability. Quote:
Quote:
But that's not what I'm getting at. Rather, I want to point out that what computation we consider the system to perform is something over and above its stimulus-response behavior; it's purely an interpretational gloss over its physical evolution. And that's all that computation comes down to. As such, it can never provide the footing for mental capacities, as it is itself dependent on them. |
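For concreteness, here is a minimal Python sketch of the 'piping the output into another computation doesn't fix the interpretation' point. The tables are hypothetical stand-ins (the actual box is described earlier in the thread): D is the original device, E is an arbitrary second stage wired to D's lamps, and the composite is still readable in more than one way at its outer boundary:

Code:
# D: two switches -> two lamps; E: a second stage that reads D's lamps and
# drives two further lamps. Both tables are hypothetical stand-ins.
UP, DOWN, ON, OFF = "UP", "DOWN", "ON", "OFF"

D = {
    (DOWN, DOWN): (OFF, OFF),
    (DOWN, UP):   (OFF, ON),
    (UP,   DOWN): (OFF, ON),
    (UP,   UP):   (ON,  OFF),
}
E = {(a, b): (b, a) for a in (ON, OFF) for b in (ON, OFF)}  # just swaps its inputs

# The two-stage physical system: switches in, E's lamps out.
composite = {switches: E[D[switches]] for switches in D}

def interpret(table, switch_means, lamp_means):
    """The function the table 'computes' under one labelling of its states."""
    return {(x, y): tuple(lamp_means[lamp]
                          for lamp in table[(switch_means[x], switch_means[y])])
            for x in (0, 1) for y in (0, 1)}

g1 = interpret(composite, {0: DOWN, 1: UP}, {OFF: 0, ON: 1})
g2 = interpret(composite, {0: UP, 1: DOWN}, {OFF: 1, ON: 0})
print(g1)   # one candidate computation of the two-stage system
print(g2)   # a different one, from the same physics under a different reading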
#115
Quote:
#116
Quote:
#117
Stepping back and looking at the problem in general, I have two conflicting thoughts:
1 - Every single theory proposed so far seems to have fatal flaws, and that is after significant effort by really smart people. This kind of points towards the answer not being based on logic and math.
2 - But there have been math problems that took centuries to solve; maybe this is one of them.
#118
Quote:
That it's possible to have multiple interpretations of the results of your box with switches and lights and apparently impossible to have such multiple interpretations of the computations of an advanced speech-to-text system is absolutely NOT a matter of obfuscation or difficulty, it reflects a qualitative change where the property of the computation itself has become intrinsically fixed. And when I refer to the "system", this must be taken to mean the system in its holistic entirety. It is absolutely irrelevant that you can play this game within individual low-level sub-components like logic gates or even small program modules, and then declare the entire thing to be therefore the product of a large number of arbitrary interpretations! As the complexity of a computing system grows, its qualitative attributes change in fundamental ways, and they can't necessarily be simplistically inferred from its smaller components. This critical principle is embodied in concepts like synergy and emergent properties. Incidentally, my interest in this matter is not abstract philosophical debate but support for CTM and its implication of multiple realizability and thus for the premise that most of (at this point I'll settle for "most of" rather than "all of") the functions of the human brain can be and will be implemented in digital computers. There are many theorists who not only claim that "all of" is appropriate, but that intelligent machines will exceed human capabilities in a general-purpose fashion. I see no reason to doubt them. |
#119
Quote:
Quote:
The reason that this is possible is that a (single-language) dictionary is just a set of relations; so 'dog' might get explained with terms like 'furry', 'four-legged', 'animal', and so on. So you merely need to map 'dog' to some other term, as well as the explanative terms, and the ones explaining those, and so on; this will keep the web of relations the same, while changing the referents of the individual words. You can explicitly write this down for simple enough languages---indeed, that's exactly what my example above amounts to. But moving to a more complicated language doesn't add anything essentially new; the web of relations gets more complex, a larger graph, but what the nodes stand for still isn't determined by the connections. |
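A minimal sketch of the dictionary point, using a made-up handful of entries (nothing here is from the thread's own example): renaming every word with a permutation carries the whole web of relations over intact, so the pattern of connections alone doesn't pin down what any node refers to.

Code:
# A toy 'dictionary' as a web of relations: each word maps to the set of
# words used to explain it. Hypothetical entries for illustration only.
toy = {
    "dog":    {"furry", "animal", "barks"},
    "cat":    {"furry", "animal"},
    "barks":  {"sound"},
    "furry":  {"animal"},
    "animal": set(),
    "sound":  set(),
}

# A permutation of the vocabulary: swap some words, leave others fixed.
perm = {"dog": "cat", "cat": "dog", "barks": "sound",
        "sound": "barks", "furry": "furry", "animal": "animal"}

relabelled = {perm[w]: {perm[v] for v in defs} for w, defs in toy.items()}

def edges(d):
    return {(w, v) for w, defs in d.items() for v in defs}

# The web of relations is carried over exactly (same graph, renamed nodes)...
assert edges(relabelled) == {(perm[w], perm[v]) for (w, v) in edges(toy)}
# ...yet the entry filed under "dog" now reads like the old entry for "cat".
print(sorted(toy["dog"]), sorted(relabelled["dog"]))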
#120
You appear to be coy about directly answering the challenge. Once again you've quoted a small part of what I said but not the meat of it.
To repeat and summarize, the challenge is that you claim that your trivially simple box example suffices as proof of the multiple-interpretations thesis, and that much more complex systems are "even worse" from my standpoint because they have, in effect, a very large number of such boxes, each of which is performing computations subject to equally arbitrary interpretations. I'm not saying that I'm right and you're wrong on what may ultimately be an article of faith, but I am saying that this particular argument is deeply flawed.

Again, you ignore the very important point that great increases in complexity result in qualitative (not just quantitative) changes in the properties of computational systems. We call these qualitative changes things like synergy and emergent properties. It isn't magic, though. What's missing from your analysis is any acknowledgment of the tremendous computational implications of the interconnections and data flows between these components, none of which is apparent from any observation of the components themselves, but is only visible when viewing the behavior of the system as a whole. It is here that we observe non-trivial constraints on the range of interpretations of what the computing system is actually doing, to the point that the set of possible interpretations may equal exactly one.

I know I certainly don't need to lecture you about the broader fallacies that arise from extending an understanding of trivial components to assumptions about much more complex computational systems, but I can't help but reflect on how this is what led many to proclaim that "computers can't really think" and "they can only do what they're programmed to do", and consequently led Hubert Dreyfus to claim that no computer would ever be able to play better than a child's level of chess. This claim was put to rest when the PDP-10 MacHack program beat him badly way back in 1967*, and we all know the evolution of chess programs to grandmaster status today. Damn, those programmers must be good chess players!

--------
* I had to look up the year because I'd forgotten. Which was when I discovered that Dreyfus had passed away in 2017. I guess I'll have to stop saying nasty things about him now. Pity that he'll never see the amazing things he claimed could never happen.
#121
Whereas you just flat ignore my arguments.
Quote:
Your position essentially amounts to an unfalsifiable hope. No matter what I say, no matter how many systems I show where it's patently obvious that their computational interpretation isn't fixed, you can always claim that just beyond that, stuff will start happening. But without any positive argument to that end, you simply haven't made any contentful claim at all.

For any example of emergence, you can always point to the microscopic properties grounding the emergent properties. The hydrogen bonds leading to water's fluidity. The rules any member of a flock follows to generate large-scale flocking behavior. You offer nothing of the sort; you point to emergence and complexity as essentially magically bringing about what you need. But there's no reason to believe it will, beyond faith.

On the other hand, I have given reason to believe that computation is not an objective notion---by example. It's also straightforward to show how making the system more complex leads to increasing the underdetermination: add a switch, and the number of possible interpretations will grow as the number of possible ways to associate inputs with outputs, with no mitigation in sight.

I acknowledge that emergence is a powerful notion. But it's not a one-size-fits-all magical problem solver. The properties at the bottom level determine those at the higher levels; anything else is just magical thinking. Any claim towards the emergence of mind must thus be bolstered with at least a candidate property that might give rise to the mental. Absent this, there simply is no challenge for me to meet, because you've just failed to make any contentful claim whatever.
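To put a rough number on the 'add a switch' point, here is a counting sketch in Python over hypothetical random device tables (not the thread's actual box): each switch and each lamp can independently be read with either polarity, and the count of distinct candidate computations typically grows with every switch added.

Code:
# For a fixed physical table with n switches and m lamps, each switch and
# each lamp can be read as '1 = up/on' or '1 = down/off' independently,
# giving 2**(n+m) readings. Count how many distinct functions result.
import random
from itertools import product

def random_table(n_switches, n_lamps, seed=0):
    rng = random.Random(seed)
    return {ins: tuple(rng.randint(0, 1) for _ in range(n_lamps))
            for ins in product((0, 1), repeat=n_switches)}

def read(table, in_flips, out_flips):
    """The function the table computes under one choice of polarities."""
    def value(ins):
        raw = table[tuple(i ^ f for i, f in zip(ins, in_flips))]
        return tuple(o ^ f for o, f in zip(raw, out_flips))
    return tuple(sorted((ins, value(ins)) for ins in table))

for n in (2, 3, 4):
    table = random_table(n, 2)
    readings = {read(table, i, o)
                for i in product((0, 1), repeat=n)
                for o in product((0, 1), repeat=2)}
    print(n, "switches:", len(readings), "distinct candidate computations")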
#122
Quote:
Now on this, I have pointed out its flaws, several times, and the second part of that quote is just flat-out wrong. You're describing the phenomenon generally referred to as weak emergence -- properties that may or may not be inferable from those of the constituent components. Now, while I may not agree with David Chalmers on various issues, at least he has his definitions right on strong emergence [PDF, emphasis mine]:

"We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain."

I would then point out the example of an electronic calculator that is built from logic gates. It performs calculations, but it's not hard to show that there is nothing about this device that is even remotely "intelligent" in any meaningful sense of the word. It does arithmetic. One could even posit creative multiple interpretations of its results that are other than arithmetic. It's a lot like your box with lights and switches.

But in a very true and fundamental sense, systems like Deep Blue and subsequent chess champions were built out of the same kinds of logical devices. So was Watson, the Jeopardy champion. And they are fundamentally and qualitatively different systems from any calculator. Would you like to posit an alternative interpretation of Watson's computational results? You can't, not because it's such a complex system that it's hard to do, but because it's qualitatively a different kind of system entirely.

Last edited by wolfpup; 05-19-2019 at 05:18 PM.
#123
Quote:
On the whole, the main idea behind computationalism and other physicalist ideas is essentially a rejection of such notions. So until you can point to any example of strong emergence (and no computer ever will yield one, since the breaking down of their large-scale 'intelligent' behavior into elementary logical operations is kind of their point, and their very computational nature entails the deducibility of this behavior from the lower level), the default position ought to be a strong skepticism. I tend to agree with this: Quote:
Last edited by Half Man Half Wit; 05-19-2019 at 05:39 PM. |
#124
From the paper of Chalmers you linked to: "Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete."
I tend to want to keep the option of physicalism alive. If you call yourself a computationalist, I would have expected that you do, too. So do you believe that Deep Blue falsifies physicalism?

Last edited by Half Man Half Wit; 05-19-2019 at 05:45 PM.
#125
Quote:
That state, however, might not be found in any of its discrete components. It might only be found in some vague network of interconnections between distant neurons, or the data paths between software modules, or new data structures that the system itself developed, any of which might have been dynamically established (the latter perhaps in a manner unknown and unpredicted by the designers). Actually, a very simple example with Watson was simply the result of its extensive training exercises. In a real sense, no one fully understood what the hell was going on in there in terms of the detailed evolution of its database as it was being trained, but the system was gradually getting smarter.

I note BTW that Chalmers also cites consciousness as the only known example (in his view) of strong emergence. I'll leave you guys to fight that out, since you objected to that idea so strongly!
#126
Quote:
Quote:
Quote:
So computationalism can never include strong emergence. That would mean believing both that a computer could simulate a brain, leading to conscious experience, and that a computer simulation of a brain would lack certain aspects of a real mind (the strongly emergent ones). I have no qualms with Chalmers; he puts forward a consistent position by acknowledging the rejection of physicalism his belief in strong emergence entails. I still think he's wrong, but there's a substantial question as to whether he is.

Last edited by Half Man Half Wit; 05-20-2019 at 12:27 AM.
#127
I was thinking about functions other than consciousness performed by the brain and I'm wondering how they fit in with this argument against computation being the basis of consciousness.
For example, consider circadian rhythm computations: relative to its environment (sensory input, other functional components of the brain), it serves a specific purpose. Despite the fact that your box example applies when this function is viewed in isolation, when viewed relative to its surrounding environment, the specific purpose becomes realized. Why is consciousness different from the circadian rhythm function? Can't we say that consciousness is just a particular transformation relative to its surrounding environment (the inputs into the consciousness function and the outputs from the function)?
#128
Quote:
Take "external agency". As far as I know I never said anything about an external agency. I'm saying that the cognitive calculation itself understands the symbols it uses, and that the way it processes those symbols is largely or wholly deterministic based on the state of the cognitive process itself at that moment in time. And that fact means that you can copy a cognition even if you don't have any idea how it works, simply by ensuring that your copy has the same cognitive calculation operating on comparable hardware with an identical copy of the 'symbols'. You may not have any idea what those symbols you just copied over mean, but the copy of the cognitive process knows what to do with them, because it's an identical copy of the original cognitive process that knows what to do with them. It's worth noting that it doesn't matter how the cognition works - so long as you accurately copy the cognition process and memory state, the copy will 'work'. You have to copy the whole cognition process and memory state, of course - if the person in question has a soul and you forget to copy the soul then your copy will fail to operate properly to the degree the soul was necessary for operation. But as long as you copy all the parts and memory correctly you're all good. You do seem to be very interested in talking about reverse-engineering the cognition and the difficulties in doing so, and while that's certainly an interesting topic, this thread is specifically about copying cognition. And you don't have to understand how something works to copy it, as long as you know enough to not fail to copy over something important. |
#129
Quote:
So in my box example, stimuli come in the form of switch flips, and responses are given in the form of lights either lighting up or not. No matter what computation I take the system to perform, this stimulus-response behavior remains the same; so the interaction with the environment is blind towards anything else. The same goes for things like the circadian rhythm. Nothing is, ultimately, being computed; it's merely the physical evolution in response to causal factors that triggers, say, time-sensitive hormone release. Picture one of these old wind-up kitchen timers: they don't compute the time to ring, the spring merely winds down after a certain time (response), provided it's been wound up (stimulus). |
#130
Quote:
But how do we know consciousness isn't just a set of transformations that happen to help the machine successfully respond to its environment, in the same way that the circadian rhythm function does? It may feel to us like it's something more, but maybe it's not.
#131
Quote:
Quote:
To that, you reply (if I understand you correctly) that so what, to the mind produced by the brain, what computation is being performed is perfectly definite, it's just that an outside observer can't tell which. But that's circular: in order for the mind to fix the computation like that, it would first have to be the case that the brain, indeed, gives rise to that mind; but for that, it must implement computation M. So before it can be the case that the 'cognitive calculation' itself understands the symbols it uses, it must be the case that the brain performs that 'cognitive calculation' (i. e., implements M). So your reply appeals to the brain implementing M in order to argue that it implements M. Quote:
I have no qualms with an exact physical replica of my brain being conscious in the same way as I am, if that's what you're saying.
#132
Quote:
#133
Some parting thoughts (maybe).
Quote:
This is roughly what happens when logic gates are assembled into a digital computer. The business of being able to "infer" from component properties what the properties of the aggregate system will be is really rather nebulous and arbitrary, and consequently so is the distinction between weak and strong emergence, IMO. One might readily infer that since logic gates switch signals according to logical rules, it's reasonable to expect that the resulting system would be an ace at doing binary arithmetic. But is it reasonable to infer on that same basis that those same logic gates would be the foundation for a system capable of playing grandmaster-level chess, long believed to be the exclusive domain of a high caliber of human intelligence? Or one that would beat Ken Jennings at Jeopardy? If so, Hubert Dreyfus and a whole following of like-minded skeptics sure as hell didn't infer it! Quote:
And the basis for your bizarre conclusion appears to be the belief that any computation requires an external agent to fix an interpretation -- a belief that I maintain has already been disproved by the simple fact that all interpretations that are consistent with the computational results are all exactly equivalent. The claim that the mind cannot be computational because of the "external agent" requirement is a futile attempt to parallel the homunculus argument as it's sometimes applied to the theory of vision. Clearly, however, vision is actually a thing, so somewhere along the line the attempt to prove a fallacy has gone off the rails. Likewise with your claim about the computational aspects of cognition. It's exactly the homunculus fallacy, and it's a fallacy because computational results are intrinsically objective -- that is to say, they are no more and no less intrinsically objective than the semantics attached to the symbols. Your first paragraph here is also frustrating to read. It is, at best, just one step removed from the old saw that "computers can only do what they're programmed to do", which is often used to argue that computers can never be "really" intelligent like we are. That's right, in a way: the reality is that computers can be a lot more intelligent than we are! The fact that in theory a sufficiently advanced intellect or another computer, given all the code and the data structures (the state information) in Watson, could in fact predict exactly what Watson would do in any situation is true, but it's also irrelevant as a counterargument to emergence because it's trivially true: all it says is that Watson is deterministic, and we already knew that. But here's the kicker: I would posit that exactly the same statement could be made in principle about the human brain. In any given situation and instant in time one could in theory predict exactly how someone will respond to a given stimulus. There's merely a practical difficulty in extracting and interpreting all the pertinent state information. Unless you don't believe that the brain is deterministic -- but that would be an appeal to magic. This is aside from issues of random factors affecting synaptic potential, and changes therein due to changes in biochemistry, and all the other baggage of meat-based logic. But those are just issues of our brains being randomly imperfect. That a defective computer may be unpredictable is neither a benefit nor an argument against computational determinism. A final point here, for the record, is that in retrospect the digression about strong emergence was a red herring. The kinds of things I was talking about are better described as synergy, which is synonymous with weak emergence, if one wants to bother with the distinction at all. The impressive thing about Watson is not particularly the integration of its various components -- speech recognition, query decomposition, hypothesis generation, etc. -- as these are all purpose-built components of a well-defined architecture. The impressive thing is how far removed the system is from the underlying technology: the machine instructions, and below that, the logic gates inside the processors. The massively parallel platform on which Watson runs is very far removed from a calculator, yet in principle it's built from exactly the same kinds of components. 
The principle here is, as I said earlier, that a sufficiently great increase in the quantitative nature of a system's complexity leads to fundamental qualitative changes in the nature of the system. Among other things, dumb, predictable systems can become impressive intelligent problem-solvers. This, in my view, is the kind of emergence that accounts for most of the wondrous properties of the brain, and not some fundamentally different, mysterious processes. Consciousness is very likely just another point on this continuum, but we've thrashed that one to death. Marvin Minsky used to say that consciousness is overrated -- that what we think of as our awesome power of self-awareness is mostly illusory. Indeed, we obviously have almost no idea at all of what actually goes on inside our own heads. IMHO Chalmers' physicalism arguments about it are just philosophical silliness. Where does consciousness reside? Nowhere. It's just a rather trivial consequence of our ability to reason about the world. Last edited by wolfpup; 05-20-2019 at 02:25 PM. |
#134
Quote:
#135
Quote:
Quote:
A given mind, a given cognition, knows what it means by a given symbol/encoding. Let's call that cognition M. M happens to function in such a way that red tennis shoes are encoded with a specific code which I'll refer to as RTS. Other cognitions might interpret the RTS code differently, perhaps interpreting it to mean 'real-time strategy game', but that's not a problem for cognition M - M knows that RTS means 'red tennis shoes'. Why does RTS mean red tennis shoes? Because it does. That's the symbol M uses for that. There's absolutely no part of this that I perceive to be a problem. If you are seeing a problem here, then I posit either you are talking about something different than this, or there's a problem with your logic. Quote:
Which means that, theoretically speaking, you absolutely can create an exact physical replica of your brain within the simulation within the computer. So you bet your bunions that your cognition is digitizable. |
#136
No. It's not the same thing. There is a continuity of brain wave activity. That is satisfactory for me. If you're saying that the transfer could happen during "sleep", I'd balk.
__________________
Go wherever you can be And live for the day It's only wear and tear -IQ |
#137
Quote:
Quote:
Quote:
This isn't something I've made up, you know. But I don't think heaping on more cites would help any, seeing how you've already not bothered to address the one I provided. Quote:
But that's rather something of a tangent in this discussion. The basic point is that, of course it's reasonable to think of chess playing as being just the same kind of thing as binary arithmetic. After all, that's what a computer program for playing chess is: a reduction of chess playing to performing binary logical operations. Really complicated ones, but again, that's the definition of a difference merely in quantity, not quality. Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Look, I get that the successes of modern computers look impressive. But for anything a computer does, there's a precise story of how this behavior derives from the lower level properties. I might not be able to tell the story of how Watson does what it does, but I know exactly what this story looks like---it looks exactly the same as for a pocket calculator, or my device above. Describing the functional components of a computer enables us to see exactly what computers are able to do. Turing did just that with his eponymous machines; ever since, we have known exactly how any computer does what it does. There's no mystery there.

That's the sort of story you'd have to tell to make your claim regarding the emergence of consciousness have any sort of plausibility. But instead, you're doing the exact opposite: you try to use complexity to hide, not to elucidate, how consciousness works. You basically say, we can't tell the full story, so we can't tell any story at all, so who knows, anything might happen really, even consciousness. It's anyone's guess! Quote:
Nevermind that even if there were some sort of qualitative difference, this would still not make any headway at all against my argument (that, I'll just point out once again for the fun of it, you still haven't actually engaged with)---at best, the resulting argument would be something like: the simple example system doesn't possess any fixed computational interpretation; however, qualitatively novel phenomena emerge once we just smoosh more of that together. So maybe some of these qualitatively novel phenomena are just gonna solve that problem in some way we don't know. That is, even if you were successful in arguing for qualitative novelty in large-scale computational systems, the resulting argument would at best be a fanciful hope. Quote:
Quote:
Quote:
Quote:
Quote:
All of this is just a massive case of confusing the map for the territory. What you're saying is exactly equivalent, for example, to saying that there are certain symbols such that they have only one objectively correct meaning. |
#138
Quote:
Quote:
1) positing that minds can't possibly work and are entirely fictional, which I believe can be dismissed as absurd based on observations, or
2) positing that we can't reverse engineer the operation of brains through external observation, which is irrelevant and off topic, because we don't have to know how they work to copy them if we copy them at the physical level to the smallest excruciating detail.

Because of your persistent use of undefined technical terms I'm not quite sure which of these you are doing, but either way I don't care: you can't prove brains aren't copyable. Quote:
From the perspective of the copy, the duplication is exact, down to the smallest detail. Every neuron and chemical and electron is in place, acting exactly like their equivalents in the physical world. It's essentially the 'prosthetic neuron replacement' scenario from earlier in the thread - the prosthetic neurons (and everything else) are simulated entities, but they replicate the functionality of the things they replace perfectly. Simulations seek to replicate the behavior and outcomes of what they simulate. The more accurate the simulation, the more accurate the outcomes are to the real thing. Here we theoretically posit a perfect simulation of the physical reality that the brain (and the body that houses it) exist in. Basically the Matrix, except the brain cells are inside the simulation too. And presuming a materialist real universe, there is no coherent and informed argument that the simulation couldn't be accurate to the finest detail - including the behavior of the simulated people, driven by their simulated brains and the minds contained within them. |
#139
HMHW, I'm curious which definition of computation (or computationalism if it's a different animal) you are assuming?
While reading up on these topics, it seems there are multiple definitions, this is one thing I was reading: http://www.umsl.edu/~piccininig/Comp...hy_of_Mind.pdf |
#140
Quote:
Hence, what computation a physical system performs---whether it performs any computation at all---isn't an objective matter of fact about the physical system. You can't say the device I proposed performs binary addition; you can only say that you can interpret it as such. But then, a dismissal of the computational theory of mind follows immediately. Computationalism holds that brains give rise to minds via performing the right sort of computation. But if it's the case that brains don't perform any computation at all, unless they are interpreted in the right way, then that whole thing collapses. So whether or not a computational system assigns symbols to concepts, or what have you, is entirely beside the point. The point is that there's no such thing as a computational system absent an interpretation of a physical system as a computational system. Quote:
The rest of your post unfortunately doesn't really have any connection to anything I've written so far, so I won't reply to it for now, for fear of introducing yet more confusion in the attempt of explaining myself. Really, I implore you, if any of this is still unclear, go back to my example. If you don't understand something, ask. But don't just go on attacking points I never made. Quote:
Quote:
Last edited by Half Man Half Wit; 05-21-2019 at 12:38 AM. |
#141
This is crazy.
Quote:
Maybe, somewhere out there in an infinite universe, there is an exact replica of my laptop in which (purely by chance) it is f' that causally affects the laptop screen in order to display exactly the same symbols - but this freak laptop, if it exists, is so far away that it is way beyond my personal light cone, probably more than a googolplex metres away. What possible relevance does the computation f' have to anything in the real world? Last edited by eburacum45; 05-21-2019 at 09:38 AM. |
#142
Quote:
This means that the fact that you can interpret its data and outputs sixteen thousand different ways is utterly irrelevant. It doesn't matter at all. It's completely inconsequential. It has no bearing on the discussion whatsoever. Why? Because it doesn't matter how you interpret the data; it matters how the data interprets the data. And the way the data interprets the data is determined by the arrangement of the data - and at any given moment there's only one arrangement of the data. Which means there's only one interpretation the mind is going to use, and that's the only one that matters.

Now you'll note that in the paragraph above I'm brazenly lumping both the stored data and the 'running program state' under the umbrella term 'data'. This is because as far as the copying process is concerned, it *is* all data, and can be copied and exactly reproduced in a simulation. And when this happens the simulated mind will have the exact same interpretations of its own data as the original did - it will perceive itself the same way the original does, and react the same way the original does. It copies all the traits and behaviors and processes and interpretations of the original because it's an exact copy.

Does the copy (or the original) do "computation"? The fuck if I know; I don't know what you mean by the term. What I do know, though, is that if one does it, so does the other, and vice versa. The two function identically. Including using the same identical operating processes and self-interpretation.
#143
I've been away from the board for the past day due to events of actual life, but let me respond briefly to that last volley.
Quote:
I liked the Chalmers definition for directly contradicting your claim that an emergent property must have visible elements in the underlying components, a claim that I regarded as nonsense. Reading further in the Chalmers paper, however, I don't agree with him on ALL his characterizations of emergent properties, particularly that what he calls "strong emergence" must be at odds with physicality. So my two statements in context are in no way contradictory, but you get three points and a cookie for highlighting them that way. Quote:
Quote:
Quote:
The past thirty years have witnessed the rapid emergence and swift ascendency of a truly novel paradigm for understanding the mind. The paradigm is that of machine computation, and its influence upon the study of mind has already been both deep and far-reaching. A significant number of philosophers, psychologists, linguists, neuroscientists, and other professionals engaged in the study of cognition now proceed upon the assumption that cognitive processes are in some sense computational processes; and those philosophers, psychologists, and other researchers who do not proceed upon this assumption nonetheless acknowledge that computational theories are now in the mainstream of their disciplines. Quote:
Quote:
Quote:
Quote:
#144
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Again, I'm not the only one thinking that. 'You could do it by computer' is often used as the definition for weak emergence that doesn't introduce anything novel whatsoever, because it's just so blindingly obvious how the large-scale phenomena follow from the lower-level ones in the computer's case. That doesn't mean computers can't surprise you. Even though the story of how they do what they do is conceptually simple, it can be a bit lengthy, not to mention boring, to actually follow. But surprise is no criterion for qualitative novelty. I have been surprised by my milk boiling over, but that doesn't mean that a qualitatively new feature of the milk emerged. Quote:
You, on the other hand, have provided no such basis for your claim that consciousness emerges in the same way. Indeed, you claim that no basis such as that can be given, because emergence basically magically introduces genuine novelty. That you give computers as an example of that, where it's exactly the case that the emergent properties have 'visible elements in the underlying components', is at the very least ironic. |
#145
Quote:
1) You have something going on in your head. Nobody knows how it works.
2) "Computation" (whatever that means) is necessary for you to have a mind. If what's going on in there isn't "computation", then it doesn't instantiate a mind and you don't have a mind.
3) Not only does there have to be "computation", but it has to be "definite". Having a materialist causal process that definitely only has one eigenstate is not sufficient to qualify as "definite" - apparently it also needs to be possible to unambiguously reverse engineer the internal mechanisms from the outputs alone.
4) Your argument is that the process going on inside your head is in fact not "definite", and thus it's not a qualifying sort of "computation", and thus you haven't got a mind. QED and such.

Is that a fair restatement of your position?

As a side note, I agree that the mental calculation isn't "definite", and I think it could be proven that no calculations whatsoever are "definite". For every black box you might examine, the function could be either "f" or "f, but it also quietly records its output to an internal log that is never outputted or referred to." You cannot ever prove that this is not happening inside the black box, so no calculation, process, or anything else is "definite".
#146
I'm not. I'm making the argument I've repeated here more often than I care to count, and won't repeat again. If you don't follow it at some point, I'm happy to help.
#147
Quote:
The problem with your argument, in case you weren't noticing my subtle reductio ad absurdum, is that to whatever degree your argument applies to theoretical machine intelligences, it also applies equally to human brains. I specifically mentioned your human brain in case you're a solipsist, but the hard truth is that you are, in effect, arguing that no minds are possible anywhere, ever. You are seriously throwing out the baby with the bathwater here.
#148
Quote:
Quote:
Quote:
Note that in defining the Turing machine, Turing himself was untroubled by any notion of an external interpreter. Indeed he explicitly made the distinction between this type of machine exhibiting the determinacy condition, which he called an automatic machine or "a-machine", and the choice machine in which an external agent specified the next state. But your box is an a-machine, whose computations involve no such external agent. Quote:
Of course "surprise" by itself isn't a criterion for much of anything, but surprise in the sense that properties that we explicitly denied could emerge from certain processes, like high intelligence or strong problem-solving skills, if and when they actually emerge does mean that we have to re-evaluate our beliefs and assumptions. It also means that those properties were not observed in the underlying processes, or at least were in no way obvious. Quote:
#149
Quote:
I have seen earthworms exhibit strong indications of self-awareness, which leads me to conclude that this thing I have that is the “I” inside is probably closely related to the fundamental survival instinct of living things. It may be structurally different among the variety of living things, but its evolutionary contribution should be more than obvious. An elaborate computer may be able to produce all of the apparent indications, but until we have a firm grasp on what that nebulous concept means, we cannot be absolutely certain that it genuinely possesses a property that we know to be self-awareness. In fact, based on what I have observed with respect to other creatures, it is not at all obvious that self-awareness is an emergent property of intelligence. More complex programming or more elaborate system design may make it seem convincing that a device is self-aware, but it may just be an astoundingly good simulation. Hell, my HP33C might have had some rudimentary form of self-awareness that was not very similar to mine or to the earthworm's but nonetheless present. Perhaps we ought to be more circumspect when throwing away that old cell phone because it could have had a soul, of sorts. |
#150
Quote:
The reason we throw away phones isn't because they're not self-aware - it's because we don't care whether or not they are because they're not human. It's the same reason we're okay with eating beef. Last edited by begbert2; 05-22-2019 at 01:16 PM. Reason: I make so many typos I probably have no mind |