#151
Quote:
Quote:
If the former, you don't get the sort of emergence you claim, because then, the emergent facts follow from the base facts---and for every emergent fact, you can state the precise way it follows from the base facts. Thus, that's the story you need to at least provide some plausibility argument for in order to have your claim of the emergence of consciousness be contentful. If the latter, then the base facts don't determine the emergent facts. Then, of course, consciousness just might come 'round at some point. Somehow, for basically no reason. But also, it's no longer the case that fixing the physical facts ('particles, fields, and their arrangement') fixes all the facts, and physicalism is wrong. It's not a matter of believing Chalmers or not. He's not stating this out of the blue; he's just clearly articulating what the options are. There's no middle ground. You either eat the cake, or keep it. Quote:
And really, you shouldn't start accusing others of avoiding things. Quote:
Quote:
Quote:
Quote:
Nobody can be in any doubt about that. There's no mystery about how chess-playing emerges in computers, and that's precisely because we understand how the lower-level properties lead to the chess-playing behavior. Getting a computer to play chess means we understand how that sort of behavior comes about, whereas even if a computer were conscious, we still wouldn't have the faintest clue about how consciousness comes about. We have no idea how consciousness reduces to some Boolean formula, some program, some algorithm. Blithely positing that 'it emerges' simply sweeps our ignorance under the rug, when the only thing that has any hope of getting us ahead on the problem is meeting it head-on. That's where we need to re-evaluate our beliefs and assumptions. |
#152
Then you're not speaking English.
And seriously, the fact that you have repeatedly refrained from responding to the point that your argument disproves your own mind seems telling to me - specifically, it tells me that you have no refutation for the fact that your argument is disproven, but you're carrying on anyway because you'd rather keep arguing, because arguing is fun. Last edited by begbert2; 05-22-2019 at 02:04 PM. Reason: dammit what is with the typos today? I'm going to lunch. |
#153
Quote:
It is more analogous to involuntary bodily functions like the beating heart, filtering liver and rippling GI tract that keep the biological creature operative. Self-awareness seems to be some sort of instinct-based adjunct to the simple or complex neurological function of the beast, and it is not even clear whether it enhances that functionality in any way other than to supply impetus to continue. |
#154
Quote:
So fine, one last time:
There. I hope that makes things clear. It should, in particular, be obvious now that nothing about the impossibility of minds in general is implied; merely, that minds, which clearly do exist, are not computational in nature, that is, are not produced by the brain implementing a certain computation. Now, I believe you won't give up on the 'but CM just interprets itself'-issue so easily. In that case, I have good news for you, because you can prove the existence of god! For, god is an omnipotent being. Omnipotence entails the ability to create gods. Consequently, god may just create him/hir/herself. And that's it! Logically, this is perfectly analogous to CM interpreting B as computing CM. Last edited by Half Man Half Wit; 05-22-2019 at 02:31 PM. |
#155
It doesn't appear possible with the technology we currently have. We have really kick-ass computers, but they are not self-aware. So, there is no reason to assume that your "consciousness" would survive death. Even if we had the technology necessary to convert RNA based memory into 1's and 0's, it would just be data. It wouldn't be a person.
"Ghost in a Shell" had science approaching it in a different way. They perfected something they called a "Cerebral Salvage", which was implanting a human brain in a completely artificial body and integrating them so that they worked together just like our brains and organic bodies work together. |
#156
What the hell is “RNA based memory”?
#157
Quote:
It really comes down to what is actually meant by self-awareness - I generally think of it as meaning that the entity in question is continuously aware of its ongoing calculative processes, and has the capability to interpret and make decisions based on them. The sticking point, of course, is the "aware" part; we humans have an 'in the driver's seat' perspective on the world that we're hard-pressed to explain or even describe. I find myself wondering whether even a simple computer program that assesses inputs, outputs, and its own data storage might, from its own perspective, have a 'driver's seat view'. I mean, we do program them to - they have a main execution loop that is constantly running back and forth through its event handlers and dealing with inputs as they come. The only thing lacking is that the main loop doesn't have access to the data handled by the event handlers - but at what level would the 'driver's seat' manifest? One could theorize that the self-contained execution loop gives rise to the 'driver's seat view' but that the 'view' includes the data handlers as well. This is, of course, all frivolous theorizing, but one thing that stands out to me is that I've never heard any other explanation for where or how that 'driver's seat view' originates. I mean sure, there's bullshit about souls, but that just kicks the can down the street - how do souls do it, mechanically speaking? Whether material or spiritual, the process is being carried out somehow. And now, on to attempting to see if HMHW's explanation makes some sense to poor old simple me, and to see if it accomplishes the literally impossible and draws a distinction between the function of a physical brain and its functionally exact copy. |
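For concreteness, here is a minimal Python sketch of the kind of main-loop-plus-event-handlers structure described above. The names and structure are illustrative assumptions only, not any particular real program:

```python
import queue

events = queue.Queue()          # incoming inputs from "the world"
state = {"memory": []}          # data that only the handlers read and write

def handle_input(payload):
    # an event handler that updates the program's stored data
    state["memory"].append(payload)

def handle_query(payload):
    # another handler that makes a "decision" based on stored data
    print("seen before?", payload in state["memory"])

handlers = {"input": handle_input, "query": handle_query}

def main_loop():
    # the 'main execution loop': it only pulls events and dispatches them,
    # without itself inspecting the data the handlers work with
    while True:
        kind, payload = events.get()
        if kind == "quit":
            break
        handlers[kind](payload)

events.put(("input", "saw a threat"))
events.put(("query", "saw a threat"))
events.put(("quit", None))
main_loop()
```

The loop itself just dispatches; whether anything like a 'driver's seat view' could attach to it is exactly the open question being discussed.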
#158
Quote:
I think that both you and wolfpup should take a look at Integrated Information Theory, which posits that consciousness is either due to or constituted by the information distributed among the parts of a system in a certain way. In a simplified sense, integrated information can be thought of as the amount of information the whole of a system has over and above that which is locked in its parts. IIT then postulates that this is what consciousness either correlates with or comes down to. Now, I think in doing so, IIT is massively begging the question. But that doesn't matter right now. What's important is that IIT gives a scientifically respectable reductive account of consciousness---it shows us exactly how (what it purports to amount to) consciousness emerges from the simpler parts, and gives us objective criteria for its presence. So, IIT provides the sort of story that's needed to make wolfpup's emergence reasonable. Furthermore, on IIT, consciousness isn't a functional property of the brain. It's a property relating to the correlations between the brain's parts, or regions. So functionalism isn't really the sort of obvious truth begbert2 seems to take it to be, either. More interesting, perhaps, is that on IIT, computationalism comes out straightforwardly false. While a brain has a high degree of integrated information, the typical computer, due to its modular architecture, has very little to none of it. So a computer implementing the simulation of a brain won't lead to conscious experience, even if the brain it simulates does. This suffices to falsify many of the seemingly 'obvious' claims in this thread and shows that other options are possible. |
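To make the 'whole over and above the parts' intuition concrete, here is a toy Python calculation. To be clear, this is not Tononi's phi (which is far more involved); it only computes a simple total-correlation proxy, 'information in the whole minus information locked in the parts', for an assumed two-unit system:

```python
from math import log2

def entropy(dist):
    # Shannon entropy of a distribution given as {outcome: probability}
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# assumed joint distribution over two binary units, A and B
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# marginal distributions of the two parts considered separately
marg_a = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
marg_b = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}

whole = entropy(joint)                      # information in the whole
parts = entropy(marg_a) + entropy(marg_b)   # information in the parts
print("toy 'integrated' surplus:", parts - whole)  # > 0 when the units are correlated
```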
#159
Quote:
Quote:
Or to put it in terms of computational processes, if you have two physically identical machines, one of which is running Quicksort and one of which is running Stoogesort, then the only thing that is making the Quicksort one the Quicksort one is the fact that it's running Quicksort. It is distinct from the Stoogesort one (among other things it's a lot faster), but the difference in what it is is indeed a result of it being what it is. None of this is self-contradictory. And honestly, even if you were right about computationalism being impossible (which you're not), that doesn't mean that brains can't be copied to computers - it would just mean that it's equally impossible for existing computers to be computational! Which would mean that existing computers are still exactly as capable as brains are of supporting minds, because none of them are the thing that's impossible. Which makes sense, because existing computers aren't impossible and thus if you prove that something is impossible to do, they're not doing it. Quote:
Quote:
None of this is analogous to nothing creating something from nothing, none of it is circular, and none of it is self-contradictory. |
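For illustration, here is a quick Python sketch of the Quicksort/Stoogesort point: two procedures with identical input-to-output behavior but very different internal processes, so what distinguishes the two machines is simply which procedure is actually running (the data and code are illustrative assumptions):

```python
# Two sorting procedures with identical input -> output behaviour but very
# different internals (quicksort vs. the deliberately awful stooge sort).

def quicksort(xs):
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return quicksort([x for x in rest if x < pivot]) + [pivot] + \
           quicksort([x for x in rest if x >= pivot])

def stoogesort(xs):
    xs = list(xs)
    def sort(i, j):
        if xs[j] < xs[i]:
            xs[i], xs[j] = xs[j], xs[i]
        if j - i + 1 > 2:
            t = (j - i + 1) // 3
            sort(i, j - t)      # sort the first two thirds
            sort(i + t, j)      # sort the last two thirds
            sort(i, j - t)      # sort the first two thirds again
    if xs:
        sort(0, len(xs) - 1)
    return xs

data = [5, 2, 9, 1, 5, 6]
# same mapping from inputs to outputs, despite the wildly different processes
assert quicksort(data) == stoogesort(data) == sorted(data)
```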
#160
It's referring to various studies that show you can take RNA from an animal that has learned a task, inject it into a different animal (same species) and successfully test for the trained task in the second animal.
There are other studies that don't specifically say anything about RNA, but have shown the ability of isolated individual Purkinje cells to learn temporal sequences, which means our biology uses multiple methods for learning. |
#161
Quote:
Don't try to impute to me any particular cognitive model. All my position requires are the following three facts:
1) Brains produce minds in the real world.
2) Brains are entirely physical (and thus, mechanical) in their operation*.
3) Computers with sufficient memory and processing ability can accurately simulate physical objects and their mechanical behaviors.
That's it. That's all I need to prove that it's theoretically possible to emulate minds - you may have to emulate everything in the world with even a tangential physical effect on the brain (which could include the entire universe!), but that's still theoretically possible, and thus you provably can theoretically emulate minds on a computer. Proven, that is, unless you can disprove one of the three premises. And note that I don't care how the brains create minds. It doesn't matter. I don't care. It's utterly irrelevant. All that matters to me is that brains produce minds at all.
* Actually I don't even require the minds to exist entirely in the physical realm - they just have to exist in some realm that has reasonably consistent mechanistic rules. Which isn't really much to ask, because nothing can possibly work or even consistently exist in a realm without reasonably consistent rules. |
#162
Quote:
#163
Quote:
Quote:
Obviously, P1 cannot interpret P1 as implementing C1, as to do so, it would have to already be implementing C1 to do the interpreting. Quote:
Quote:
Last edited by Half Man Half Wit; 05-22-2019 at 04:43 PM. |
#164
Quote:
#165
Quote:
Does your argument rely on something impossible? Seriously, when we talk about calculations being a 'black box' where you can't tell what's going on inside it, it doesn't mean that the internal calculations are experiencing multiple different program states simultaneously like Schrödinger's cat. That's bizarre. Quote:
Quote:
Yeah, kind of not feeling that. |
#166
That's less a disproof of my argument and more a disproof of IIT - or your interpretation of IIT, as the case may be.
#167
Quote:
Quote:
Quote:
#168
When the word "mind" or "cognition" is used, is it assumed that consciousness is always included within those terms, or is "mind" and "cognition" ever assumed to be just the processing less conscious experience?
#169
Quote:
Or you could note that the thread(s) of execution do run through all those parts each time, constantly, moving in and out of all the layers and back again. It's really a question of how (and where) the 'seat of consciousness' manifests, and how it 'feels' from the inside - and whether the actual implementation being partitioned interferes with that at all. (Assuming it manifests in the first place, of course.) Quote:
Survival is just a goal. A goal that is conducive to there being future generations, yes, but the thing that has the goal is what I'm curious about. The thing that has the awareness of the situation to see threats and avoid them. How does it work? Between "magic" and "program execution loop", I'm thinking "program execution loop". Quote:
Once you've got your Matrix for the minds to live in, the minds ought to be able to get all the inputs they're accustomed to just fine, presuming you designed the Matrix properly and with sufficient detail. Though of course you'd eventually want to tweak a few things - the whole goal here was to let the minds live forever after all. So once you have things all properly uploaded into a simulation you'll probably want to tweak a few things rather than accurately simulating the ravages of time and all. This would of course cause the minds in the simulation to diverge in thought and action from the originals, but really, wasn't that the point all along? |
#170
I can't speak for anyone else, but I believe all the terms refer to the "seat of consciousness" - the "I" in "I think therefore I am". When we look out at the world through our eyes, the mind/cognition/consciousness is the thing doing the looking. |
#171
Quote:
I never liked the whole zombie argument before, but I think I see what that argument is getting at. |
#172
I'm of the personal opinion that it's incoherent to think that full human reaction and interaction can be achieved by a so-called "zombie" - the mere act of observing the world, interpreting the world, and reacting to the world in an ongoing and consistent way requires that the entity be aware of its environs, itself, and its opinions and memories, in the same way that a car needs some sort of engine to run. (Could be a hemi, could be a hamster wheel, but it's gotta be something.)
#173
Quote:
But what I was picturing in my mind was that the operational attributes of the brain (e.g. modeling the environment, making decisions towards a goal, etc.) can be independent of the conscious experience, if the conscious experience is just a layer above that maybe only influences deciding on the goal (for example). Responding to your post: There are examples where people seem to operate correctly in their environment but don't have awareness (e.g. sleep walking). |
#174
Quote:
Quote:
Brains be complicated, yo. |
#175
Quote:
Even though it does seem like we can react to our environment and make choices, it sure seems like that choice process is much more of a formula highly constrained by the machinery (due to nature+nurture), as opposed to an open-ended, consciousness-based selection process. The only reason I would make a different decision (than the ones I typically make) is if there were an explicit input identifying the fact that a non-standard decision is being targeted/tested, and therefore I should choose that path to prove it can be done (based on my internal motivation that would push me towards even evaluating that condition). |
#176
Quote:
#177
I'm not passing a value judgement on it, I'm just saying that my analysis is leading me to believe that the patterns that drive us seem like they are stronger and further below the conscious level than I previously assumed.
#178
Quote:
Though, to stay on-topic, complicated doesn't mean uncopyable. It just means that when we replicate all the neuron-states and chemical soups, we might not know what they're doing even as we move them over. One fun (though debatably ethical) thing we could do once we were emulating the brains would be to sic an AI on them and have it make multiple copies of the emulated brains and selectively tweak/remove physical elements and compare ongoing behavior afterward. We could find out which aspects of our brain's physicality are necessary for proper mind function real quick. (Hmm, removing all the blood had an adverse effect. Guess we needed that. Next test!) Given enough whittling we might be able to emulate a mind without emulating the whole brain - just the parts and processes that actually matter. (With the brain matter's physical weight or the skull enclosing it being possible superfluous factors that could be removed, for example.) In this way we might be able to emulate minds more efficiently than fully emulating all the physical matter in the vicinity. |
#179
Quote:
It would be fair to ask, of course, what the developers saw in it, and why they built it that way. The answer is that they saw only a general-purpose learning mechanism, not something that bore any of the specific primordial traits of what they hoped to achieve. Just the same way as they built a massively parallel general-purpose computer system to run it on. What actually came together was something qualitatively new, and something that many had believed was at least another decade away. Quote:
But if minds then have the capacity to interpret things (as they seem to), they have a capacity that can't be realized via computation, and thus are, on the whole, not computational entities.
Ignoring the incorrect assertions about Putnam that I dispelled earlier, waffling over "category errors" is disingenuous and meaningless here. The position of CTM isn't that computational theories help us understand the mind in some vague abstract sense; the position of CTM is that the brain performs computations, period, full stop -- as in the basic premise that intentional cognitive processes are literally syntactic operations on symbols. This is unambiguously clear, and you unambiguously rejected it. The cite I quoted in #143 says very explicitly that "the paradigm of machine computation" became, over a thirty-year period, a "deep and far-reaching" theory in cognitive science, supporting Fodor's statement that it's hard to imagine any kind of meaningful cognitive science without it, and that denial of this fact -- such as what you appear to be doing -- is not worth a serious discussion. Quote:
Quote:
Quote:
#180
Quote:
I suspect self-awareness/the survival instinct are analogous to something like hair: it is there, many of us are rather fond of it, it has its uses, but it does not actually do anything. It is an interesting feature. But a feature of what? Hair is a simple thing that is a result of follicular activity. Self-awareness seems to be a rather complex feature that probably arises from disparate sources (some most likely chemical), and may not be localized (just like hair). The point is, it is not evident that it actually does anything. Kind of like data tables, which, in and of themselves, are not active (in the way that program code is active), but our mental processes take note of them and adjust their results to account for them. So, would self-awareness be a valuable feature for intelligent machines? Perhaps. Then again, maybe not. If we just want them to do what we need them to do, strict functionality might be the preferable design strategy. Unless uncovering the underlying nature of self-awareness is the research goal, in which case, they are probably best confined to a laboratory setting. |
#181
Will it be held against me if I don't have time right this moment to read the whole thread but want to ask a question that might have been answered already? Here goes: define exactly what you mean by downloading my consciousness, or I simply have no way to reply.
Thanks. |
#182
Quote:
Quote:
Quote:
Regardless, this is not my interpretation of IIT, but one of its core points. Take it from Christof Koch: Quote:
Quote:
You can exactly prove what sort of tasks a neural network is capable of learning (arbitrary functions, basically), you can, at every instant, tell exactly what happens at the fundamental level in order for that learning to happen, and you can tell exactly how it performs at a given task solely by considering its low-level functioning. This is an exact counter-example to the claims you're making. For a good explanation of the process, you might be interested in the 'Deep Learning'-series on 3Blue1Brown. The people who built AlphaGo didn't just do it for fun, to see how it would do. They knew exactly what was going to happen---qualitatively, although I'd guess they weren't exactly sure how good it was going to get---and the fact that they could have this confidence stands in direct opposition to your claims. Nobody just builds a machine to see what's going to happen; they build one precisely because their understanding of the components allows them to say so with pretty good confidence. Sure, there's no absolute guarantee---but as you said, surprise isn't a sufficient criterion for emergence. Sometimes a bridge breaks down to the surprise of everybody; that doesn't entail that bridges have any novel qualitative features over and above those of bricks, cement, and steel. Quote:
You've missed it at least two times now, and you'll probably miss it a third time, but again: an orrery is a helpful instrument to model the solar system, and one might get it into one's head that the solar system itself must be some giant world-machine run on springs and gears; but the usefulness of the orrery is completely independent of the falsity of that assertion. Quote:
Quote:
Quote:
Which is a separate issue from the fact that such emergence, obviously, doesn't occur in computers, which are the poster children of reducibility. I'm still waiting for you to tell me what my example system computes, by the way. I mean, this is usually a question with a simple answer, or so I'm told: a calculator computes arithmetical functions; a chess computer chess moves. So why's it so hard in this case? Because, of course, using the same standard you use in everyday computations will force you to admit that it's just as right to say that the device computes f as that it computes f'. And there, to any reasonable degree, the story ends. |
#183
Quote:
Or like any animal. Last edited by Voyager; 05-23-2019 at 01:06 AM. |
#184
I'd figure that an interrupt system is more likely. If you touch a hot stove, I don't think your brain is polling your nerve endings. They interrupt your thoughts - while also causing involuntary actions, just as interrupts can do.
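As a rough illustration of the polling-versus-interrupt distinction, here is a Python sketch. The 'interrupt' here is just a registered callback, and the sensor and handlers are made-up names; real nervous systems and real CPUs work differently, this only contrasts the two control-flow patterns:

```python
import random

def read_nerve():
    # stand-in for a sensor: occasionally reports "HOT"
    return "HOT" if random.random() < 0.1 else "ok"

def polling_loop(steps=50):
    # polling: the main process has to keep asking the sensor itself
    for _ in range(steps):
        if read_nerve() == "HOT":
            print("polled: withdraw hand")
            return

handlers = []

def on_pain(signal):
    # interrupt-style: a registered handler fires when the event arrives,
    # preempting whatever the main line of "thought" was doing
    print("interrupt: withdraw hand (involuntary), signal =", signal)

handlers.append(on_pain)

def deliver_signal(signal):
    for h in handlers:
        h(signal)

polling_loop()
deliver_signal("HOT")
```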
#185
Quote:
#186
Quote:
So in that sense, the integrated information is a physical quantity that's present in a system, but not, in general, in a system that's simulating that system---just like the mass of a black hole isn't present in a simulation of said black hole (which I gather is rather a good thing). |
#187
In an effort to keep it brief, I'm omitting those points where I would simply be repeating myself.
Quote:
This can ONLY be interpreted as "no cognitive processes at all can be computational", since ANY such computation would, according to your claim, require an external interpretive agent. If true, that would invalidate CTM in its entirety. Could you possibly have meant that? Why, yes, you totally could: "The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts, it's just that the rest of the world is a bit slower to catch up with the second one." Only when challenged on it are you now offering creative re-interpretations. But perhaps you'd like to take on the creative challenge of re-interpreting what you meant by CTM having been "dismantled". Quote:
Quote:
My answer is that it's one that transforms 0110011 into 0100010, which is objectively a computation by definition, since it is, after all, a Turing machine exhibiting the determinacy condition -- even if I don't know what the algorithm is. Your answer would appear to be that it's not a computation at all until it's been subjectively understood by you and assigned a name. I think Turing would disagree. I think the core of the problem here is that you're confusing "computation" with "algorithm". But as Turing so astutely showed, the question of what a "computation" is, in the most fundamental sense, is quite a different question from asking what class of problem is being solved by the computation. |
#188
Quote:
Quote:
That in no way implies that no process that goes on in the brain is computational. I've been careful (to no avail, it seems) to point out that my argument threatens solely the interpretational abilities of minds: they can't be computational. Using these interpretational powers, it becomes possible to assign definite computations to systems---after all, I use the device from my example to compute, say, sums. Furthermore, even systems that aren't computational themselves may be amenable to computational modeling---just as well as systems that aren't made of springs and gears may be modeled by systems that are, like an orrery, but I suspect where these words are, you just see a blank space. Quote:
Quote:
Also, I have been clear that my arguments don't invalidate the utility of computational modeling: Quote:
Quote:
Quote:
But of course, I know what you mean to argue. So let's specify a computation in full: say, the Turing machine has an input set consisting of all seven-bit strings, and, to provide an output, traverses them right to left, replacing each block '11' it encounters with '10'. Thus, it produces '0100010' from '0110011', or '1000000' from '1111111', or '0001000' from '0001100'. This is indeed a fully formally specified, completely determinate computation. You'll note it's exactly the same kind of thing as my functions f and f'. So why does a Turing machine execute a definite computation? Simple: a Turing machine is a formally specified, abstract object; its vehicles are themselves abstract objects, like '1' and '0' (the binary digits themselves, rather than the numerals). But that's no longer true for a physical system. A physical system doesn't manipulate '1' and '0', it manipulates physical properties (say, voltage levels) that we take to stand for or represent '1' or '0'. It's here that the ambiguity comes in. If you were to build the Turing machine from your example, then all it could do would be to write 1 or 0 (now, the numerals, not the binary digits) onto its tape. Anybody familiar with the Arabic numerals could then grasp that these ink-blots-on-paper are supposed to mean '1' and '0' (the binary digits, again). But, somebody grown up on Twin Earth that's identical to ours except for the fact that 1 means '0' and 0 means '1' would, with equal claim to being correct, take the Turing machine to implement a wholly different computation; namely, one where every '00' string is replaced by a '01'-string. That's why I keep asking (and also, why you keep neglecting to answer): what computation is implemented by my example device? You're backed into a corner where you'd either have to answer that it's a computation taking switch-states to lamp-states, in which case the notion of computation collapses to that of physical evolution, or agree with me that it can be equally well taken to implement f and f'. Although I note that you seem to have shifted your stance here somewhat---or perhaps, it hasn't been entirely clear from the beginning: you've both argued that the two computations are the same (which amounts to accepting they're both valid descriptions of the system, just equivalent ones, which starkly conflicts with you singling out a function of the same class as individuated computation in this post), and that multiple interpretations become, for reasons vaguely tied to 'emergence', less likely with increasing complexity. So which is it, now? Perhaps for one last time, let me try and make my main point clear in a different way. Symbols don't intimate their meanings on their own. Just being given a symbol, or even a set of symbols with their relations, doesn't suffice to figure out what they mean. This is what the Chinese Room actually establishes (it fails to establish that the mind isn't computational): given a set of symbols (in Chinese), and rules for manipulating these, it's in principle possible to hold a competent conversation; but it's not possible to get at what the symbols mean, in any way, shape, or form. Why is that the case? Because there's not just one thing they could mean. That must be the case; otherwise, we could just search through meanings until we find the right one. But it just isn't the case that symbols and rules to manipulate them, and relationships they stand in, determine what the symbols mean.
But it's in what their physical properties mean that physical systems connect to abstract computation. Nothing else can be right; computations aren't physical objects, and the only relation between the physical and the abstract is one of reference. So just the way you can't learn Chinese from manipulating Chinese letters according to rules, you can't fix what computation a system performs by merely having it manipulate physical properties, or objects, according to rules. An interpretation of what those properties or objects mean, what abstracta they refer to, is indispensable. But this reference will always introduce ambiguity. And hence, there is no unambiguous, objective computation associated with a physical system absent it being interpreted as implementing said computation. Last edited by Half Man Half Wit; 05-23-2019 at 08:20 AM. |
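To make the point concrete, here is a Python sketch of the '11'-to-'10' rewriting specified above, plus the Twin Earth relabelling. The scan details (one position at a time, right to left, with replacements visible to later comparisons) are assumptions chosen so that the three example strings from the post come out as stated:

```python
def rewrite(bits):
    # scan right to left, replacing each '11' encountered with '10'
    s = list(bits)
    for i in range(len(s) - 2, -1, -1):
        if s[i] == '1' and s[i + 1] == '1':
            s[i + 1] = '0'
    return ''.join(s)

assert rewrite('0110011') == '0100010'
assert rewrite('1111111') == '1000000'
assert rewrite('0001100') == '0001000'

def relabel(bits):
    # the Twin Earth reading: every physical mark means the opposite digit
    return ''.join('1' if b == '0' else '0' for b in bits)

# Under the swapped reading, the very same input/output behaviour instantiates
# a different computation: '00' blocks become '01' instead of '11' becoming '10'.
assert relabel(rewrite(relabel('1001100'))) == '1011101'
```

One physical process, two computations, depending entirely on how the marks are read.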
#189
We don't have to "disprove" your claim any more than one has to "disprove" the existence of God. You are making a claim, but it has never been done. The onus of proof is on you.
#190
Quote:
Is that transformation always a computation regardless of the nature of the machinery that performed it? |
#191
Quote:
If you have a closed calculation machine, a so-called "black box", the black box has internal workings that in fact have an unambiguous process they follow and an unambiguous internal state, whether or not you know what it is. We know this to be the case because that's how things in the real world work. And your knowledge of the internal processes is irrelevant to their existence. Or put another incredibly obvious way, brains worked long before you thought to wonder how they did. So. Now. Consider this unambiguous process and unambiguous internal state. Because these things actually exist, once you have a black box process in a specific state, it will proceed to its conclusion in a specific way, based on the deterministic processes inside advancing it from one unambiguous internal state to the next. The dominos will fall in a specific order, each caused by the previous domino hitting them. And if you rewound time and ran it again, or made an exact copy and ran it simultaneously from the same starting state, it would proceed in exactly the same way, following the same steps to the same result. (Unless the system is designed to introduce randomness into the result, that is, but that's a distracting side case that's irrelevant to the discussion at hand. The process is still the same unambiguous process even if randomness perturbs it in a different direction with a different result. And I'm quite certain based on observation that randomness has a negligible effect on cognition anyway.) So. While you think that your example includes a Heisenberg uncertainty machine that has Schrödinger's internals which simultaneously implement f and f' and thus hold varying internal states at the same time, in actual, non-delusional fact, if you have a specific deterministic machine that has a specific internal state, that means that it *must* be in the middle of implementing either f or f', and not both. This remains true regardless of the fact that you can't tell which it's doing from eyeballing the output. Obviously. Your argument is entirely reliant on the counterfactual and extremely silly idea that things can't exist without you knowing about them. Sorry, no. The black box is entirely capable of having a specified internal process (be it f, f', or something else) without consulting you for your approval. Your argument is that the "interpretation" of an outside observer has some kind of magical impact on the function of a calculative process, despite there being no possible way that the interpreter's observation of the outside of the black box can impact what's going on inside it. Or at least that's what you've been repeatedly saying your argument is. I can only work with what you give me. Quote:
He then goes on to insist that he's not saying that consciousness is a magic soul, before clarifying that he's saying that physical matter has a magic soul that's called 'consciousness'. I'm sure he's a very smart fellow, but loads and loads of smart fellows believe in magic and souls and gods. Smart doesn't mean you can't have ideological beliefs that color and distort your thinking. So yeah. To the degree that IIT claims that physical matter has magical soul-inducing magic when arranged in the correct pattern to invoke the incantation, I understand it better than it does, because I recognize silliness when I see it. You think I'm misstating his position? First you have to "build the computer in the appropriate way, like a neuromorphic computer"... and then consciousness is magically summoned from within the physical matter as a result! But if you build the neuromorphic computer inside a simulation "it will be black inside", specifically because it doesn't have physical matter causing the magic. So take heart! You're not the only person making stupid nonsensical arguments. You're not alone in this world! |
#192
Quote:
P2: If an emulation is sufficiently detailed and complete, that emulation can exactly duplicate properties and behaviors of what it's emulating.
P3: It's possible to create a sufficiently detailed and complete emulation of the real world.
C1: It's possible to create an emulation that can exactly duplicate properties and behaviors of the real world. (P2 and P3)
C2: It's possible to create an emulation that can exactly duplicate cognition. (C1 and P1)
So there's my proof. It's logically valid, so the way to refute it is to attack the premises. Here's how that goes:
Refutation of P3: "Emulation can't simulate reality, and never will!"
Refutation of P2: "Emulation is impossible!"
Refutation of P1: "Cognition is magic! WOOO!"
Choose whichever you like. (Christof Koch chooses P1.) Last edited by begbert2; 05-23-2019 at 02:27 PM. Reason: typo |
#193
Quote:
What it's not, is the digitizing of the whole physical person a la Tron. I mean you could do that, but it really just amounts to destroying the original person as part of the process of scanning them for the information to make the digital copy. (And putting that copy in a fancy sci-fi outfit in the process.) Of course the Tron scenario allows for some confusion/obfuscation of whether some kind of immortal soul left over from the now-disintegrated person somehow locates and attaches itself to the simulated digital avatar, which honestly just seems rife with implementation problems. (Especially since they were just trying to copy an apple.) |
#194
It depends on what question you're asking. If you're concerned about whether a device is Turing equivalent, you need to understand what it's actually doing. But when computation is viewed simply as the output of a black box, it's always reducible to the mapping of a set of input symbols to a set of output symbols. So I take the view that any black box that deterministically produces such a mapping for all combinations of inputs has to be regarded as ipso facto computationally equivalent to any other that produces the same mapping, without reference to what's going on inside it. Of course, the mechanisms involved may be trivial, like a simple table lookup, that may not provide any insights into the nature of computation and may not be Turing equivalent.
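A small Python sketch of that black-box view, with assumed toy boxes: two devices with the same deterministic input-to-output mapping over all inputs, one computing the answer and one merely looking it up in a table, and therefore counted as computationally equivalent when viewed from the outside:

```python
def box_algorithm(a, b):
    # one box computes its answer
    return a ^ b

table = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}

def box_lookup(a, b):
    # the other box merely looks the answer up in a table
    return table[(a, b)]

inputs = [(a, b) for a in (0, 1) for b in (0, 1)]
# same deterministic input -> output mapping over all inputs:
assert all(box_algorithm(a, b) == box_lookup(a, b) for a, b in inputs)
print({i: box_lookup(*i) for i in inputs})
```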
#195
Quote:
Information, unlike mass, must be present in a simulation of something with that information. If you simulate telephone traffic, say, you don't need switch hardware but you do need the contents of the calls. This is simulation, not modeling, where you can describe the traffic mathematically without the information in them. That's information, not integrated information of course. I did read at your link but found it less than interesting. |
#196
Quote:
Quote:
I see a "blank space" where you provide your ruminations about CTM being somehow related to "computational modeling" because it's so egregiously wrong. Please note the following commentary from the Stanford Encyclopedia of Philosophy. They refer to the family of views I'm talking about here as classical CTM, or CCTM, to distinguish them from things like connectionist descriptions. CCTM is precisely what Putnam initially proposed and was then further developed into a mainstream theory at the forefront of cognitive science by Fodor (bolding mine): According to CCTM, the mind is a computational system similar in important respects to a Turing machine ... CCTM is not intended metaphorically. CCTM does not simply hold that the mind is like a computing system. CCTM holds that the mind literally is a computing system.It then goes on to describe Fodor's particular variant of CCTM: Fodor (1975, 1981, 1987, 1990, 1994, 2008) advocates a version of CCTM that accommodates systematicity and productivity much more satisfactorily [than Putnam's original formulation]. He shifts attention to the symbols manipulated during Turing-style computation.This is of course exactly correct. The prevalent view of CTM that was first advanced by Fodor and then became mainstream is that many cognitive processes consist of syntactic operations on symbols in just the manner of a Turing machine or a digital computer, and he further advanced the idea that these operations are a kind of "language of thought", sometimes called "mentalese". The proposition is that there is a literal correspondence with the operation of a computer program, and it has no relationship to your suggestions of "modeling" or of doing arithmetic in your head. Quote:
Quote:
Let me re-iterate one of my previous comments. Our disagreement seems to arise from your conflation of "computation" with "algorithm". The question of what a "computation" is, in the most fundamental sense, is quite a different question from what problem is being solved by the computation. Your obsession with the difference between your f and f' functions is, at its core, not a computational issue, but a class-of-problem issue. |
#197
Quote:
Of course you have to agree on the input and output symbols, and they must be consistent across computational systems. This doesn't seem to be a requirement for HMHW's view of interpretation. In other words, Lincoln was wrong - a horse does have five legs if you interpret the tail as a leg. |
#198
Quote:
It sounds like you are saying that if HMHW's box performs the transformation you listed (0110011 into 0100010) then that is considered a computation, right? Meaning that HMHW's box may not be a Turing machine, it may just be a circuit that performs that transformation, but regardless of how it arrives at the correct answer, the transformation is considered a computation, right? |
#199
Quote:
Quote:
The internal wiring is wholly inconsequential; all it needs to do is make the right lights light up when the switches are flipped. There are various ways to do so; if you feel it's important, just choose any one of them. Quote:
Quote:
Quote:
Quote:
Quote:
You'll probably want to argue that 'inside the simulation', objects are attracted by the black hole, thus, it has mass. For one, that's a quite strange thing to believe: it would entail that you could create some sort of pocket-dimension, with its own physics removed from ours, merely by virtue of shuffling around a few voltages; that it would be the case that, even though the black hole's mass has no effects in our dimension, there suddenly exists a separate realm where mass exists that has no connection to ours save for your computer screen. In any other situation, you'd call that 'magic'. Holding that the black hole in the simulation has mass is exactly the same thing as holding that the black hole I'm writing about has mass. The claim that computation creates consciousness is the claim that, whenever I'm writing 'John felt a pain in his hip', there is actually a felt pain somewhere, merely by virtue of me describing it. Because that's what a simulation is: an automated description. A computation is a chain of logical steps, equivalent to an argument, performed mechanically; it's no different from writing down the same argument in text. The next step in the computation follows from the previous one in just the same way as the next line in an argument follows from the prior one. Quote:
Quote:
Fodor indeed had heterodox views on the matter; but, while he's an important figure, computationalism isn't just what Fodor says it is. After all, it's the 'computational theory of the mind', not 'of some aspects of the mind'. Or, as your own cite from the SEP says, Quote:
Quote:
Quote:
Quote:
Because that's computation in the actual, real-world sense of the term, absent any half-digested SEP articles. My claim is nothing but: because I can use the system to compute f (or f'), the system computes f (or f'). There is nothing difficult about this, and it needs loops of motivated reasoning in order to make it into anything terribly complex or contentious. Quote:
So no, the algorithm has no bearing on whether the system implements f or f'. The reinterpretation of the symbolic vehicles it uses entails that any algorithm for computing one will be transformed into one for computing the other. Where it says 'If S12 = '1' Then ...' in one, it'll say 'If S12 = '0' Then ...' in the other, with both referring to the switch being in the 'up' position, say. |
#200
Quote:
You made it very explicit that your argument depends on a blatantly false assumption, and it's not misconstruing things to point out that the realities of the function of black boxes and internal processes are what show that your assumption is blatantly false. Quote:
Quote:
Quote:
I will readily concede that I don't see why you think interpretation is even slightly relevant to anything. The box itself isn't affected, and you can't prove that calculation is internally inconsistent just by eyeballing some output. (Especially not with the massive false assumption your argument seems to hinge on.) Quote:
Quote:
Quote: