Going back to my laptop - it is only performing one ontologically significant computation in order to display the symbols on its screen; we’ll call that f if you like. We know that is the one it is performing, because it is the one it is supposed to perform. This is the teleological function of my computer, the one it has been designed to do. Now you state that it is also performing f’, and that also seems to be true - but that computation does not affect the display on the screen at all; the two events are not causally linked.
Maybe, somewhere out there in an infinite universe, there is an exact replica of my laptop in which (purely by chance) it is f’ that causally affects the laptop screen in order to display exactly the same symbols - but this freak laptop, if it exists, is so far away that it is way beyond my personal light cone, probably more than a googolplex metres away. What possible relevance does the computation f’ have to anything in the real world?
Minds are self-interpreting. That’s kind of the whole point - self-awareness, and all. A mind interprets its own memories, its own data, its own internal states. The ‘function’ - the complex pattern of dominoes constantly bumping up against one another - is arranged in such a way that it examines its own data and interprets it itself.
This means that the fact that you can interpret its data and outputs sixteen thousand different ways is utterly irrelevant. It doesn’t matter at all. It’s completely inconsequential. It has no bearing on the discussion whatsoever.
Why? Because it doesn’t matter how you interpret the data; it matters how the data interprets the data. And the way the data interprets the data is determined by the arrangement of the data - and at any given moment there’s only one arrangement of the data. Which means there’s only one interpretation the mind is going to use, and that’s the only one that matters.
Now you’ll note that in the paragraph above I’m brazenly lumping both the stored data and the ‘running program state’ under the umbrella term ‘data’. This is because as far as the copying process is concerned, it is all data, and can be copied and exactly reproduced in a simulation. And when this happens the simulated mind will have the exact same interpretations of its own data as the original did - it will perceive itself the same way the original does, and react the same way the original does. It copies all the traits and behaviors and processes and interpretations of the original because it’s an exact copy.
Does the copy (or the original) do “computation”? The fuck if I know; I don’t know what you mean by the term. What I do know, though, is that if one does it so does the other, and vice versa. The two function identically. Including using the same identical operating processes and self-interpretation.
I’ve been away from the board for the past day due to events of actual life, but let me respond briefly to that last volley.
It might, were it not for the fact that taking those quotes out of context to rob them of their intended meaning merely reduces the discussion to a game of “I Win teh Internets!”.
I liked the Chalmers definition for directly contradicting your claim that an emergent property must have visible elements in the underlying components, a claim that I regarded as nonsense. Reading further in the Chalmers paper, however, I don’t agree with him on ALL his characterizations of emergent properties, particularly that what he calls “strong emergence” must be at odds with physicality. So my two statements in context are in no way contradictory, but you get three points and a cookie for highlighting them that way.
If you’re referring to your Mark Bedau quote, that wasn’t a cite, it was a cryptic one-liner lacking either context or link.
Nor am I “merely saying it”. I and others have provided arguments, you just don’t like them. And speaking of published literature, if you read the cognitive science literature you’ll find that CTM is a rather more substantive contribution to the science than a “theory of caloric”. See next comment.
If, in order to support your silly homunculus argument about computation, you have to characterize one of the most respected figures in modern cognitive science as a misguided crackpot for advancing CTM theory and having the audacity to disagree with you, I hope you realize what that does to your argument. This about sums it up:
The past thirty years have witnessed the rapid emergence and swift ascendency of a truly novel paradigm for understanding the mind. The paradigm is that of machine computation, and its influence upon the study of mind has already been both deep and far-reaching. A significant number of philosophers, psychologists, linguists, neuroscientists, and other professionals engaged in the study of cognition now proceed upon the assumption that cognitive processes are in some sense computational processes; and those philosophers, psychologists, and other researchers who do not proceed upon this assumption nonetheless acknowledge that computational theories are now in the mainstream of their disciplines. https://publishing.cdlib.org/ucpressebooks/view?docId=ft509nb368&chunk.id=d0e360&toc.depth=1&toc.id=d0e360&brand=ucpress
What I find disconcerting is that in order to support this argument, you have to discredit arguably one of the most important foundations of cognitive science, with which it is directly at odds.
Sure, but that’s just the point. The external agent in your switches and lights example is just assigning semantics to the symbols. The underlying computation is invariant regardless of his interpretation, as I keep saying. All semantic assignments that work describe the same computation, and if you dislike me constantly repeating that, another way of saying it is the self-evident fact that no computation is occurring outside the box, and the presence of your interpretive agent doesn’t change the box.
This is a complete misrepresentation of the significance of Turing’s insight. Its brilliance was due to the fact that it reduced the notion of “computation” (in the computer science sense) to the simplest and most general abstraction, stripping away all the irrelevancies, and allowed us to distinguish computational processes from non-computational ones. It “enables us to see exactly what computers are able to do” only in the very narrow sense of being state-driven automata that are capable of executing stored programs. It tells us absolutely nothing about the scope and the intrinsic limits of what those automata might be able to achieve in terms of, for example, problem-solving skills demonstrating intelligence at or beyond human levels, self-awareness, or creativity. As I said, philosophers like Dreyfus concluded in the 60s that computers would never be able to play better than a child’s level of chess, and AFAIK Searle is still going on about his Chinese Room argument proving that computational intelligence isn’t “real” intelligence. Turing machines have been known since 1936, but the debate about computational intelligence rages on in the cognitive and computer sciences.
My statement is a philosophical one, related to what I said just above, observing that a sufficient change in complexity enables new qualities (capabilities) not present in the simpler system. You’re focused on the Turing equivalence between simple computers and very powerful ones, while I’m making a pragmatic observation about their qualitative capabilities. As I just said above, Turing equivalence tells us absolutely nothing about a computer’s advanced intelligence-based skills. As computers and software technology grow increasingly more powerful, we are faced again and again with the situation that Dreyfus faced for the first time in the 1967 chess game with MacHack, that of essentially saying “I’m surprised that a computer could do this”. Sometimes even AI researchers are surprised (the venerable Marvin Minsky actually advised Richard Greenblatt, the author of MacHack, not to pursue the project because it was unlikely to be successful). Do you not see why this is relevant in any discussion about computational intelligence, or the possible evolution of machine consciousness?
So which one is the ‘ontologically significant’ one in my example above? What makes a computation ontologically significant?
The computation f’ is linked to the device in exactly the same way as f is. I’m not sure how you mean ‘causally linked’, because what causally determines whether certain lights light up is the way the switches are flipped, but there’s no relevant difference between the two.
The relevance is that I can use the device to compute f’, in exactly the same way as you can use it to compute f. Even at the same time, actually.
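To make that concrete, here’s a minimal Python sketch of the switches-and-lights box, assuming (purely for illustration) that it’s wired as a 2-bit adder; the names and encodings are invented, but the point is that one fixed physical mapping gets read off as two distinct functions over the naturals:

[code]
# One fixed physical mapping from switch states to light states (wired here as
# a toy 2-bit adder). The two 'computations' differ only in how the very same
# states are read off as numbers. Widths and encodings are made up for illustration.
from itertools import product

def box(sw):
    """Physical level: four switch booleans in, three light booleans out."""
    a = (sw[0] << 1) | sw[1]
    b = (sw[2] << 1) | sw[3]
    s = a + b
    return ((s >> 2) & 1, (s >> 1) & 1, s & 1)

def read(bits, up_means_one=True):
    """Interpretation step: turn a tuple of switch/light states into a number."""
    bits = bits if up_means_one else tuple(1 - b for b in bits)
    return sum(bit << i for i, bit in enumerate(reversed(bits)))

for sw in product((0, 1), repeat=4):
    lights = box(sw)  # one and the same physical event for both readings below
    # Interpretation 1 ('up' = 1): the table realises f(x, y) = x + y
    x1, y1, out1 = read(sw[:2]), read(sw[2:]), read(lights)
    # Interpretation 2 ('up' = 0): the same table realises f'(x, y) = x + y + 1
    x2, y2, out2 = read(sw[:2], False), read(sw[2:], False), read(lights, False)
    print(f"f: {x1}+{y1}={out1}    f': ({x2},{y2})->{out2}")
[/code]

Under the first reading the table is x + y; under the inverted reading the very same table comes out as x + y + 1 - a different function over the naturals, obtained without touching the box at all.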
No matter that nobody knows how such self-interpretation could possibly work, this just doesn’t address the issue at all (again). On computationalism, whether a system instantiates a mind depends on what computation it implements. If there’s no computation, there’s no mind to self-interpret, or conjure up pixies, or what have you. So what computation a system performs needs to be definite before there even is a mind at all. But my argument, if it’s right, shows that there just isn’t any fact of the matter regarding what computation a system performs.
The contradiction (as Chalmers highlights) is that the sort of emergence (that doesn’t follow from the fundamental-level properties) you require is in contradiction to both computationalism and physicalism, so you simply can’t appeal to both in your explanation of the mind without being inconsistent. You want the emergent properties to not follow from the fundamental ones? Then you can’t hold on to computationalism. It’s that simple.
Sorry, I thought I had given the link earlier (the Chalmers paper, however, cites it—approvingly, I might add).
I like how you get all in a huff about my disagreement with Fodor (whom I never characterized as a crackpot or misguided; at the time, caloric was a perfectly respectable paradigm for the explanation of the movement of heat, simply reflecting an incomplete scientific knowledge, just like the computational theory is now), but yourself think nothing about essentially painting Dreyfus as a reactionary idiot. So I guess the main determining factor in whether or not a dead philosopher is worthy of deference is whether you agree with them?
Exactly. You keep merely saying that, without any sort of argument whatsoever. So then, at least tell me, which computation does my device implement? Is it f or f’? Is it neither? If so, then how come I can use it to compute the sum of two numbers? If both describe the same computation, then what is it that’s being computed? What is that computation? How, in particular, does it differ from merely the evolution of the box as a physical system? You might recall, though one could think you’ve so far just somehow missed it despite my repeated attempts to point it out, that in that case, computationalism just collapses to identity physicalism.
Exactly, again. Which means that there’s no fact of the matter regarding what the box computes.
Turing’s insight was precisely that one could reduce the computation of anything to a few simple rules on symbol-manipulation. Anything a computer can do, and ever will do, reduces to such rules; if you know these rules, you can, by rote repeated application, duplicate anything that the computer does. There is thus a clear and simple story from a computer’s lower-level operations to its gross behavior. That removes any claim of qualitative novelty for computers.
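As a rough illustration of what ‘rote repeated application’ of such rules looks like, here’s a toy Turing-machine interpreter in Python; the rule table (which adds two numbers written in unary) is invented for the example, but the mechanism is the standard one:

[code]
# A bare-bones Turing machine: a handful of (state, symbol) -> (write, move,
# next-state) rules, applied by rote until the machine halts.
def run(rules, tape, state="q0", blank="_"):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]   # look up the rule...
        cells[head] = write                           # ...write a symbol...
        head += 1 if move == "R" else -1              # ...move the head. That's all.
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table for unary addition: turn '+' into '1', then erase the final '1'.
rules = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "+"): ("1", "R", "q1"),
    ("q1", "1"): ("1", "R", "q1"),
    ("q1", "_"): ("_", "L", "q2"),
    ("q2", "1"): ("_", "R", "halt"),
}

print(run(rules, "111+11"))   # -> '11111'  (3 + 2 = 5)
[/code]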
Again, I’m not the only one thinking that. ‘You could do it by computer’ is often used as the definition for weak emergence that doesn’t introduce anything novel whatsoever, because it’s just so blindingly obvious how the large-scale phenomena follow from the lower-level ones in the computer’s case.
That doesn’t mean computers can’t surprise you. Even though the story of how they do what they do is conceptually simple, it can be a bit lengthy, not to mention boring, to actually follow. But surprise is no criterion for qualitative novelty. I have been surprised by my milk boiling over, but that doesn’t mean that a qualitatively new feature of the milk emerged.
It tells us exactly what we need to know: how these skills derive from the lower level properties of the computer. This is what motivated the Church-Turing thesis: Turing showed how simple rote application of rules leads to computing a large class of functions; thus, one may reasonably conjecture that they suffice to compute anything that can be computed at all. Hence, there is a strong heft to the claim that computation emerges from these lower-level symbol manipulations.
You, on the other hand, have provided no such basis for your claim that consciousness emerges in the same way. Indeed, you claim that no basis such as that can be given, because emergence basically magically introduces genuine novelty. That you give computers as an example of that, where it’s exactly the case that the emergent properties have ‘visible elements in the underlying components’, is at the very least ironic.
Are you, or are you not, making the following argument:
You have something going on in your head. Nobody knows how it works.
“Computation” (whatever that means), is necessary for you to have a mind. If what’s going on in there isn’t “computation”, then it doesn’t instantiate a mind and you don’t have a mind.
Not only does there have to be “computation”, but it has to be “definite”. Having a materialist causal process that definitely only has one eigenstate is not sufficient to qualify as “definite” - apparently it also needs to be possible to unambiguously reverse-engineer the internal mechanisms from the outputs alone.
Your argument is that the process going on inside your head is in fact not “definite”, and thus it’s not a qualifying sort of “computation”, and thus you haven’t got a mind. QED and such.
Is that a fair restatement of your position?
As a side note, I agree that the mental calculation isn’t “definite”, and I think it could be proven that no calculations whatsoever are “definite”. For every black box you might examine, the function could be either “f” or “f but it also is quietly recording its output to an internal log that is never outputted or referred to.” You cannot ever prove that this is not happening inside the black box, so no calculation, process, or anything else is “definite”.
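As a made-up toy example of that point: here are two boxes that are indistinguishable from their outputs alone, one of which quietly keeps an internal log that it never outputs or refers to:

[code]
# Two boxes with identical outward behaviour; from the outputs you can't tell
# which one you've got. Names are invented for illustration.
class PlainAdder:
    def compute(self, a, b):
        return a + b

class LoggingAdder:
    def __init__(self):
        self._log = []            # internal log, never exposed or consulted
    def compute(self, a, b):
        result = a + b
        self._log.append(result)  # the only difference, invisible from outside
        return result

for adder in (PlainAdder(), LoggingAdder()):
    print([adder.compute(a, b) for a, b in [(1, 2), (3, 4)]])   # same output either way
[/code]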
I’m not. I’m making the argument I’ve repeated here more often than I care to count, and won’t repeat again. If you don’t follow it at some point, I’m happy to help.
I was sentence-by-sentence restating the post to which I was replying. Which sentence did I restate incorrectly?
The problem with your argument, in case you weren’t noticing my subtle reductio ad absurdum, is that to whatever degree your argument applies to theoretical machine intelligences, it also applies equally to human brains. I specifically mentioned your human brain in case you’re a solipsist, but the hard truth is that you’re in effect arguing that no minds are possible anywhere, ever.
You are seriously throwing out the baby with the bathwater here.
Only if you believe Chalmers. I proposed above that novel emergent properties can develop in the interconnections and/or states of lower-level components that were not found in any form in the components themselves.
I note that you’re avoiding my cited quote about the important role of computational theory in cognitive science. Also, Dreyfus developed a very bad reputation in the AI and cognitive science communities early on that he was never able to shake, despite apparently some level of vindication of a few of his ideas in later years. I can tell you first-hand that contempt for Dreyfus is still prevalent in those communities.
What is that computation, you ask? Let’s ask a hypothetical intelligent alien who happens to know nothing about number systems. The alien’s correct answer would be: it’s performing the computation that produces the described pattern of lights in response to switch inputs. How do we know that this is a “computation” at all and not just random gibberish? Because it exhibits what Turing called the determinacy condition: for any switch input, there is deterministically a corresponding output pattern. Whether we choose to call it a binary adder or the alien calls it a wamblefetzer is a matter of nomenclature and, obviously, a distinction of utility.
Note that in defining the Turing machine, Turing himself was untroubled by any notion of an external interpreter. Indeed he explicitly made the distinction between this type of machine exhibiting the determinacy condition, which he called an automatic machine or “a-machine”, and the choice machine in which an external agent specified the next state. But your box is an a-machine, whose computations involve no such external agent.
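Sketching that distinction with an invented rule table (just an illustration, not Turing’s own notation): in the a-machine the current (state, symbol) pair fixes the next step outright, while the choice machine has to defer to an external operator:

[code]
# a-machine: exactly one rule per (state, symbol) pair, so the next move is
# fully determined by the machine's own configuration.
a_table = {("q0", "0"): ("1", "R", "q1"),
           ("q0", "1"): ("0", "R", "q0")}

def a_machine_step(state, symbol):
    return a_table[(state, symbol)]            # nobody needs to be consulted

# choice machine: some configurations have several admissible moves, and an
# external operator picks between them.
c_table = {("q0", "0"): [("1", "R", "q1"), ("0", "L", "q2")]}

def c_machine_step(state, symbol, ask_operator):
    return ask_operator(c_table[(state, symbol)])

print(a_machine_step("q0", "0"))                         # -> ('1', 'R', 'q1'), always
print(c_machine_step("q0", "0", lambda opts: opts[1]))   # -> whatever the operator picks
[/code]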
“You could do it by computer” as a synonym for “trivial” sounds like something Dreyfus would have said!
Of course “surprise” by itself isn’t a criterion for much of anything, but when properties that we explicitly denied could emerge from certain processes - like high intelligence or strong problem-solving skills - actually do emerge, that does mean we have to re-evaluate our beliefs and assumptions. It also means that those properties were not observed in the underlying processes, or at least were in no way obvious.
None of us are in a position to conclusively explain consciousness. But it certainly seems plausible to me that it’s nothing more than our perception of the world turned inward on itself, so that any being sentient enough to have thoughtful perceptions about the world will possess a corresponding level of self-awareness. I suspect that at some point in the future when we have general-purpose AI whose awareness of the world includes awareness of self, we’ll get into furious semantic battles over whether machine consciousness is “real” consciousness. It will certainly be different from ours because it won’t have the influences of biological senses or instincts. Ultimately Marvin Minsky may be proved right in considering the whole question a relative non-issue in the context of machine intelligence.
Here is the logical problem, though. Since we do not have a definitive understanding of what self-awareness is, we can only interpret the evidence. That a machine could exhibit self-awareness is only superficial.
I have seen earthworms exhibit strong indications of self-awareness, which leads me to conclude that this thing I have that is the “I” inside is probably closely related to the fundamental survival instinct of living things. It may be structurally different among the variety of living things, but its evolutionary contribution should be more than obvious.
An elaborate computer may be able to produce all of the apparent indications, but until we have a firm grasp on what that nebulous concept means, we cannot be absolutely certain that it genuinely possesses a property that we know to be self-awareness.
In fact, based on what I have observed with respect to other creatures, it is not at all obvious that self-awareness is an emergent property of intelligence. More complex programming or more elaborate system design may make it seem convincing that a device is self-aware, but it may just be an astoundingly good simulation.
Hell, my HP33C might have had some rudimentary form of self-awareness that was not very similar to mine or to the earthworm’s but nonetheless present. Perhaps we ought to be more circumspect when throwing away that old cell phone because it could have had a soul, of sorts.
Every computer with an operating system has at least some form of “self-awareness” in at least the most basic, literal sense of the term. The whole point of an operating system is to keep track of and manage what the computer itself is doing.
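In that deflationary sense, even a few lines of Python can “keep track of” the machine running them - this is just standard-library bookkeeping, not a claim about awareness in any interesting sense:

[code]
# A process reporting facts about its own execution, courtesy of the OS.
import os, sys, time, threading

print("my process id:", os.getpid())
print("CPU time I have used so far:", time.process_time(), "seconds")
print("threads I am currently running:", threading.active_count())
print("the interpreter running me:", sys.version.split()[0])
[/code]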
The reason we throw away phones isn’t because they’re not self-aware - it’s because we don’t care whether or not they are because they’re not human. It’s the same reason we’re okay with eating beef.
Given the lower level properties and their arrangement, do the emergent properties follow necessarily? If I fix the base facts, do I fix the emergent facts, or are there more facts left to fix?
If the former, you don’t get the sort of emergence you claim, because then, the emergent facts follow from the base facts—and for every emergent fact, you can state the precise way it follows from the base facts. Thus, that’s the story you need to at least provide some plausibility argument for in order to have your claim of the emergence of consciousness be contentful.
If the latter, then the base facts don’t determine the emergent facts. Then, of course, consciousness just might come 'round at some point. Somehow, for basically no reason. But also, it’s no longer the case that fixing the physical facts (‘particles, fields, and their arrangement’) fixes all the facts, and physicalism is wrong.
It’s not a matter of believing Chalmers or not. He’s not stating this out of the blue; he’s just clearly articulating what the options are. There’s no middle ground. You either eat the cake, or keep it.
I’ve never doubted the role of computational theory in cognitive science. The problem is just that the model doesn’t imply the character of the thing modeled, so computationalism just amounts to a category error. Just because you can model the solar system with an orrery doesn’t mean gravity works via wires and gears.
And really, you shouldn’t start accusing others of avoiding things.
And that’s supposed to be arguing for what, exactly? Because computer science people have sour grapes with Dreyfus, you can’t criticize computationalism…? Seriously, I can’t figure out what your aim is in bringing this up again and again.
OK, so it’s actually just the physical evolution that you want to call ‘computation’ for some reason. And me computing sums using the device isn’t computation. In that case, as I pointed out, you’re not defending computationalism, but identity theory physicalism. Combined with your notion of strong-but-not-really-that-strong emergence, you don’t really have any consistent position to offer at all.
Well, thanks!
It means we were wrong, that’s it. It’s hard to see how chess, or Jeopardy, translates into simple Boolean logical rules. But the fact that you can do it by computer simply demonstrates that it does; that choosing what move to make next in a game of chess is equivalent to tracking through some ungodly-huge Boolean formula that one could explicitly write down.
Nobody can be in any doubt about that. There’s no mystery about how chess-playing emerges in computers, and that’s precisely because we understand how the lower-level properties lead to the chess-playing behavior. Getting a computer to play chess means we understand how that sort of behavior comes about, whereas even if a computer were conscious, we still wouldn’t have the faintest clue about how consciousness comes about. We have no idea how consciousness reduces to some Boolean formula, some program, some algorithm. Blithely positing that ‘it emerges’ simply sweeps our ignorance under the rug, where the only thing we can do that has any hope of getting us ahead on the problem is meeting it head-on.
That’s where we need to re-evaluate our beliefs and assumptions.
And seriously, the fact that you have repeatedly refrained from responding to the fact that your argument disproves your own mind seems telling to me - specifically it’s telling me that you have no refutation for the fact your argument is disproven but you’re carrying on anyway because you’d rather keep arguing it because arguing is fun.
That is really stretching the meaning of the concept, though. The operating system is an automoron that provides an environment for useful processes to run in. It really has zero understanding of what the processes are doing, only that they have specific needs that have to be met and that shit has to be cleaned up when they gracefully exit or explode all over the place.
It is more analogous to involuntary bodily functions like the beating heart, filtering liver and rippling GI tract that keep the biological creature operative. Self-awareness seems to be some sort of instinct-based adjunct to the simple or complex neurological function of the beast, and it is not even clear whether it enhances that functionality in any way other than to supply impetus to continue.
I didn’t, because there is no way, shape or form in which anybody could interpret my argument as having any such implication.
So fine, one last time:
[ol]
[li]Computationalism requires that, in order to give rise to a mind M, a brain B needs to implement a certain computation C[sub]M[/sub].[/li]
[li]A brain is a physical system.[/li]
[li]A computation is a formal object that we can take to be, without loss of generality, defined by a partial function over natural numbers (a definition that is equivalent to the one via Turing machines).[/li]
[li]B must implement C[sub]M[/sub] in an objective, mind-independent way, as otherwise, whether or not B implements C[sub]M[/sub] depends on whether B implements C[sub]M[/sub], and we lapse into vicious circularity.[/li]
[li]Binary addition is such a function. So is my function f’.[/li]
[li]In order for B to implement C[sub]M[/sub], it must more broadly be possible for a given physical system D to implement some certain computation C.[/li]
[li]Implementing a computation C by means of D entails that one can use D to compute C (just as, for example, one uses a pocket calculator to calculate, or a chess computer to compute chess moves).[/li]
[li]It is possible to specify a system D such that one can use it to implement f (binary addition) and f’.[/li]
[li]f and f’ are distinct computations (follows from the definition of computation above: they’re different functions over the natural numbers).[/li]
[li]The difference between D implementing f and D implementing f’ is one of interpretation: the states of D, or aspects thereof, are interpreted as symbolic vehicles whose semantic content pertains to the formal specification of either f or f’.[/li]
[li]The process applied to use D as implementing both f and f’ is completely general, and can be performed on every system claimed to perform some computation C in order to use it to compute C’.[/li]
[li]Interpretation is a mental faculty (as in, a faculty our minds do have, not a property that necessarily only minds can have).[/li]
[li]Whether a system D implements a computation is thus dependent on mental faculties.[/li]
[li]All mental faculties, on computationalism, are due to some computation C[sub]M[/sub].[/li]
[li]Whether D implements a computation is thus due to the particulars of C[sub]M[/sub].[/li]
[li]By specification, whether B implements C[sub]M[/sub] is thus due to the particulars of C[sub]M[/sub].[/li]
[li]Hence, computationalism lapses into vicious circularity.[/li]
[/ol]
There. I hope that makes things clear. It should, in particular, be obvious now that nothing about the impossibility of minds in general is implied; merely that minds, which clearly do exist, are not computational in nature, that is, are not produced by the brain implementing a certain computation.
Now, I believe you won’t give up on the ‘but C[sub]M[/sub] just interprets itself’-issue so easily. In that case, I have good news for you, because you can prove the existence of god!
For, god is an omnipotent being. Omnipotence entails the possibility to create gods. Consequently, god may just create him/hir/herself. And that’s it!
Logically, this is perfectly analogous to C[sub]M[/sub] interpreting B as computing C[sub]M[/sub].
It doesn’t appear possible with the technology we currently have. We have really kick-ass computers, but they are not self-aware. So, there is no reason to assume that your “consciousness” would survive death. Even if we had the technology necessary to convert RNA based memory into 1’s and 0’s, it would just be data. It wouldn’t be a person.
“Ghost in the Shell” had science approaching it in a different way. They perfected something they called a “Cerebral Salvage”, which was implanting a human brain in a completely artificial body and integrating them so that they worked together just like our brains and organic bodies work together.
It really comes down to what is actually meant by self-awareness - I generally think of it as meaning that the entity in question is continuously aware of its ongoing calculative processes, and has the capability to interpret and make decisions based on them. The sticking point, of course, is the “aware” part; we humans have an ‘in the driver’s seat’ perspective on the world that we’re hard-pressed to explain or even describe. I find myself wondering whether even a simple computer program that assesses inputs, outputs, and its own data storage might, from its own perspective, have a ‘driver’s seat view’. I mean, we do program them to - they have a main execution loop that is constantly running back and forth through its event handlers and dealing with inputs as they come. The only thing lacking is that the main loop doesn’t have access to the data handled by the event handlers - but at what level would the ‘driver’s seat’ manifest? One could theorize that the self-contained execution loop gives rise to the ‘driver’s seat view’ but that the ‘view’ includes the data handlers as well.
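For what it’s worth, here’s a toy sketch (all names invented) of the kind of loop described above: a main loop dispatching events to handlers while also keeping, and consulting, a record of its own recent activity:

[code]
# A miniature event loop that handles inputs as they come and keeps a trace of
# what it has been doing - an illustration of the architecture, nothing more.
from collections import deque

events = deque([("temp", 21), ("temp", 23), ("button", "pressed")])
history = []                                   # the loop's record of its own activity

handlers = {
    "temp":   lambda value: f"noted temperature {value}",
    "button": lambda value: f"reacted to button being {value}",
}

while events:
    kind, value = events.popleft()
    result = handlers[kind](value)             # deal with the input as it comes...
    history.append(result)                     # ...and note having done so
    print("just did:", result)

print("what I have been doing lately:", history[-2:])   # the loop inspecting its own trace
[/code]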
This is, of course, all frivolous theorizing, but one thing that stands out to me is that I’ve never heard any other explanation for where or how that ‘driver’s seat view’ originates. I mean sure, there’s bullshit about souls, but that just kicks the can down the road - how do souls do it, mechanically speaking? Whether material or spiritual, the process is being carried out somehow.
And now, on to attempting to see if HMHW’s explanation makes some sense to poor old simple me, and to see if it accomplishes the literally impossible and draws a distinction between the function of a physical brain and its functionally exact copy.
The way you frame this already presupposes a speculative and contentious particular view of consciousness, known as functionalism. But functionalism, while it has a large following, is hardly the only game in town when it comes to consciousness; even the straightforwardly physicalist account isn’t functionalist. So on most current ideas of how consciousness works, that it’s the function of a physical brain is simply false.
I think that both you and wolfpup should take a look at Integrated Information Theory, which posits that consciousness is either due to or constituted by the information distributed among the parts of a system in a certain way. In a simplified sense, integrated information can be thought of as the amount of information the whole of a system has over and above that which is locked in its parts. IIT then postulates that this is what consciousness either correlates with or comes down to.
Now, I think in doing so, IIT is massively begging the question. But that doesn’t matter right now. What’s important is that IIT gives a scientifically respectable reductive account of consciousness—it shows us exactly how (what it purports to amount to) consciousness emerges from the simpler parts, and gives us objective criteria for its presence. So, IIT provides the sort of story that’s needed to make wolfpup’s emergence reasonable.
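To make the slogan “information the whole has over and above its parts” a bit more concrete, here’s a crude toy in Python. It computes total correlation for two binary units, which is emphatically not IIT’s actual Φ, just an illustration of the whole-versus-parts idea:

[code]
# Compare the summed entropies of the parts with the entropy of the whole.
# A positive difference means the joint state carries information that isn't
# 'locked in' either part on its own.
from math import log2

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def toy_integration(joint):
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    return entropy(pa) + entropy(pb) - entropy(joint)

independent = {(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}
coupled     = {(0, 0): .5,  (1, 1): .5}        # the two units always agree

print(toy_integration(independent))   # 0.0 -> the whole is just its parts
print(toy_integration(coupled))       # 1.0 -> one bit beyond the parts
[/code]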
Furthermore, on IIT, consciousness isn’t a functional property of the brain. It’s a property relating to the correlations between the brain’s parts, or regions. So functionalism isn’t really the sort of obvious truth begbert2 seems to take it to be, either.
More interesting, perhaps, is that on IIT, computationalism comes out straightforwardly false. While a brain has a high degree of integrated information, the typical computer, due to its modular architecture, has very little to none of it. So a computer implementing the simulation of a brain won’t lead to conscious experience, even if the brain it simulates does.
This suffices to falsify many of the seemingly ‘obvious’ claims in this thread and shows that other options are possible.
[QUOTE]
[ul][li]The difference between D implementing f and D implementing f’ is one of interpretation: the states of D, or aspects thereof, are interpreted as symbolic vehicles whose semantic content pertains to the formal specification of either f or f’.[/li][/ul]
[/QUOTE]
This is nonsense. I’m a computer programmer, and there is a literal, practical, real difference between implementing f and f’. This is true regardless of whether the outputs are the same for a given input. An obvious example is sorting; there are numerous different algorithms all of which produce the same output: a sorted list. Quicksort, Mergesort, Insertion Sort, Bubble Sort, Stoogesort. However they differ internally quite a bit, and operate quite differently despite producing the same output, going through states that are entirely and objectively distinct from one another. The differences between them are NOT merely a matter of ‘interpretation’.
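A minimal illustration of that point, with two simple sorts instrumented to record their intermediate states (the input and the snapshot scheme are arbitrary):

[code]
# Same input, same output - but objectively different sequences of internal states.
def bubble_sort(xs):
    xs, trace = list(xs), []
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
                trace.append(tuple(xs))          # snapshot after every swap
    return xs, trace

def insertion_sort(xs):
    xs, trace = list(xs), []
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
        trace.append(tuple(xs))                  # snapshot after every insertion
    return xs, trace

out1, trace1 = bubble_sort([3, 2, 1])
out2, trace2 = insertion_sort([3, 2, 1])
print(out1 == out2)        # True: identical input-output behaviour
print(trace1)              # [(2, 3, 1), (2, 1, 3), (1, 2, 3)]
print(trace2)              # [(2, 3, 1), (1, 2, 3)] - a different path to the same place
[/code]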
[QUOTE]
[ul][li]By specification, whether B implements C[sub]M[/sub] is thus due to the particulars of C[sub]M[/sub].[/li]
[li]Hence, computationalism lapses into vicious circularity.[/li][/ul]
[/QUOTE]
I’m not seeing any circularity, vicious or not. You’re basically saying that if you reach into a bag of red and blue balls and happen to pull out a red one, the fact that it’s red (as opposed to being blue) is due to the fact that it’s a red one. It’s not a particularly helpful observation, but it’s not circular in any sort of problematic way.
Or to put it in terms of computational processes, if you have two physically identical machines, one of which is running Quicksort and one of which is running Stoogesort, then the only thing that is making the Quicksort one the Quicksort one is the fact that it’s running Quicksort. It is distinct from the Stoogesort one (among other things it’s a lot faster), but the difference in what it is is indeed a result of it being what it is.
None of this is self-contradictory.
And honestly, even if you were right about computationalism being impossible (which you’re not), that doesn’t mean that brains can’t be copied to computers - it would just mean that it’s equally impossible for existing computers to be computational! Which would mean that existing computers are still exactly as capable as brains are to support minds, because none of them are the thing that’s impossible. Which makes sense, because existing computers aren’t impossible and thus if you prove that something is impossible to do, they’re not doing it.
Your argument (if it worked, which it doesn’t) doesn’t just disprove minds being computational - there’s nothing about your argument that’s specific to mind. It just proves that nothing is computational! Not brains, not computers, nothing! (If the argument worked, which it doesn’t.)
It isn’t equivalent at all, of course. Your god argument is trying to pull itself up by its bootstraps; the god in question doesn’t even exist until it exercises its powers. The brainstate, on the other hand, does exist. Cognition doesn’t predate the brainstate or cause the brainstate; the brainstate exists in the physical or simulated realm, supported by the physical or simulated physics of the world it exists in. Cognition is the byproduct of the ongoing brain state - the same way that sound is the byproduct of an ongoing vibration in a speaker’s membranes.
None of this is analogous to nothing creating something from nothing, none of it is circular, and none of it is self-contradictory.
It’s referring to various studies that show you can take RNA from an animal that has learned a task, inject it into a different animal (same species) and successfully test for the trained task in the second animal.
There are other studies that don’t specifically say anything about RNA, but have shown the ability of isolated individual Purkinje cells to learn temporal sequences, which means our biology is using multiple methods for learning.