Downloading Your Consciousness Just Before Death.

Will it be held against me if I don’t have time right this moment to read the whole thread but want to ask a question that might have been answered already? Here goes: define exactly what you mean by downloading my consciousness, or I simply have no way to reply.

Thanks.

And yet, it’s literally what happens, as I showed by example! Of course, that example appears to be, unfortunately, invisible to you, or else I can’t really explain your stubborn refusal to just go look at it.

Again, none of this has any relation to my argument.

Well, it at least demonstrates admirable confidence that you think you understand IIT better than its founders, with nothing but a cursory glance!

Regardless, this is not my interpretation of IIT, but one of its core points. Take it from Christof Koch:

Nowhere. But given the base configuration, it’s completely clear how one could teach it Go: the teaching changes its base configuration (altering connection weights via backpropagation or some similar process), and afterwards the new base configuration in every case forms a sufficient explanation of AlphaGo’s Go-playing capacity, without any need to appeal to emergence whatsoever.

You can prove exactly what sort of tasks a neural network is capable of learning (arbitrary functions, basically), you can, at every instant, tell exactly what happens at the fundamental level in order for that learning to happen, and you can tell exactly how it performs at a given task solely by considering its low-level functioning. This is an exact counterexample to the claims you’re making. For a good explanation of the process, you might be interested in the ‘Deep Learning’ series by 3Blue1Brown.
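
To make the weight-update point concrete, here’s a toy sketch (nothing like AlphaGo’s actual training setup, just an illustration I’m making up): a one-weight-plus-bias ‘network’ learns a mapping purely by nudging its base configuration along the gradient, and afterwards that configuration alone accounts for everything it can do.

```python
# Toy illustration (not AlphaGo): a one-weight-plus-bias "network" learns a mapping
# purely by nudging its base configuration (w, b) along the gradient. Afterwards,
# (w, b) alone suffices to explain everything the model can do.
data = [(x, 2 * x + 1) for x in range(-5, 6)]   # target behaviour: y = 2x + 1
w, b = 0.0, 0.0                                 # initial base configuration
lr = 0.01                                       # learning rate

for epoch in range(2000):
    for x, y in data:
        err = (w * x + b) - y                   # prediction error on this sample
        w -= lr * err * x                       # gradient step on the weight
        b -= lr * err                           # gradient step on the bias

print(f"learned configuration: w = {w:.3f}, b = {b:.3f}")   # approaches w = 2, b = 1
print("prediction for x = 7:", w * 7 + b)       # ~15, fixed entirely by (w, b)
```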

The people who built AlphaGo didn’t just do it for fun, to see how it would do. They knew, qualitatively, what was going to happen (although I’d guess they weren’t exactly sure how good it was going to get), and the fact that they could have this confidence stands in direct opposition to your claims. Nobody just builds a machine to see what’s going to happen; they build one precisely because their understanding of the components lets them predict, with pretty good confidence, what it will do. Sure, there’s no absolute guarantee; but as you said, surprise isn’t a sufficient criterion for emergence. Sometimes a bridge collapses to everybody’s surprise; that doesn’t entail that bridges have any novel qualitative features over and above those of bricks, cement, and steel.

I have also emphasized that I think computational modeling is effective and valuable, which you conveniently missed. The one claim that I take issue with is that the brain is wholly computational in nature, and that, in particular, consciousness is a product of computation. That’s however a claim that cognitive science can proceed without making, and whose truth has no bearing on its successes so far.

You’ve missed it at least two times now, and you’ll probably miss it a third time, but again: an orrery is a helpful instrument to model the solar system, and one might get it into one’s head that the solar system itself must be some giant world-machine run on springs and gears; but the usefulness of the orrery is completely independent of the falsity of that assertion.

Even Fodor, as you noted, didn’t believe the mind is wholly computational. That’s the whole point of The Mind Doesn’t Work That Way: computational theory is a ‘large fragment’ of the truth, but doesn’t suffice to tell the whole story (in particular, abduction, I think, is one thing Fodor holds the mind can do that computers can’t). So leaning on Fodor to support your assertion that computationalism is ‘the only game in town’ isn’t a great strategy.

I’m not going to defend Dreyfus here, but you have been given a seriously one-sided view. Dreyfus was an early proponent of the embodied-cognition views that are increasingly gaining popularity, and much of his critique of GOFAI is now acknowledged even by AI researchers to have largely been on point.

It’s not a premise, it’s a conclusion. I’ve asked you a couple of questions about the connection between the fundamental-level facts and the emergent facts in my previous post, which you neglected to answer, because the answers expose what should really be clear by now: either the emergent properties are entailed by the base properties, in which case the sort of emergence you claim consciousness to have doesn’t exist, and I’m right in arguing that you owe us some sort of inkling as to how consciousness may emerge from lower-level facts, as for instance given in IIT (see my example above); or there is genuine novelty in emergence, but then specifying the fundamental, physical facts doesn’t suffice to fix all the facts about the universe, and physicalism is wrong.

Which is a separate issue from the fact that such emergence, obviously, doesn’t occur in computers, which are the poster children of reducibility.

I’m still waiting for you to tell me what my example system computes, by the way. I mean, this is usually a question with a simple answer, or so I’m told: a calculator computes arithmetical functions; a chess computer chess moves. So why’s it so hard in this case? Because, of course, using the same standard you use in everyday computations will force you to admit that it’s just as right to say that the device computes f as that it computes f’. And there, to any reasonable degree, the story ends.

You can be aware of your environment without being self-aware. Besides sleepwalking, an example is a dream incorporating sounds from the surroundings.
Or like any animal.

I’d figure that an interrupt system is more likely. If you touch a hot stove, I don’t think your brain is polling your nerve endings. They interrupt your thoughts - while also causing involuntary actions, just as interrupts can do.
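
As a rough analogy (no claim about neurophysiology, just the software distinction I have in mind), compare a polling loop with a registered handler that the event source itself invokes:

```python
# A rough analogy only: polling a sensor versus being interrupted by it.

# Polling: the "brain" spends its time repeatedly asking whether anything happened.
def polling_brain(sensor):
    while not sensor["hot"]:
        pass                              # busy-checking; everything else waits
    print("polling: ouch")

polling_brain({"hot": True})              # only notices because it happened to ask

# Interrupt-style: nothing is checked; the event source invokes registered
# handlers the moment it fires, preempting whatever else was going on.
handlers = []

def register(handler):
    handlers.append(handler)

def touch_hot_stove():
    for handler in handlers:              # the "nerve ending" pushes the event
        handler()

register(lambda: print("interrupt: ouch, plus the involuntary withdrawal"))
touch_hot_stove()
```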

Computers have modular functional units. So does the brain. But that doesn’t mean the information is modular. In fact, memory hierarchies are designed so that their hierarchical nature is invisible to the program, except perhaps with regard to access time. Thus, information integration inside a modular computer system is not necessarily modular.

I’m not a proponent of IIT, but, as far as I understand it, the issue is this: you can calculate (approximately, at least) the integrated information between the parts of a physical system (what counts as the system’s ‘parts’ is itself determined via a minimization in the course of the calculation: essentially, you split the system up such that the integrated information across the split is minimal), essentially computing a quantity that’s somewhat similar to mutual information. For a brain, that gets you a high number; for a computer, a low one.
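
To give a flavor of the kind of calculation I mean, here’s a toy sketch in the spirit of the idea (emphatically not the actual IIT formalism, which defines Φ over cause-effect structure and is far more involved): take the minimum, over all ways of cutting a system in two, of the mutual information between the parts. A tightly coupled system scores high; a system that splits cleanly into independent modules scores about zero.

```python
# A toy in the spirit of IIT, NOT the real thing: the minimum, over all bipartitions,
# of the mutual information between the two parts of a joint distribution over
# binary units. Enough to see why a tightly coupled system scores high and a
# modular one scores ~0.
from itertools import product, combinations
from math import log2

def mutual_info(joint, part_a, part_b):
    """I(A;B) in bits for a joint distribution over bit-tuples."""
    pa, pb, pab = {}, {}, {}
    for state, p in joint.items():
        a = tuple(state[i] for i in part_a)
        b = tuple(state[i] for i in part_b)
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
        pab[(a, b)] = pab.get((a, b), 0) + p
    return sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in pab.items() if p > 0)

def min_bipartition_mi(joint, n):
    units = range(n)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part_a in combinations(units, k):
            part_b = tuple(u for u in units if u not in part_a)
            best = min(best, mutual_info(joint, part_a, part_b))
    return best

# "Integrated": all four units always agree.
integrated = {(0, 0, 0, 0): 0.5, (1, 1, 1, 1): 0.5}
# "Modular": units 0,1 form one independent pair, units 2,3 another.
modular = {(a, a, b, b): 0.25 for a, b in product([0, 1], repeat=2)}

print(min_bipartition_mi(integrated, 4))  # 1.0 bit across every cut
print(min_bipartition_mi(modular, 4))     # 0.0: the system splits cleanly
```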

So in that sense, the integrated information is a physical quantity that’s present in a system, but not, in general, in a system that’s simulating that system—just like the mass of a black hole isn’t present in a simulation of said black hole (which I gather is rather a good thing).

In an effort to keep it brief, I’m omitting those points where I would simply be repeating myself.

No, this is just backpedaling on statements that you clearly made. You stated, among the many other such statements that I quoted in #179, “I gave an argument demonstrating that computation is subjective, and hence, only fixed by interpreting a certain system as computing a certain function. If whatever does this interpreting is itself computational, then its computation needs another interpretive agency to be fixed, and so on, in an infinite regress; hence, whatever fixes computation can’t itself be computational.”

This can ONLY be interpreted as “no cognitive processes at all can be computational”, since ANY such computation would, according to your claim, require an external interpretive agent. If true, that would invalidate CTM in its entirety. Could you possibly have meant that? Why, yes, you totally could: “The CTM is one of those rare ideas that were both founded and dismantled by the same person (Hilary Putnam). Both were visionary acts, it’s just that the rest of the world is a bit slower to catch up with the second one.”

Only when challenged on it are you now offering creative re-interpretations. But perhaps you’d like to take on the creative challenge of re-interpreting what you meant by CTM having been “dismantled”.

“Wholly computational” was manifestly never my claim, and I was clear on that from the beginning. And if it had been, I’d certainly never lean on Fodor for support, as he was one of the more outspoken skeptics of its completeness, despite his foundational role in bringing it to the forefront of the field.

A Turing machine starts with a tape containing 0110011. When it’s done the tape contains 0100010. What computation did it just perform?

My answer is that it’s one that transforms 0110011 into 0100010, which is objectively a computation by definition, since it is, after all, a Turing machine exhibiting the determinacy condition – even if I don’t know what the algorithm is.

Your answer would appear to be that it’s not a computation at all until it’s been subjectively understood by you and assigned a name. I think Turing would disagree.

I think the core of the problem here is that you’re confusing “computation” with “algorithm”. But as Turing so astutely showed, the question of what a “computation” is, in the most fundamental sense, is quite a different question from asking what class of problem is being solved by the computation.

You should have instead kept it longer in an effort to reply to the points you keep omitting…

The computational theory of mind is the statement that computation is all the brain does, and, in particular, that consciousness is computational. This, I indeed have shown to be in error.

That does in no way imply that no process that goes on in the brain is computational. I’ve been careful (to no avail, it seems) to point out that my argument threatens solely the interpretational abilities of minds: they can’t be computational. Using these interpretational powers, it becomes possible to assign definite computations to systems—after all, I use the device from my example to compute, say, sums.

Furthermore, even systems that aren’t computational themselves may be amenable to computational modeling—just as well as systems that aren’t made of springs and gears may be modeled by systems that are, like an orrery, but I suspect where these words are, you just see a blank space.

I hold consistently to the same position I did from the beginning: computational modeling of the brain is useful and tells us much about it, but the mind is not itself computational. I have been very clear about this. Take my very first post in this thread:

There, I clearly state that whatever realizes the mind’s interpretational capacity can’t be computational, and thus, minds can’t be computational on the whole. That doesn’t entail that nothing about minds can be computational. That would be silly: I have just now computed that 1 + 4 equals 5, for instance.

Also, I have been clear that my arguments don’t invalidate the utility of computational modeling:

Then why take issue with my claim of demonstrating a non-computational ability of the mind?

As such, the question is underdetermined: there are infinitely many computations that take 0110011 to 0100010. A single input-output pair isn’t a computation; it’s rather an execution trace.

But of course, I know what you mean to argue. So let’s specify a computation in full: say, the Turing machine has an input set consisting of all seven-bit strings, and, to provide an output, traverses them right to left, replacing each block ‘11’ it encounters with ‘10’. Thus, it produces ‘0100010’ from ‘0110011’, or ‘1000000’ from ‘1111111’, or ‘0001000’ from ‘0001100’.
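
One way of reading that rule (my own gloss, but it reproduces all three examples): scan the tape right to left and, wherever a cell and its left neighbour both hold ‘1’, overwrite that cell with ‘0’. A quick sketch, which also shows the Twin-Earth reading I get to below:

```python
# One reading of the rule above (an assumption on my part, but it reproduces all
# three examples): scan right to left; wherever a cell and its left neighbour both
# read '1', overwrite that cell with '0'.
def replace_11_with_10(tape: str) -> str:
    cells = list(tape)
    for i in range(len(cells) - 1, 0, -1):
        if cells[i] == "1" and cells[i - 1] == "1":
            cells[i] = "0"
    return "".join(cells)

assert replace_11_with_10("0110011") == "0100010"
assert replace_11_with_10("1111111") == "1000000"
assert replace_11_with_10("0001100") == "0001000"

# The Twin-Earth reading: swap which mark counts as '1' and which as '0', and the
# very same physical behaviour realizes a rule rewriting '00' blocks as '01'.
flip = lambda s: s.translate(str.maketrans("01", "10"))
assert flip(replace_11_with_10(flip("1001100"))) == "1011101"
```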

This is indeed a fully formally specified, completely determinate computation. You’ll note it’s exactly the same kind of thing as my functions f and f’. So why does a Turing machine execute a definite computation?

Simple: a Turing machine is a formally specified, abstract object; its vehicles are themselves abstract objects, like ‘1’ and ‘0’ (the binary digits themselves, rather than the numerals).

But that’s no longer true for a physical system. A physical system doesn’t manipulate ‘1’ and ‘0’, it manipulates physical properties (say, voltage levels) that we take to stand for or represent ‘1’ or ‘0’. It’s here that the ambiguity comes in.

If you were to build the Turing machine from your example, then all it could do would be to write 1 or 0 (now the numerals, not the binary digits) onto its tape. Anybody familiar with Arabic numerals could then grasp that these ink-blots-on-paper are supposed to mean ‘1’ and ‘0’ (the binary digits, again). But somebody who grew up on a Twin Earth identical to ours, except that there 1 means ‘0’ and 0 means ‘1’, would, with equal claim to being correct, take the Turing machine to implement a wholly different computation; namely, one where every ‘00’ block is replaced by a ‘01’ block.

That’s why I keep asking (and also, why you keep neglecting to answer): what computation is implemented by my example device? You’re backed into a corner where you’d either have to answer that it’s a computation taking switch-states to lamp-states, in which case the notion of computation collapses to that of physical evolution, or agree with me that it can be equally well taken to implement f and f’.

Although I note that you seem to have shifted your stance here somewhat (or perhaps it hasn’t been entirely clear from the beginning): you’ve both argued that the two computations are the same (which amounts to accepting that they’re both valid, merely equivalent, descriptions of the system, and which conflicts starkly with your singling out, in this post, a function of the same class as an individuated computation), and that multiple interpretations become, for reasons vaguely tied to ‘emergence’, less likely with increasing complexity. So which is it, now?

Perhaps for one last time, let me try and make my main point clear in a different way. Symbols don’t intimate their meanings on their own. Just being given a symbol, or even a set of symbols with their relations, doesn’t suffice to figure out what they mean. This is what the Chinese Room actually establishes (it fails to establish that the mind isn’t computational): given a set of symbols (in Chinese), and rules for manipulating these, it’s in principle possible to hold a competent conversation; but it’s not possible to get at what the symbols mean, in any way, shape, or form.

Why is that the case? Because there’s not just one thing they could mean. That must be so; otherwise, we could just search through meanings until we found the right one. But it simply isn’t the case that symbols, the rules for manipulating them, and the relationships they stand in determine what the symbols mean.

But it’s in what their physical properties mean that physical systems connect to abstract computation. Nothing else can be right; computations aren’t physical objects, and the only relation between the physical and the abstract is one of reference. So just the way you can’t learn Chinese from manipulating Chinese letters according to rules, you can’t fix what computation a system performs by merely having it manipulate physical properties, or objects, according to rules. An interpretation of what those properties or objects mean, what abstracta they refer to, is indispensable.

But this reference will always introduce ambiguity. And hence, there is no unambiguous, objective computation associated with a physical system absent it being interpreted as implementing said computation.

We don’t have to “disprove” your claim any more than one has to “disprove” the existence of God. You are making a claim, but it has never been done. The onus of proof is on you.

What if that same transformation was performed by HMHW’s light switch box - would that be considered a computation?

Is that transformation always a computation regardless of the nature of the machinery that performed it?

If you’re talking about your examples in posts 18 and 93, then you literally don’t know what you’re talking about. Your back-assward argument is that given a closed calculation machine, a given input, and the output from it, you won’t be able to unambiguously infer the internal process of the machine. This much is true. From that you make the wild leap to the claim that there isn’t an unambiguous process inside the machine. This is stupid. How can I put this more clearly? Oh yeah. That’s incredibly stupid.

If you have a closed calculation machine, a so-called “black box”, the black box has internal workings that in fact have an unambiguous process they follow and an unambiguous internal state, whether or not you know what it is. We know this to be the case because that’s how things in the real world work. And your knowledge of the internal processes is irrelevant to their existence. Or put another incredibly obvious way, brains worked long before you thought to wonder how they did.

So. Now. Consider this unambiguous process and unambiguous internal state. Because these things actually exist, once you have a black box process in a specific state, it will proceed to its conclusion in a specific way, based on the deterministic processes inside advancing it from one unambiguous internal state to the next. The dominos will fall in a specific order, each caused by the previous domino hitting them. And if you rewound time and ran it again, or made an exact copy and ran it simultaneously from the same starting state, it would proceed in exactly the same way, following the same steps to the same result.

(Unless the system is designed to introduce randomity into the result, that is, but that’s a distracting side case that’s irrelevant to the discussion at hand. The process is still the same unambiguous process even if randomity perturbs it in a different direction with a different result. And I’m quite certain based on observation that randomity has a negligible effect on cognition anyway.)

So. While you think that your example includes a Heisenberg uncertainty machine with Schrödinger’s internals, simultaneously implementing f and f’ and thus holding varying internal states at the same time, in actual, non-delusional fact a specific deterministic machine with a specific internal state must be in the middle of implementing either f or f’, and not both. This remains true regardless of the fact that you can’t tell which it’s doing from eyeballing the output. Obviously.

Your argument is entirely reliant on the counterfactual and extremely silly idea that things can’t exist without you knowing about them. Sorry, no. The black box is entirely capable of having a specified internal process (be it f, f’, or something else) without consulting you for your approval.

Your argument is that the “interpretation” of an outside observer has some kind of magical impact on the function of a calculative process, despite there being no possible way that the interpreter’s observation of the outside of the black box can impact what’s going on inside it.

Or at least that’s what you’ve been repeatedly saying your argument is. I can only work with what you give me.

I note that while he baldly asserts that simulations can’t be conscious, the only reason he gives for this is that physical matter is magic. He even admits that if you built an emulation of the conscious thing it would behave in exactly the same way, with the same internal causal processes (the same f, in other words), and despite functioning and behaving exactly the same as the original it would still be a philosophical zombie because reasons.

He then goes on to insist that he’s not saying that consciousness is a magic soul, before clarifying that he’s saying that physical matter has a magic soul that’s called ‘consciousness’.

I’m sure he’s a very smart fellow, but loads and loads of smart fellows believe in magic and souls and gods. Smart doesn’t mean you can’t have ideological beliefs that color and distort your thinking.

So yeah. To the degree that IIT claims that physical matter has magical soul-inducing magic when arranged in the correct pattern to invoke the incantation, I understand it better than it does, because I recognize silliness when I see it. You think I’m misstating his position? First you have to “build the computer in the appropriate way, like a neuromorphic computer”… and then consciousness is magically summoned from within the physical matter as a result! But if you build the neuromorphic computer inside a simulation “it will be black inside”, specifically because it doesn’t have physical matter causing the magic.

So take heart! You’re not the only person making stupid nonsensical arguments. You’re not alone in this world!

P1: Cognition is a property or behavior that exists in the physical world.

P2: If an emulation is sufficiently detailed and complete, that emulation can exactly duplicate properties and behaviors of what it’s emulating.

P3: It’s possible to create a sufficiently detailed and complete emulation of the real world.

C1: It’s possible to create an emulation that can exactly duplicate properties and behaviors of the real world. (P2 and P3)

C2: It’s possible to create an emulation that can exactly duplicate cognition. (C1 and P1)
So there’s my proof. It’s logically valid, so the way to refute it is to attack the premises. Here’s how that goes:

Refutation of P3: “Emulation can’t simulate reality, and never will!”
Refutation of P2: “Emulation is impossible!”
Refutation of P1: “Cognition is magic! WOOO!”

Choose whichever you like. (Christof Koch chooses P1.)
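
If you like, the shape of the argument can be put formally (a toy rendering; the names are mine and purely illustrative), which is exactly why the premises are the only place to attack it:

```lean
-- A toy rendering of the argument's shape; the names are mine, purely illustrative.
section
variable (PhysProp : Type)                  -- properties/behaviors of the physical world
variable (Emulable : PhysProp → Prop)       -- "a sufficiently detailed emulation duplicates this"
variable (cognition : PhysProp)             -- P1: cognition is such a property
variable (c1 : ∀ p : PhysProp, Emulable p)  -- C1, i.e. P2 and P3 rolled together

-- C2 follows immediately; any refutation has to target a premise, not the inference.
example : Emulable cognition := c1 cognition
end
```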

Can’t speak for anyone else, but I interpret it as creating a process inside a computer that has the same thoughts and memories and beliefs and such as you do, and has a separate conscious awareness of its reality from the one you have. Since he has all the same memories as you he will quite naturally think he is you, until somebody convinces him otherwise.

What it’s not, is the digitizing of the whole physical person a la Tron. I mean you could do that, but it really just amounts to destroying the original person as part of the process of scanning them for the information to make the digital copy. (And putting that copy in a fancy sci-fi outfit in the process.) Of course the Tron scenario allows for some confusion/obfuscation of whether some kind of immortal soul left over from the now-disintegrated person somehow locates and attaches itself to the simulated digital avatar, which honestly just seems rife with implementation problems. (Especially since they were just trying to copy an apple.)

It depends on what question you’re asking. If you’re concerned about whether a device is Turing equivalent, you need to understand what it’s actually doing. But when computation is viewed simply as the output of a black box, it’s always reducible to the mapping of a set of input symbols to a set of output symbols. So I take the view that any black box that deterministically produces such a mapping for all combinations of inputs has to be regarded as ipso facto computationally equivalent to any other that produces the same mapping, without reference to what’s going on inside it. Of course, the mechanisms involved may be trivial, like a simple table lookup, that may not provide any insights into the nature of computation and may not be Turing equivalent.
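
To make that concrete, here’s a hypothetical pair of boxes (my own toy example): one computes its output arithmetically, the other just looks it up in a table, and on the black-box criterion they count as computationally equivalent because the mapping is identical.

```python
# Hypothetical illustration of the black-box view: two boxes with entirely different
# internals produce the same input-to-output mapping over the whole (finite) input
# set, and so count as computationally equivalent on the black-box criterion.
def adder_box(a: int, b: int) -> int:          # "mechanism": arithmetic
    return (a + b) % 8

lookup_box = {(a, b): (a + b) % 8 for a in range(8) for b in range(8)}  # "mechanism": a table

assert all(adder_box(a, b) == lookup_box[(a, b)]
           for a in range(8) for b in range(8))   # identical mapping, different insides
```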

I think it would be hard to calculate this number based only on the architecture of a computer, without considering what it was running.
Information, unlike mass, must be present in a simulation of something that has that information. If you simulate telephone traffic, say, you don’t need switch hardware, but you do need the contents of the calls. This is simulation, not modeling, where you could describe the traffic mathematically without the information in the calls.
That’s information, not integrated information, of course. I did read what’s at your link but found it less than interesting.

This is flat-out wrong, as evidenced by Fodor’s statements that CTM is an indispensably essential theory explaining many aspects of cognition, while at the same time he never even imagined that anyone would take it to be a complete description of everything the mind does. Your characterization of the computational theory of mind is simply wrong in that it fundamentally misrepresents how CTM has been defined and applied in cognitive science. And CTM doesn’t even attempt to address consciousness, regarding it as an ill-defined problem. I’ve provided my own speculations about it, and those you’re free to disagree with, but when you make arguments that mischaracterize what CTM means in cognitive science you can expect to be corrected.

(emphasis mine)
I see a “blank space” where you provide your ruminations about CTM being somehow related to “computational modeling” because it’s so egregiously wrong. Please note the following commentary from the Stanford Encyclopedia of Philosophy. They refer to the family of views I’m talking about here as classical CTM, or CCTM, to distinguish it from things like connectionist descriptions. CCTM is precisely what Putnam initially proposed and what Fodor then developed into a mainstream theory at the forefront of cognitive science (bolding mine):
According to CCTM, the mind is a computational system similar in important respects to a Turing machine … CCTM is not intended metaphorically. CCTM does not simply hold that the mind is like a computing system. CCTM holds that the mind literally is a computing system.
https://plato.stanford.edu/entries/computational-mind/#ClaComTheMin

It then goes on to describe Fodor’s particular variant of CCTM:
Fodor (1975, 1981, 1987, 1990, 1994, 2008) advocates a version of CCTM that accommodates systematicity and productivity much more satisfactorily [than Putnam’s original formulation]. He shifts attention to the symbols manipulated during Turing-style computation.

This is of course exactly correct. The prevalent view of CTM that was first advanced by Fodor and then became mainstream is that many cognitive processes consist of syntactic operations on symbols in just the manner of a Turing machine or a digital computer, and he further advanced the idea that these operations are a kind of “language of thought”, sometimes called “mentalese”. The proposition is that there is a literal correspondence with the operation of a computer program, and it has no relationship to your suggestions of “modeling” or of doing arithmetic in your head.

Because it’s wrong, for the reason cited above.

I appreciate the effort you made to once again detail your argument, but I find the view that there is some kind of fundamental difference between an abstract Turing machine and a physical one because the former manipulates abstract symbols and the latter manipulates physical representations to be incoherent. They are exactly the same. The Turing machine defines precisely what computation is, independent of what the symbols might actually mean, provided only that there is a consistent interpretation (any consistent interpretation!) of the semantics.

Let me re-iterate one of my previous comments. Our disagreement seems to arise from your conflation of “computation” with “algorithm”. The question of what a “computation” is, in the most fundamental sense, is quite a different question from what problem is being solved by the computation. Your obsession with the difference between your f and f’ functions is, at its core, not a computational issue, but a class-of-problem issue.

In the real world this is an important problem. We need to prove that the implementation of a specification is equivalent to the specification. It turns out that this is basically impossible without being able to see the inside of the black box, even for large systems without internal memory, and practically impossible for those with memory.
Of course you have to agree on the input and output symbols, and they must be consistent across computational systems. This doesn’t seem to be a requirement for HMHW’s view of interpretation.
In other words, Lincoln was wrong - a horse does have five legs if you interpret the tail as a leg.
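
A toy sketch of why memory makes this hopeless from the outside (hypothetical spec and implementation, just for illustration): the two boxes below agree on every input sequence up to length 10, so no black-box test of those lengths can separate them, yet they aren’t equivalent.

```python
# Hypothetical spec and implementation: internal memory means bounded black-box
# testing can never certify equivalence.
from itertools import product

class SpecBox:
    def __init__(self):
        self.parity = 0
    def step(self, bit: int) -> int:
        self.parity ^= bit
        return self.parity

class ImplBox:
    def __init__(self):
        self.parity = 0
        self.count = 0
    def step(self, bit: int) -> int:
        self.count += 1
        self.parity ^= bit
        return self.parity if self.count <= 10 else 0   # silently diverges later

for length in range(1, 11):                 # exhaustive black-box test, lengths 1..10
    for seq in product([0, 1], repeat=length):
        spec, impl = SpecBox(), ImplBox()
        assert [spec.step(b) for b in seq] == [impl.step(b) for b in seq]

spec, impl = SpecBox(), ImplBox()
eleven_ones = [1] * 11
print([spec.step(b) for b in eleven_ones] == [impl.step(b) for b in eleven_ones])  # False
```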

The question I’m asking is whether the nature of the machinery performing a transformation determines whether something is a computation or not, from your perspective.

It sounds like you are saying that if HMHW’s box performs the transformation you listed (0110011 into 0100010) then that is considered a computation, right?

Meaning that HMHW’s box may not be a Turing machine, it may just be a circuit that performs that transformation, but regardless of how it arrives at the correct answer, the transformation is considered a computation, right?

No. Not even close. I haven’t said anything about internal processes at all, they’ve got no bearing or relevance on my argument. The argument turns on the fact that you can interpret the inputs (switches) and outputs (lights) as referring to logical states (‘1’ or ‘0’) in different ways. Thus, the system realizes different functions from binary numbers to binary numbers. I made this very explicit, and frankly, I can’t see how you can honestly misconstrue it as being about ‘internal processes’, ‘black boxes’ and the like.

OK. So, the switches are set to (down, up, up, down), and the lights are, consequently, (off, on, on). What has been computed? f(1, 2) ( = 1 + 2) = 3, or f’(2, 1) = 6? You claim this is obvious. Which one is right?

The internal wiring is wholly inconsequential; all it needs to fulfill is to make the right lights light up if the switches are flipped. There are various ways to do so, if you feel it’s important, just choose any one of them.

Because what goes on inside has no bearing on the way the system is interpreted. You can think of it in the same way as reading a text: how it was written, by ink on paper, by pixels on a screen, by chalk on a board, has no bearing on whether you can read it, or on what message gets conveyed once you do. Your language competence, however, does: where you read the word ‘gift’, and might expect some nice surprise, I read it as promising death and suffering, because it means ‘poison’ in German. In the same way—exactly the same way—one can read ‘switch up’ to mean ‘0’ or ‘1’. And that’s all there is to it.
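
If it helps, here’s a hypothetical box of the same general shape as my example (not the actual device from my earlier post, whose particular encodings yield the f(1, 2) = 3 versus f’(2, 1) = 6 readings; this one is simpler, but the structural point is identical): one fixed physical table from switch positions to lamp states computes addition under one reading of ‘up’ and ‘on’, and a different function under the swapped reading.

```python
# Not HMHW's actual device, just a hypothetical box of the same shape: one fixed
# physical table from switch positions to lamp states, two equally consistent
# readings of what those positions and lamps mean, two different functions "computed".
from itertools import product

UP, DOWN, ON, OFF = "up", "down", "on", "off"

def to_bits(n, width):
    return tuple((n >> i) & 1 for i in reversed(range(width)))

def from_bits(bits):
    return sum(b << i for i, b in enumerate(reversed(bits)))

# The physical behaviour, fixed once and for all: wired so that, read with
# up = 1 and on = 1, the box adds two 2-bit numbers.
physical_table = {}
for x, y in product(range(4), repeat=2):
    switches = tuple(UP if b else DOWN for b in to_bits(x, 2) + to_bits(y, 2))
    lamps = tuple(ON if b else OFF for b in to_bits(x + y, 3))
    physical_table[switches] = lamps

def use_box(x, y, one_switch, one_lamp):
    """Operate the same physical table under a chosen reading of switches and lamps."""
    other = DOWN if one_switch == UP else UP
    switches = tuple(one_switch if b else other for b in to_bits(x, 2) + to_bits(y, 2))
    lamps = physical_table[switches]
    return from_bits(tuple(1 if lamp == one_lamp else 0 for lamp in lamps))

for x, y in product(range(4), repeat=2):
    assert use_box(x, y, UP, ON) == x + y          # reading 1: the box computes addition
    assert use_box(x, y, DOWN, OFF) == x + y + 1   # reading 2: same box, a different function
```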

Evidently not, to both our detriment.

I’m not going to defend IIT here, but it’s a very concrete proposal (much more concrete than anything offered in this thread so far) that’s squarely rooted in the physical.

Well, at least now I know it’s not just my fault that my arguments seem so apparently opaque to you.

Premise P2 is self-evidently wrong: if an emulation could exactly duplicate every property of a system, then it wouldn’t be an emulation, but merely a copy, as there would be no distinction between it and what it ‘emulates’. But of course, no simulation ever has all the properties of the thing it simulates—after all, that’s why we do it: we typically have more control over the simulation. For instance, black holes are, even if we could get to them, quite difficult to handle, but simulations are perfectly tame—because a simulated black hole doesn’t have the mass of a real one. I can simulate black holes all the live long day without my desk ever collapsing into the event horizon.

You’ll probably want to argue that ‘inside the simulation’, objects are attracted by the black hole, and thus it has mass. For one, that’s a quite strange thing to believe: it would entail that you could create some sort of pocket dimension, with its own physics removed from ours, merely by virtue of shuffling around a few voltages; that, even though the black hole’s mass has no effects in our world, there would now exist a separate realm where that mass exists, with no connection to ours save for your computer screen. In any other situation, you’d call that ‘magic’.

Holding that the black hole in the simulation has mass is exactly the same thing as holding that the black hole I’m writing about has mass. The claim that computation creates consciousness is the claim that, whenever I write ‘John felt a pain in his hip’, there is actually a felt pain somewhere, merely by virtue of my describing it. Because that’s what a simulation is: an automated description. A computation is a chain of logical steps, equivalent to an argument, performed mechanically; there’s no difference from writing down the same argument in text. The next step in the computation follows from the previous one in just the same way as the next line in an argument follows from the prior one.

Come on, now. You’re fully aware that the core claim of CTM is, as wikipedia puts it,
[quote from: Computational theory of mind - Wikipedia]

Fodor indeed had heterodox views on the matter; but, while he’s an important figure, computationalism isn’t just what Fodor says it is. After all, it’s the ‘computational theory of the mind’, not ‘of some aspects of the mind’. Or, as your own cite from the SEP says,
[quote from: The Computational Theory of Mind (Stanford Encyclopedia of Philosophy)]

I’m not saying that the CTM is related to computational modeling, I’m saying that computational modeling is useful in understanding the brain even if the mind is not wholly computational. For instance, a computational model of vision need not assume that the mind is computational to give a good description of vision.

If you intend this to mean that my argument is wrong just because Fodor (or his allies) don’t hold to it, then that’s nothing but argument from authority. You’ll have to actually find a flaw in the argument to mount a successful attack.

There is, really, only one set of questions you need to answer: if I use my device to compute the sum of two inputs, what is the device doing? Is it computing the sums? If not, then what is? If I use it to compute f’, what is the device doing?

Because that’s computation in the actual, real-world sense of the term, absent any half-digested SEP articles. My claim is nothing but: because I can use the system to compute f (or f’), the system computes f (or f’). There is nothing difficult about this, and it needs loops of motivated reasoning in order to make it into anything terribly complex or contentious.

I’ve explicitly left the algorithm that computes either function out of consideration. The computation is given by the function; the algorithm is the way the function is computed, the detailed series of steps being traversed. I can talk about a system implementing a computation without talking about the algorithm it follows. Again, that’s just the everyday usage of the term: I can say that a certain program sorts objects, without even knowing whether it implements mergesort, or quicksort, or bubblesort, or what have you.
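
For illustration, a throwaway sketch (my own, obviously not anything from the thread): one and the same computation, the sorting function, carried out by two different algorithms.

```python
# A throwaway illustration of the function/algorithm distinction: one computation
# (sorting), two different algorithms that implement it.
import random

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

for _ in range(100):
    data = [random.randint(0, 99) for _ in range(random.randint(0, 20))]
    assert bubble_sort(data) == merge_sort(data) == sorted(data)  # same function, different algorithms
```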

So no, the algorithm has no bearing on whether the system implements f or f’. The reinterpretation of the symbolic vehicles it uses entails that any algorithm for computing one will be transformed into one for computing the other. Where it says ‘If S[sub]12[/sub] = ‘1’ Then …’ in one, it’ll say ‘If S[sub]12[/sub] = ‘0’ Then …’ in the other, with both referring to the switch being in the ‘up’ position, say.

Your “thus” is flat wrong and stupid. When the box in your original argument transformed its input into its output it used a specific approach to do so. It didn’t use all theoretically possible approaches to do so; it used one approach to do so. It doesn’t use “different functions” to map the inputs to the result, it uses only one function to do so. Which function? Whichever one it used. You can’t tell which from the outside, but frankly reality doesn’t give a crap what you know.

You made it very explicit that your argument depends on a blatantly false assumption, and it’s not misconstruing things to point out that the realities of the function of black boxes and internal processes are what show that your assumption is blatantly false.

I claim it’s obvious that only one approach was used. Your interpretation of the result is completely irrelevant, particularly to what was going on inside the box. The inside of the box does whatever the inside of the box does, and your observation of the output and your interpretation of those observations have no effect on the box.

The internal wiring of the box is entirely, controllingly important to determining how that box functions. And more importantly to destroying your argument, it’s important in that the fact that the internal wiring must exist and must implement a specific function completely destroys that assumption you’re relying on.

So what? How you interpret the box’s output has no bearing on the box’s functionality.

I will readily concede that I don’t see why you think interpretation is even slightly relevant to anything. The box itself isn’t affected, and you can’t prove that calculation is internally inconsistent just by eyeballing some output. (Especially not with the massive false assumption your argument seems to hinge on.)

Yuh-huh.

Wrongness can indeed be copied from elsewhere.

You’re really, really coming off as somebody who doesn’t understand simulations here. I’m not really sure how to explain simulations in brief, so I’ll just say “I accept that you think you’ve refuted P2, but you really, really haven’t.” Suffice it to say that writing “John felt a pain in his hip” is not a particularly detailed and complete emulation.