Downloading Your Consciousness Just Before Death.

As I already pointed out, and as any good introduction to the subject will stress, Turing machines are an abstract mathematical model. As such, they manipulate abstract objects, e.g. Boolean truth values; thus, a TM implementing a mapping from binary strings to binary strings directly implements the function associated with that mapping.

As Wikipedia puts it right in the first sentence, a Turing machine is a mathematical model of computation (Turing machine - Wikipedia).

Following the links, we learn what a mathematical model of computation is (Model of computation - Wikipedia).

And likewise, what an abstract machine is (Abstract machine - Wikipedia).

Consequently, the Turing machines used to define computation are theoretical abstractions that compute mathematical functions. Such an abstraction has to be distinguished from its concrete realization. Actually trying to build a Turing machine will leave you with a physical system that does not directly connect to anything abstract; rather, you will have to interpret it properly. The argument I’ve given can then be exactly repeated on the level of the TM’s machine table to show that the same physical system can be interpreted to implement different abstract Turing machines.

This isn’t different, for example, from the case of an abstract, Boolean gate, versus its concrete realization, say by means of an electronic circuit. The abstract Boolean AND-gate is defined by its truth table, which is given in terms of Boolean truth values; if both inputs are ‘1’ or ‘true’, then so is the output. But any concrete physical realization will have physical states, such as voltage values, at its in- and outputs. Computation, however, is not done over voltage values, but (in this case) over Boolean truth values. Thus, these voltage values have to be interpreted as Boolean truth values. The fact that this interpretation is never unique then means that the same physical system can be considered to implement different abstract gates.
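To make that concrete, here's a minimal sketch in Python (the voltage labels and the gate itself are just illustrative stand-ins): one and the same physical device yields different Boolean gates under different readings of its voltage levels.

# One physical device: output is HIGH iff both inputs are HIGH.
def physical_gate(v1, v2):
    return 'HIGH' if (v1, v2) == ('HIGH', 'HIGH') else 'LOW'

# Two equally consistent readings of the voltage levels.
read_a = {'HIGH': 1, 'LOW': 0}   # HIGH means '1'
read_b = {'HIGH': 0, 'LOW': 1}   # HIGH means '0'

def implemented_table(read):
    # The Boolean function the device implements under a given reading.
    volts = {bit: level for level, bit in read.items()}
    return {(a, b): read[physical_gate(volts[a], volts[b])]
            for a in (0, 1) for b in (0, 1)}

print(implemented_table(read_a))  # the truth table of AND
print(implemented_table(read_b))  # the same device, now the truth table of OR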

This doesn’t entail that the Boolean gate, or equivalently, a given TM, does not uniquely implement a computation, but merely that the abstract machine is not uniquely associated to a given physical system.

Note that I also said that one might further relax this. But of course, for my argument, it doesn’t matter whether it’s ten, ten thousand, or infinitely many computations that one could associate to a given physical system—the salient point is still that it’s not unique.

Sure. Equivalently, you could interpret the system as not just computing x[sub]1[/sub] + x[sub]2[/sub], but also, x[sub]1[/sub] + x[sub]2[/sub] + 1, x[sub]1[/sub] + x[sub]2[/sub] + 2, and so on. But in my experience, people will resist the notion that these are different computations, since they’re essentially completely isomorphic, unlike my f and f’. Hence, I prefer to use the more restrictive notion, in order to guard against such rejoinders.
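In sketch form (assuming, hypothetically, that the lamps physically display the binary value of the sum):

# If the lamp pattern physically shows x1 + x2 in binary, then reading the
# pattern as 'displayed value + k' makes the same box compute x1 + x2 + k,
# for any fixed k, without touching the hardware.
def reading(k):
    return lambda lamps: int(''.join(map(str, lamps)), 2) + k

read_plain = reading(0)    # box computes x1 + x2
read_shift = reading(1)    # same box, read as computing x1 + x2 + 1
print(read_plain((0, 1, 0)), read_shift((0, 1, 0)))  # 2 3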

Yes, but it doesn’t do anything but that. If that’s what’s meant by computation, then it either collapses to identity theory, or at most to logical behaviorism (if you’re happy to apply the same table to qualitatively different systems). Either case would be fatal to the computational theory of mind, and in fact, both approaches are generally thought to be untenable.

So basically, the argument must be wrong because it entails a conclusion you don’t like.

There are multiple tables you could associate to the box—in fact, roughly 2.8 * 10[sup]14[/sup]. I can interpret each switch, and each lamp, separately, so that while S[sub]11[/sub] being ‘up’ means ‘1’, S[sub]12[/sub] being ‘up’ means ‘0’, for example. There is no reason different switches or lamps need to be interpreted in the same way. If this offends your intuition, simply consider the inputs and outputs to be realized differently—say, the inputs are a switch, a lever, a knob, and a button, while the outputs are a light that’s either green or yellow, a light that’s either blue or red, and a light that’s either orange or purple. That ‘orange’ means ‘1’ does not entail anything about whether ‘blue’ means ‘1’ or ‘0’, and the position of the lever does not fix the meaning of a knob.

The point is now that each such table corresponds to a different computable function over the alphabet of binary strings, and each has an equally valid claim to be implemented by the box.
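For what it's worth, that count is easy to check, taking a ‘table’ to be any function from the sixteen switch settings to the eight lamp patterns:

# Each of the 2**4 = 16 input states may be sent to any of the 2**3 = 8
# output states, so the number of distinct tables is 8**16 = 2**48.
n_tables = (2 ** 3) ** (2 ** 4)
print(n_tables)            # 281474976710656
print(f'{n_tables:.1e}')   # 2.8e+14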

Of course, every table is a different interpretation of the box. But the point is that these tables are the computations the system performs; thus, if the table differs, what’s being computed differs. If there is no uniquely right table, then there’s no uniquely right computation. Else, we’re back at the silliness that just claims that the behavior of the box is the computation—which, as noted, simply trivializes computationalism.

Again, the point is the following. I can clearly use the box to compute f, and f is a bona fide computation. What happens when I use it to compute f? Either, I do something computational to single out that one interpretation: then, a box ought to be possible that only computes f. Or, I don’t: then, computationalism is wrong.

Hence, the challenge is exactly on point: anybody that claims there is a unique fact of the matter regarding which computation a system performs can only substantiate that claim by showing that there is some system such that it computes f uniquely.

I can’t make heads or tails of this. Either, you’re claiming that f and f’ aren’t really computations; then, you’re just not using ‘computation’ to mean what it does in the context of computer science. Or, you’re claiming that there’s no fact of the matter regarding what my box computes; but then, you’re conceding my point.

The relationship between a piece of paper (or some other set of symbols) and a given object (whether physical machine or abstract concept) is exactly what I mean by ‘interpretation’, so I’m not sure what you’re getting at here.

The point remains that which table you associate to the box is arbitrary; the rules for the mapping aren’t something that’s fixed by the box itself. Furthermore, different tables correspond to different computations. Consequently, which computation we consider the system to perform depends on an arbitrary choice. That’s exactly what I’ve been pointing out, and it suffices to throw computationalism overboard.

Then I guess that must be why the Wikipedia article on Dreyfus’ criticism of AI has an entire section on how much of what he said has later been vindicated by the development of AI.

That’s a nearly totally vacuous statement. Anything reacts to specific inputs with responses that depend on that input—that’s just causality. A pebble, upon being kicked, will react by performing a parabolic arc whose parameters exactly depend on those of the kick. So this does not capture the notion of interpretation in the least.

This is truly disheartening. How does a computer do ‘that thing’? By executing a computation? If so, then, as you seem to agree that interpretation is necessary to perform a computation, that computer needs to first be interpreted in the right way in order to be able to implement the computation that decodes the symbols (which are not themselves abstract, by the way, but have abstract objects as their meaning).

But no, you’re completely oblivious to this, and just continue making this claim without even so much as a token attempt to justify it. This is truly bizarre; on the one hand, you recognize the need for interpretation in order to postulate a computer to do the interpretation, on the other, you are completely oblivious to the fact that if there’s a need for interpretation, then that further computer needs to be interpreted as well. It’s like a Christian explaining the origin of the universe as ‘God did it’, and then just sort of hoping that nobody will notice that this just kicks the question up a rung, to the origin of God.

So if the origin of a computation is a further computation, then what’s the origin of that computation?

If the first one needs to be interpreted, then why not the second one?

Because that’s not possible: no symbol ever just allows for a single interpretation. An unadorned and unlabeled black circle can be interpreted in any way whatsoever—as a zero, a representation of zen awakening, hell, even as the complete works of Shakespeare—all you need is a suitable code, that is, a table taking symbols to their meaning (by which I mean, already understood symbols).

As noted, the fact that brains can interpret things just means they’re not computational. Although, perhaps your brain is, and that’s why all you’re generating is meaningless symbols?

I have been clear from the beginning that a necessary prerequisite for my argument to apply is for there to be a dependence on interpretation in that system to which it applies. In fact, it’s in the very first sentence I ever posted to this thread:

I’ve even bolded the relevant part for your convenience. It’s only because computation involves the interpretation of symbolic vessels, of physical states of a system, as abstract objects, that my argument applies. Nothing which does not involve such interpretation—such as the process of digestion, or, of course, conscious experience—is within the scope of my argument, and your claim otherwise does nothing but demonstrate that you’ve still somehow managed to not grasp the argument’s core point.

This is just false. My argument shows that, in order to implement a computation, a physical system needs to be interpreted in a certain way. That the interpretation can’t be done computationally is then an immediate consequence. For suppose that it could. Then, there exists a computation C such that C yields the interpretation of the former system as performing a certain computation. But C must itself be implemented in some physical system P. However, in order to be implemented in P, P must be interpreted as implementing C. Thus, nothing has been won—the origin of the computation has just been kicked up the ladder one rung.

So no. I very explicitly do not assume that computers can’t do interpretation; I show that, if they could, we enter a vicious regress.

Again, the very simple way for you to show this to be wrong is just to post a single example of a computation interpreting another. You’ve tried that, and failed; and you’ve since given up trying, just repeating the same ill-conceived notions again and again.

I’ve been very explicit about how what you claim I assume is, in fact, derived. So really, this charge just doesn’t stick, no matter how often you repeat it. If you’ve still got questions about the argument, you’re welcome to ask.

Your failure to understand the difference between the proposition involving “many” versus “infinite” possibilities seems to me a rather profound failure to grasp the major point here, on which subject I have more below.

No, ***this is not at all what you’ve been claiming***. I’m actually quite surprised that you would stoop to the position of claiming to have made such a ridiculous assertion. That’s not at all what you claimed to mean by “interpretation”.

Your invention of multiple possible “interpretations” of the machine output has always been based on your contrived subjective interpretations of what the bits actually stand for – the semantics. This is how you arrived at the alleged difference between the f and f’ functions of your machine. But the material fact here is that the relationship between abstract symbols and their physical instantiations as bit values is a simple objective one-to-one mapping, as objectively defined by the table I provided. The relationship between the physical machine and its abstraction is merely in how one represents “0” and how one represents “1”. It has absolutely nothing to do with your philosophical balderdash about assigning high-level semantics to them, like whether they represent high-order exponentials, pixels, or sounds! There are an infinite number of your semantic interpretations, as I noted above. But I provided a simple table that simply and unambiguously defines the objective computation.

If I seem intent on driving in this point, it’s not out of pedantic interests, but from the position that it’s just intellectually shallow and arrogant and basically nonsensical to claim that one of the mainstays of modern cognitive science – that the mind is largely computational – is an impossibility.

What do you think the relevance of the number of computations that may be associated to a physical system is? The point is, there’s more than one. “Many” and “infinite” are both more than one, so work equally well. The difference is in how lenient you want to be in allowing the mappings between physical states and computational states; while I think there are ways to argue that it’s less than infinitely many—such as your contention that only the tables of binary numbers are proper associations—there’s none such that the number is brought down to one.

As I already explained to begbert2, the notion of interpretation I’ve been using has been exactly the same since my very first post, and essentially boils down to:

This is exactly the same as interpreting marks on paper as referring to some (abstract or concrete) objects.

There’s nothing objective about the mapping you provided. Your mapping (M1) was:



M1: 

 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  0 |  0 |  0
  0  |  1  |  0  |  0   ||  0 |  0 |  1
  1  |  0  |  0  |  0   ||  0 |  1 |  0
  1  |  1  |  0  |  0   ||  0 |  1 |  1
  0  |  0  |  0  |  1   ||  0 |  0 |  1
  0  |  1  |  0  |  1   ||  0 |  1 |  0
  1  |  0  |  0  |  1   ||  0 |  1 |  1
  1  |  1  |  0  |  1   ||  1 |  0 |  0
  0  |  0  |  1  |  0   ||  0 |  1 |  0
  0  |  1  |  1  |  0   ||  0 |  1 |  1
  1  |  0  |  1  |  0   ||  1 |  0 |  0
  1  |  1  |  1  |  0   ||  1 |  0 |  1
  0  |  0  |  1  |  1   ||  0 |  1 |  1
  0  |  1  |  1  |  1   ||  1 |  0 |  0
  1  |  0  |  1  |  1   ||  1 |  0 |  1
  1  |  1  |  1  |  1   ||  1 |  1 |  0


This is a valid assignment of meanings to the physical symbols (switch positions and lamp states) of the box. But, so is my mapping (M2):



M2:

 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  1 |  1 |  1
  0  |  1  |  0  |  0   ||  1 |  1 |  0
  1  |  0  |  0  |  0   ||  1 |  0 |  1
  1  |  1  |  0  |  0   ||  1 |  0 |  0
  0  |  0  |  0  |  1   ||  1 |  1 |  0
  0  |  1  |  0  |  1   ||  1 |  0 |  1
  1  |  0  |  0  |  1   ||  1 |  0 |  0
  1  |  1  |  0  |  1   ||  0 |  1 |  1
  0  |  0  |  1  |  0   ||  1 |  0 |  1
  0  |  1  |  1  |  0   ||  1 |  0 |  0
  1  |  0  |  1  |  0   ||  0 |  1 |  1
  1  |  1  |  1  |  0   ||  0 |  1 |  0
  0  |  0  |  1  |  1   ||  1 |  0 |  0
  0  |  1  |  1  |  1   ||  0 |  1 |  1
  1  |  0  |  1  |  1   ||  0 |  1 |  0
  1  |  1  |  1  |  1   ||  0 |  0 |  1


However, they are clearly distinct computations—distinct computable functions from four-bit strings to three-bit strings. Equally valid would be:



M3:

 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  1  |  1   ||  1 |  1 |  1
  0  |  1  |  1  |  1   ||  1 |  1 |  0
  1  |  0  |  1  |  1   ||  1 |  0 |  1
  1  |  1  |  1  |  1   ||  1 |  0 |  0
  0  |  0  |  1  |  0   ||  1 |  1 |  0
  0  |  1  |  1  |  0   ||  1 |  0 |  1
  1  |  0  |  1  |  0   ||  1 |  0 |  0
  1  |  1  |  1  |  0   ||  0 |  1 |  1
  0  |  0  |  0  |  1   ||  1 |  0 |  1
  0  |  1  |  0  |  1   ||  1 |  0 |  0
  1  |  0  |  0  |  1   ||  0 |  1 |  1
  1  |  1  |  0  |  1   ||  0 |  1 |  0
  0  |  0  |  0  |  0   ||  1 |  0 |  0
  0  |  1  |  0  |  0   ||  0 |  1 |  1
  1  |  0  |  0  |  0   ||  0 |  1 |  0
  1  |  1  |  0  |  0   ||  0 |  0 |  1


Or even:



M4:

 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  1  |  1   ||  1 |  1 |  0
  0  |  1  |  1  |  1   ||  1 |  1 |  1
  1  |  0  |  1  |  1   ||  1 |  0 |  0
  1  |  1  |  1  |  1   ||  1 |  0 |  1
  0  |  0  |  1  |  0   ||  1 |  1 |  1
  0  |  1  |  1  |  0   ||  1 |  0 |  0
  1  |  0  |  1  |  0   ||  1 |  0 |  1
  1  |  1  |  1  |  0   ||  0 |  1 |  0
  0  |  0  |  0  |  1   ||  1 |  0 |  0
  0  |  1  |  0  |  1   ||  1 |  0 |  1
  1  |  0  |  0  |  1   ||  0 |  1 |  0
  1  |  1  |  0  |  1   ||  0 |  1 |  1
  0  |  0  |  0  |  0   ||  1 |  0 |  1
  0  |  1  |  0  |  0   ||  0 |  1 |  0
  1  |  0  |  0  |  0   ||  0 |  1 |  1
  1  |  1  |  0  |  0   ||  0 |  0 |  0


All of these are just as much associated to the physical system; each of these associations is just as valid as any other. All of them are perfectly distinct computable functions.
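To spell out where those four tables come from, here's a Python sketch. The underlying behavior I'm assuming (hypothetical, but it is consistent with all four tables above) is that the box physically adds two two-bit numbers; each mapping then simply chooses, per switch and per lamp, which physical state is read as ‘1’:

# Physical behavior: the lamps display the sum of the two 2-bit numbers
# set on (S11, S12) and (S21, S22).
def box(s11, s12, s21, s22):
    total = (2 * s11 + s12) + (2 * s21 + s22)
    return ((total >> 2) & 1, (total >> 1) & 1, total & 1)

def table(flip_in, flip_out):
    # Read the box under an interpretation: flip_in and flip_out say, per
    # switch and per lamp, whether the physical state is read inverted.
    rows = {}
    for n in range(16):
        logical = ((n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)
        physical = tuple(b ^ f for b, f in zip(logical, flip_in))
        out = box(*physical)
        rows[logical] = tuple(b ^ f for b, f in zip(out, flip_out))
    return rows

M1 = table((0, 0, 0, 0), (0, 0, 0))  # everything read at face value
M2 = table((0, 0, 0, 0), (1, 1, 1))  # all lamps read inverted
M3 = table((0, 0, 1, 1), (1, 1, 1))  # additionally, S21 and S22 read inverted
M4 = table((0, 0, 1, 1), (1, 1, 0))  # as M3, but L3 at face value again
print(len({tuple(sorted(M.items())) for M in (M1, M2, M3, M4)}))  # 4 distinct

One box, four flip choices, four distinct computable functions.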

Exactly. And each different such association yields a different computation; thus, since the association is arbitrary, the computation performed by the system is likewise arbitrary.

As shown above, no, you didn’t. There are many inequivalent tables that have just as much claim to being ‘the’ computation performed by the system as yours. Claiming that these are all the same, once more, trivializes computationalism, yielding a collapse either to behaviorism or identity physicalism.

And I note that once again, you skipped over the salient part of my post: if what I do in order to compute f is computational, then there should be a system that uniquely implements f. Otherwise, computationalism is already false.

Of course, dealing with this issue entails an immediate collapse of your position, hence, I probably should not expect anything else.

Again, the same could have been claimed about phlogiston, or classical physics, or the uniform nature of space and time, and on and on and on. Any progress entails the rejection of outdated views. Holding on to current dogma just because, well, it’s current dogma is antithetical to making progress in our understanding.

And this isn’t just me making an out-there claim; this sort of triviality argument has been raised against computationalism from the beginning.

It is clearer to me then that you are relying on a straw man: you are using what is in effect a caricature of what serious researchers in AI are doing right now. GIGO indeed.

Well, we are getting somewhere then. I defer to **wolfpup** on the reasons why your logic is not quite there; I see your simplistic argument more like Zeno’s paradox: while in a way you can argue that it will be impossible for Achilles to reach the turtle, in practice Achilles does reach it and makes some soup. :slight_smile:

In this case it seems to me that you are ignoring that in my cites and sources I’m talking about actual published research on where things are going. The point being that **if you were correct, progress would have been impossible.** Or, to say it another way: do you have any published research out there, not from philosophical journals, that shows that what you are talking about is more than just a fanciful philosophical flight of an “it can’t be done” argument?

While the new research does not explain everything yet, nor yet allows downloading your memories to an electronic environment, it is one clear step toward what may be possible later. (Here I have to say that it would not be as many proponents of downloading minds would want it to be; IMHO it will be close, but not quite what they would want.)

Again, what **wolfpup** said; but what you miss on the practical side is this: I have pointed at current research showing that this is more complex than the simple caricature you make of the practical research.

Actually the arguments sound more like the ones made by Humpty Dumpty.

Well, of course it does not do much, because you are ignoring what is being found; if you were correct, then no progress, not even by adding more computations, would take place.

In which case then your version of interpretation does presume sentience, and your argument assumes its conclusion.

Your argument rests on your definitions, and your definitions are vague (deliberately so?) - too vague to be used in a proper, logical, non-fallacious argument.

Here ya go.

That’s the spec for the Z80 CPU. If you were to read that document, you’d learn that that chip can be sent the machine code translation of assembly language. Assembly language, as you doubtless know, is a series of encoded instructions. These instructions (which look like this) are entirely arbitrary sets of codes that are quite literally instructions given to the chip which cause it to carry out specific actions. These arbitrary codes are interpreted by the workings of the chip, and based on which instructions it receives, it does stuff. The decoding it does is highly context sensitive - the meanings of the parameters following the code change depending on which code they follow. And the actions it takes are objectively observable; it moves voltages around in real life, based on its interpretation of the machine code.
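Here's that context-sensitivity in miniature (a toy instruction set in Python; the opcodes are made up for illustration, not actual Z80 encodings):

# Toy machine: the byte after an opcode changes meaning with the opcode.
#   0x01 n -> load the literal n into the accumulator
#   0x02 a -> load the byte stored at address a into the accumulator
#   0x03 a -> store the accumulator at address a
def run(program, memory):
    acc, pc = 0, 0
    while pc < len(program):
        opcode, arg = program[pc], program[pc + 1]
        if opcode == 0x01:    # arg read as a literal value
            acc = arg
        elif opcode == 0x02:  # the same kind of byte, read as a source address
            acc = memory[arg]
        elif opcode == 0x03:  # and here, read as a destination address
            memory[arg] = acc
        pc += 2
    return memory

# The byte 0x05 means 'the number 5', 'read cell 5' or 'write cell 5',
# depending entirely on the opcode in front of it.
print(run([0x01, 0x05, 0x03, 0x00], [0] * 8))  # [5, 0, 0, 0, 0, 0, 0, 0]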

Wait, my precognitive powers are tingling:

‘Oh, no that’s not interpretation, because I say so! By my magically shifting definition, the fact that it’s being done by a chip means it CAN’T be interpretation, because I’ve assumed my conclusion that chips can’t do that! But if that chip were a squishy human brain then it COULD do interpretation, because I’m specially pleading that brain matter is literally magic!’

By the way, bolding mine. This statement of yours is a blatant lie and you know it. Stop it.

Do you not get what’s going on here? Apparently not.

When machine B interprets the output of machine A, that’s not it doing stupid fairy magic to make the workings of machine A valid or something. That’s it deciding that the behavior of machine A is producing an output that it can interpret in a way that it finds useful.

If nothing is interpreting the output of B, then that just means that nothing cares what B has to say - assuming that B is saying anything. B might not have output beyond doing actions (aka “digestion” type output).

For a simple, you-could-have-thought-of-this-yourself example, a toy car’s remote control can be producing an output signal that the car’s antenna receives, interprets, and then acts on to move forward or backward or turn.

And thus all your demands that we produce a computer that can produce a symbol that just allows for a single interpretation are revealed to be bullshit.

I will give you this jokey insult for free because it’s simply a reflection of the jokey insult I made about you. No harm no foul on this one.

The first half of your statement is still nonsense of course.

Let me bold a different part of your sentence:

Consciousness can’t be downloaded into a computer, for the simple reason that computation is an act of interpretation, which itself depends on a mind doing the interpreting.

That’s the part where you’ve been presuming your conclusion for this entire thread. You are explicitly defining “interpretation” to require a mind, which you are axiomatically presuming that the computer doing the computation can’t supply and that a computer doing the interpretation can’t provide.

Your argument is circular, assumption-presuming nonsense. You’re using a vague and shifting definition of “interpretation” which you’re trying to conceal behind your definition of “computation”, but the fallacy shines through.

You admitted JUST ABOVE that your demand for the device to produce symbols or lights that allow for just a single interpretation is impossible. The fact that you think your argument justifies such a stupid demand shows that your argument is nonsense - you’re executing a reductio ad absurdum on yourself.

And you can attempt to revise history about what I’ve posted here, but I was there, and it’s all preserved for all to see anyway.

Oh, I know all about how your argument is supposed to work. And why it doesn’t.

Just wanted to mention that this is a worthy thread for me, even if most goes WAY over my head. Makes my head hurt, but rearranging neurons (in this fashion) once in a while is not a bad thing.
I appreciate (thanx Gigo HMHW, begbert2 and Wolfpup) links for laymen reading, as they help me frame this discussion, and follow along.
Now just merge this thread with one of the many free will discussions, it could really blow some minds:D:D

Ooh, I love a good free will discussion. Is there one going on currently? I gotta get in on that. :smiley:

Numenta does not do neuroscience research, they are a software company trying to sell a product (Hierarchical Temporal Memory, HTM) that has not been able to keep up with the industry in general (e.g. deep learning, etc.).

Here’s a recap:

There is a lot of progress learning about how the brain works, but it comes from researchers in neuroscience.

Unless you want to make the point that this is not neuroscience:

I think that Quora contributor is only looking at the software side of their research; they check the neuroscience and look at the human brain to work out how to proceed.

Forgot to add: what they published more recently (in a neurological journal, BTW) is more about how neurons work, and it is not just like the old HTM. It is not every day that one finds a researcher who concludes that previous work was insufficient or incomplete and then reports what is actually more likely to be the case of how this works.

I think it’s interesting and possibly a valuable simulation (it adds information and understanding). But it doesn’t seem fair to call that neuroscience compared to the neuroscientists that are designing experiments, taking readings from neurons and dendrites, testing theories etc.

Lots of people read the work of the neuroscientists and then use the current incomplete information to try to create software and simulations and see what happens. I’ve done it myself, but that seems like a pretty low bar for calling it “neuroscience”.

Yep, they are guessing at how it works based on the limited info available, just like everyone else in the group of people using biology as a guide (vs the purely math approach).

Are you thinking that is somehow different because they keep putting out press releases? (they are a commercial company trying to sell a product).
Lots of people have ideas, some of them might be right, but anyone that is paying attention to the rate of new information being learned about the brain, and the rate of change from previous simpler models should realize that we have a long way to go to actually understand how the brain does what it does.

And due to how far scientists are from really understanding how it works, it only makes sense to be humble and not prematurely state that you understand how it works and then produce software that can’t match the leaders in the industry.

I will have to say here that you have not read the paper if you claim that. It is not just a model: it is based on previous neurological science, and they publish precisely so that others can check whether the results they get can be confirmed. BTW, one big reason why I follow Hawkins and his crew is that they are willing to publish the results of what they are doing, unlike many ponderers who insist on how “magical” this all is.

Again, now you are ignoring that publishing in science journals is many orders of magnitude better than just putting out press releases.

I guess you missed that Jeff Hawkins has said practically the same thing. Again, that is why I said that he deserves some attention.

Good thing I did not say that, read it again.

The relevance is that you failed to grasp the fundamental nature of the argument, namely the fact that according to your description of what a “computation” is supposed to be, the number of possible computations is completely unbounded – infinite – and therefore meaningless. But that’s just a side issue. The central issue is the one I describe below.

Absolutely and totally wrong. So wrong that I have to wonder if you’re even arguing in good faith any more. Let’s review what you actually said at the beginning, in trying to establish the ridiculous homunculus argument that says that computation is not really computation unless there’s a “little man” observing and interpreting it (and therefore, the human mind cannot possibly be computational, despite a major scientific discipline being premised on precisely that idea!):

OK. Next, let’s look at my definition of what a “symbol” is in the context of both computer science and the computational theory of mind:

From my post #88:
A “symbol” is a token – an abstract unit of information – that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things – the semantics – is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.

Perhaps you can see the fallacy of your argument at this point. If not, let me spell it out for you.

The mapping of symbol values to switch positions and lights is merely an arbitrary physical design specification for going between the syntax of an abstract computational specification and the syntax of the physical machine. There are explicitly no semantics involved. I don’t care what the 1s and 0s stand for; the mapping is purely a transformation from the syntax of the abstract to the syntax of the physical.

Notice how this is fundamentally different from your f and f’ function examples, which are explicitly concerned with the semantics. Indeed, just look at your description, quoted above, of your f’ function. It’s cast in terms of a particular semantic interpretation of the machine’s behavior – a mathematical function. My table takes a completely different form and has no such baggage – it simply describes the syntax of the computation – the inputs and the outputs as 1s and 0s.

And herein ends my argument on this point.

Furthermore, as computations grow more complex, they themselves endow the symbols with semantics, and so evolves intelligence, both human and artificial, and none of it requires a little homunculus to observe it in order to make it real.

Consider a computer with a program and initial state loaded into its memory. It executes and completes the program, leaving the computer memory in its final state.

How are you able to determine the semantics of these 1’s and 0’s?

You say you can do it based on the syntactical operations that are performed but I don’t see how it can be done.
Here is an example, you tell me the meaning/semantics:
Beginning state:
Byte #1=0
Byte #2=0

Step 1=Add 1 to byte #1
Byte #1=1
Byte #2=0

Step 2=Add 1 to byte #2
Byte #1=1
Byte #2=1

Ending state:
Byte #1=1
Byte #2=1
I know what the meaning is because I wrote the program, but I don’t see how you can figure it out from just that. Unless maybe you think the interpretation/meaning remains at the level of abstract symbol and has no relation to the meaning humans impose on the system when they use it as a tool.
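(To run with that example: here's the same trace in Python, with two equally consistent ‘meanings’ bolted on after the fact; both decodings are of course made up, which is rather the point.)

# The program itself, exactly as traced above:
byte1, byte2 = 0, 0
byte1 += 1               # step 1
byte2 += 1               # step 2
state = (byte1, byte2)   # ending state: (1, 1)

# Two of the many decodings one could attach to the very same trace:
as_counts = f'{state[0]} apple(s) and {state[1]} orange(s) counted'
as_flags = ('motor on' if state[0] else 'motor off',
            'alarm armed' if state[1] else 'alarm disarmed')
print(as_counts)   # the trace is consistent with this reading...
print(as_flags)    # ...and equally consistent with this one.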

Exactly, you wrote the program, so you know the meaning. Same thing with the brain. It’s not confused about, say, the information coming down the optic nerve.

So, are you just slinging around random fallacies to see if one sticks, or are you going to bother explaining what you think the straw man in my argument is? Because throwing around that kind of garbage otherwise is just going to get your hands dirty.

OK. So you’re arguing that it’s impossible to make progress on a subject, unless you use the correct paradigm; hence, progress on a subject implies that the paradigm used must be correct.

That’s, of course, straightforwardly nonsense. Great strides in understanding the distribution of heat were made using the notion of phlogiston. Kepler predicted the motion of the planets without ever having heard of relativity. And so on: point being, every paradigm used in science at any given point is more likely to be wrong than correct. New paradigms come along to replace the old ones; computationalism replaced identity theory and behaviorism, and there’s no reason at all to suppose that it won’t itself be replaced.

As for ‘progress’, nothing about that article strikes me as terribly novel. Multiple agent-style models of the mind have been around since at least the fifties, with Oliver Selfridge’s ‘pandemonium’-model, Minsky’s ‘society of mind’, and more recently, Dennett’s ‘multiple drafts’-model. All of which, including the one you cite, of course remain highly speculative.

While you arbitrarily exclude the most relevant locus of discourse for this sort of thing, there are of course lots of examples. I have already pointed to the example of Integrated Information Theory in this thread, which straightforwardly entails that computation does not suffice for mind—see, for example, the article in Nature Reviews Neuroscience, or the one in PLoS Computational Biology. (Both are examples of significant progress using a non-computational paradigm, which, according to your criterion, then must mean that this paradigm is right!)

I’ve also already pointed to Penrose’s Orch-OR model (Physics of Life Reviews), which is explicitly hypercomputational, and the similar model due to Bringsjord and Zenzen (Theoretical Computer Science).

An account of computation that is equivalent to the one I’ve given can be found, for instance, in the abstraction/representation theory by Horsman et al. (see, e.g., the article in Proceedings of the Royal Society A).

I have pointed out how the simple caricature generalizes to arbitrarily complex systems, since the program only gets compounded by adding further complexity. The virtue of the simple system is that it is fully analyzable; thus, any putative counter could be simply stated using the model system. Yet, no such counter has been forthcoming.

This sort of thing is really the most telling element of this discussion. Nobody really has any arguments, but everybody is sure I’m wrong, and can only resort to invective and ridicule in response.

The argument is perfectly clear. I have demonstrated how one can use the same box to compute different functions; hence, computation is not an objective property of that system. In consequence, since whether a system has a mind or not is an objective property, computation does not suffice for mind. None of your transparent dodges changes anything about that. But, as always, if you point out which definitions you think are too vague, I will happily clarify them for you—again and again.

As I have already pointed out, this sort of attack simply does no work at all:

Sorry, but you don’t get to shut me up. You’ve continually merely claimed that computers could do interpretation; the one example you gave, interpreting the output of my box as ‘even’ or ‘odd’, I demonstrated to be erroneous. Yet you’ve continued making that claim, the only attempt at argument being either an appeal to your authority as a computer programmer, or simply loudly declaring it ‘obvious’.

If nothing is interpreting B, then B is not doing any computation. Again, I’ve given the explicit examples; all you would have to do in order to shut me up for good would be to show how a computation can interpret the box such that it only implements one given computation.

That’s impossible, of course; so, you just waffle on.

Again, this sort of thing doesn’t work, since it’s merely on the level of the behavioral (switch flips and lights). Take the steering wheel on an old-fashioned car: turning it causes the wheels to mechanically turn. There’s no sense in which the wheels ‘interpret’ the signals of the steering wheel. But now place a sensor behind the steering wheel that transduces its state into a voltage pattern, and sends that to a servo moving the axle. Is there now any interpretation happening? No, of course not: the voltage pattern physically causes the servo to activate in a certain way. What that pattern means is wholly irrelevant.

This is not a relevantly different example from the mechanical transmission of action in the original car. Anything that happens is just physical cause-and-effect, and once more, a claim that computation is nothing but that destroys computationalism.

No. Interpretation is, as I have pointed out several times, defined as before:

There is no reference to the necessity of mind, and I don’t assume it. The part you bolded is just the observation that minds in fact are capable of interpretation—we do it all the time, after all. This doesn’t entail that only minds are capable of interpreting things; however, in conjunction with my earlier argument that computers can’t interpret themselves due to vicious regress, it follows that minds can do something computers can’t. My argument shows how I can interpret a physical system as computing different functions, and why a computer can’t do that; so, the fact that I can do so—which, again, is just demonstrated by the example of me actually doing so—means that I can do something that no computer can do. There is nothing circular about this.

Exactly, so let’s just point it out once more. You tried to computationally generate an interpretation in post #290, which I showed to just require more interpretation—as, of course, every single attempt to computationally interpret computation must—in post #315.

An infinite number of computations associated to a single system isn’t meaningless in any sense. The notion is generally known as ‘unlimited pancomputationalism’, and if it is true, then computationalism is trivially false. I merely argue for limited pancomputationalism, because there are several restrictions you might want to impose on interpretations you consider ‘valid’—for instance, you only want to count interpretations that associate physical states to binary numbers. This is fine by me: my argument works regardless. But, in principle, you could also directly associate patterns of switch states to, say, decimal numbers—for instance, the pattern (‘up’,‘down’,‘down’,‘up’) could be read as the number 9, without any binary intermediary.
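Concretely, such a direct association is nothing more than a lookup table (the extra rows are arbitrary choices of mine, to underline that nothing forces a binary reading):

# A code, in the sense used here: a table taking physical patterns straight
# to already-understood objects; no binary intermediary required.
direct_reading = {
    ('up',   'down', 'down', 'up'):   9,   # the example from the text
    ('down', 'up',   'down', 'up'):   3,   # arbitrary further choices...
    ('up',   'up',   'down', 'down'): 14,
}
print(direct_reading[('up', 'down', 'down', 'up')])  # 9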

This is just the principle of charity: I allow the strongest possible reading of the arguments of my opponents (that, in fact, only your mappings to binary numbers are ‘allowed’), and still reach my conclusion—that no objective such mapping is possible.

Say, that reminds me, you once more entirely innocently overlooked my prompt to finally actually make your argument count for good:

So, again, let me reiterate: I can actually use the box to compute f. If I’m doing something computational by doing so, then a box should be able to compute only f. If not, computationalism is wrong. For computationalism to have any hope, then, it must be shown how a box could compute only f.

I don’t get why this is so hard for you. All of physics used to be predicated on the assumption that time flows uniformly throughout the universe, that nothing can be in two places at once, and literally an endless litany of such reasonable, but false, assumptions. At any given time, it’s more likely for a paradigm to be wrong than right, its successes notwithstanding.

‘Bits and bytes’ are, of course, already interpreted symbols—the symbols being voltage levels. There is no ‘1’ or ‘0’ anywhere in a computer without interpretation.

Still, charitably, one might interpret this as something like the syntactic theory of computation, which however, few people believe to have any hope at all:

As I said above, it’s not about what the '1’s and '0’s stand for, it’s about what’s interpreted as ‘1’ and ‘0’. A computer doesn’t contain these abstract objects; it contains physical states, or parts in certain physical states, that can be interpreted as ‘1’ and ‘0’, and different such interpretations—as amply demonstrated, and somehow overlooked by you, in my last post—yield different computations.

Your table (and my alternative tables) are exactly the same thing, though: mathematical functions.

And still, of course, this sort of objection doesn’t offer you a way out whatsoever. After all, I actually can use the box to compute f. So in some way, the combined system of me and the box does compute f; and either it does so computationally—then, contrary to your assertions, a box implementing only f, by duplicating whatever I do when I compute f using it, is possible. Or I don’t do so computationally: then, computationalism is false.

Again, the '1’s and '0’s are semantic objects: they are what switch flips and lamp lights are interpreted as representing. It’s simply not the case that ‘switch up’ just is ‘1’ (or ‘0’).

And, can you substantiate any of the above using a concrete example? No, of course not: computations can’t possibly interpret other computations, without being interpreted by something non-computational, on pain of infinite regress.

You’re right that the mind doesn’t require homunculi: it is, after all, not computational.

Why does it have to be unique? I can turn my monitor upside down and thus have a completely different “interpretation” of what my computer is doing. Does that mean my computer isn’t doing computation?

There is so much wrong here that it’s hard to even know where to start.

First of all, it’s manifestly obvious that in your original illustration of the functions f and f’, these are, in fact, mathematical functions, and as such they impose semantic assignments on the input and output symbols of the box, whereas my table clearly does not.

You try to skate around this by now claiming, rather disingenuously it seems to me, that even the 1s and 0s in a digital computer are “interpreted”, as if I could wake up one morning and have a completely different view of what my computer is doing! But as a recent paper on CTM put it, “the physical description of a particular computing machine is irrelevant, what matters is a syntactical description of its function”. Which is what my table provides. The interpretation of voltage levels inside a computer or the lights and switches in your box are merely physical specifications that map a syntactic description to its physical states, and have absolutely nothing to do with the debate about semantics in computational theory. Whether you’re confused about this or just refuse to admit it I don’t know.

Your introduction here of the SEP snippet on the “syntactic account” of computation is seriously misleading and just muddying the waters. The syntactic account, simply put, is the proposition that computation never involves semantics. It’s analogous to Searle’s failed Chinese Room argument, which holds that the man inside the room reading and responding to Chinese messages by following a rulebook nevertheless has no understanding of Chinese.

You’ll note that the competing view described immediately prior to that section is the semantic account, which is much more widely held. And rather ironically for your rejection of the computational theory of mind, this is the view held by those who endorse CTM. The semantic account holds that (quoting the article) “computation is the processing of representations” (symbols). Cognitive processes are held to be syntactic operations on mental representations in the same way that digital computers operate on symbols. This is the view I’ve been endorsing from the beginning, and my definition of what a “symbol” is that you quoted should make that abundantly clear. It’s intentionally closely aligned with the description of cognition in CTM. How it led you to conclude the opposite – that I was endorsing “the syntactic theory of computation” – is a total mystery.

Sure I can. Any advanced AI system should be a persuasive example. We’ve already discussed Watson. The Jeopardy question arrives as a string of symbols. The semantics derived from those symbols become obvious just as soon as Watson starts the process of query decomposition and hypothesis generation. No homunculus is in evidence anywhere.

I don’t see what point you’re trying to make, and nothing I said implied that one should always be able to derive the semantics of symbols just by observing the states of a computer.

Take the Watson example I just gave above. It was in response to HMHW’s challenge to my earlier statement that “as computations grow more complex, they themselves endow the symbols with semantics, and so evolves intelligence, both human and artificial, and none of it requires a little homunculus to observe it in order to make it real.” The Watson example illustrates that the symbols of the Jeopardy question that make up the English sentence, stored in Watson’s memory as a series of bytes, acquire semantics as Watson proceeds through the analytical process. Simply put, the machine’s actions demonstrate that it actually understands the question in the same way that a human would, its actions reflecting what the sentence actually means.

Searle would use the Chinese Room argument to claim that it really doesn’t, and that what it’s doing is really just very fancy symbol processing that creates the illusion of understanding. My argument is that there’s no difference, and moreover that human cognition is symbol processing. The Chinese Room argument is supposed to impress us with the fact that the man in the room is successfully responding to Chinese messages while having no understanding of Chinese. But the man is not the point here. The man doesn’t understand Chinese in the same sense that a computer with no software doesn’t understand Chinese. But the system (the room and everything in it) does. In the same way, the Watson computer hardware doesn’t understand the Jeopardy question, but all of the algorithms, heuristics, and databases working together do. There is no other meaningful concept of “understanding”, or of intelligence. One might say that this is the ultimate conclusion of the semantic account of computation. So computers can be genuinely intelligent, and conversely, human cognition can be computational without creating the sort of paradox that HMHW seems to think it does.

Question for you: what happens if two observers look at the box at the same time, and conclude that it has different interpretations? Does it explode into flames? I bet it explodes into flames.

It had better explode into flames, because your argument doesn’t work if it doesn’t.

But let’s pretend for a moment that it doesn’t explode into flames. When you have two observers looking at the box simultaneously, the magic of interpretation causes it to be computing both f and f’ at the same time. I realize that this is a shocking thought, but it naturally follows if things can have multiple people looking at them at the same time without exploding.

Now let’s talk about sentience for a moment - or as it’s also known, self-awareness. By its inherent nature, self-awareness is observing itself at all times, due to the whole “aware of self” thing. It’s also clearly interpreting its view of itself as being a sentient entity.

This means that if you have somebody looking at a self-aware computation and interpreting it as computing f’ instead, that doesn’t stop the self-aware computation from interpreting itself as being self-aware. The self-aware computation’s self-observation is self-sustaining, meaning its existence is not dependent on or impeded by any subjective outside observer. Meaning that the mind within the computational device has objective existence, which means that your argument apparently can’t disprove it.

Thus your argument fails to prove that computational devices are unable to sustain minds - unless you consider all art galleries giant bombs, with all their paintings exploding the moment two people look at them and variably interpret them at the same time.

(I think this will do for now; let’s keep things simple. I look forward to hearing how you transparently dodge this one.)

I was asking about this statement:

I’m trying to understand how the syntactical operations that are performed on symbols establish the relationship to meaningful things.

How do you connect the syntactic operations in my program above to meaningful things?

I purposefully chose a simple program to make this process easier to walk through.