Downloading Your Consciousness Just Before Death.

Because… I wrote them down… and they were different…?

Then just humor me. Write down the computation performed by the box.

Got a link? The only Moore’s Theorem I found is on topology and is not obviously relevant. Though of course you could construct an unlimited number of equivalent circuits with the same I/O behavior - which is kind of my point. However, what I was getting at is that for a state machine with an unknown number of states, you might not be able to construct any equivalent machine from I/O behavior alone.

By Greek and Hebrew letters I was referring to the fact that numerals are expressed as letters in those languages. Just like Roman numerals. Any translator will tell you that there is no 1-1 mapping from English to Hebrew and vice versa, so that point isn’t relevant.

There can be a single thing it means - finding it is a different matter. If it were generated by a code, then decoding it would provide that meaning. But you’d need context to be sure. The hieroglyphics on the Rosetta Stone would never have been translated if the Greek were not available.
On the other hand, Joseph Smith’s “translation” of some hieroglyphics was found not to be their true meaning once the original was translated using our modern knowledge of hieroglyphics.

And back to the difference between computation and interpretation, which you didn’t really address above. Writing out “gift” is like the computation, but the interpretation depends on semantics. Look it up in the dictionary and you see the word is overloaded, being a noun and a verb and having differing meanings even as a noun.
Yes, the output of a computation can be interpreted in different ways, but so can our speech and our writing. Say a post-modernist writer uses dice to construct a short story from lists of words. I bet five readers will interpret that story in five different ways, none of them “correct.”

My son-in-law is German but can now think in English. I don’t think his neurons have changed. Similarly a computer can “think” using ASCII characters or characters with a different coding, where “think” in this sense is symbol manipulation. But the underlying hardware remains the same.

There can be several computations of the same function, or of equivalent functions. I don’t know what you mean by two computations being the same.
Say you repeat running a program which involves dynamic memory allocation. If you look at a detailed machine-language-level trace of that program, the registers and memory locations used for its variables may differ between runs. Is this the same or different computations? I’d have to hope that you agree that the computation, however you define it, is computing the same function in this case.
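
To make that concrete, here is a minimal sketch (Python; the addresses are simply whatever the runtime hands out) of how two runs of the same program can differ at the trace level while computing the same function:

# Minimal sketch: two runs of the same program may place dynamically allocated
# data at different addresses, yet compute exactly the same function.
# Run this script twice and compare the printed addresses.

def add(a, b):
    result = [a + b]                       # dynamically allocated
    return result

out = add(2, 3)
print("address of result:", hex(id(out)))  # typically differs from run to run
print("value computed:   ", out[0])        # always 5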

No, that’s not what you wrote down. What you wrote down were an arbitrary two of an infinite number of possible interpretations of the switch and light patterns. Your tables are different because they embody the arbitrary semantics of the different interpretations. Your argument is manifestly circular. The tables defining the actual transformations are identical.

It would be a table of all possible switch positions, and the light pattern that is produced by each combination. Note that this table is objective and independent of interpretation, taking into account only the computational properties of the box. Substitute symbols instead of switches and lights, and this can be represented by a classic Turing machine.

They are also perfectly sensible and distinct computations (as in, there exist different TMs realizing them) that can be performed using the box—in exactly the same way we use a calculator to perform arithmetic. Arithmetic after all involves operations on numbers, not on the LEDs of its 7-segment display, which are related to the numbers in the same way the lights on my box are.

This is then exactly the physical evolution of the system, and your ‘computationalism’ collapses onto identity physicalism.

Provided one interprets the symbols in the right way, of course…

Ok. I’m an engineer who’s just finished a master’s in computer science/machine learning. I’m quite impatient with philosophical arguments, as all I really care about is how to use the pieces I know about to do new tasks.

So I will apologize in that I have not read all of your posts in this thread, and I have not fully analyzed what you mean.

But I’ve got a question for you. A practical, rubber meets the road question. You too, wolfpup.

We use an algorithm called divide and conquer on this little problem of brain emulation.

Specifically, we divide the brain down to the simplest case, a synapse. We know all-or-nothing electrical signals come in, and all-or-nothing electrical signals leave.

We know the information is carried primarily in timing. That is, if a pulse leaves, the exact time it leaves carries information to other sub-components in the system.

We study the system and determine there’s a Gaussian function of randomness in each real synapse - the output seems to be F1(Rules, Input, State, Noise). There is a second internal output where State_new = F2(Rules, Input, State_previous, Noise).

For “Noise” we just use some mathematical function (probably Gaussian, but I wouldn’t be averse to other functions if they fit better) to replace the thousands of subtle biochemical details that sum to random noise overall, allowing for a simpler (and cheaper) model and thus cheaper computer hardware to run it.

The rules we can deduce by building a model from studying each synapse in laboratory and living animal models. (We genetically modify the animals to use *exactly* the same type of synapses the human brain uses.)

The inputs are the immediate, timed signals arriving at each moment. The State is something we can determine by examining a synapse with sufficient resolution.
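
A minimal sketch of this kind of model (Python; the function shapes, parameter names, and numbers are made-up placeholders, not claims about real synapses):

import random

# Illustrative placeholder model: output = F1(Rules, Input, State, Noise) and
# State_new = F2(Rules, Input, State, Noise). The Gaussian noise term stands in
# for the many biochemical details that sum to random noise overall.
# Every number and parameter name here is made up purely for illustration.

def f1(rules, spike_time, state, noise):
    # Emit an outgoing pulse time, or None if the synapse stays silent.
    if state + noise > rules["threshold"]:
        return spike_time + rules["delay"]
    return None

def f2(rules, spike_time, state, noise):
    # Update the internal state, then let it decay.
    return (state + rules["gain"] + noise) * rules["decay"]

rules = {"threshold": 1.0, "delay": 0.5, "gain": 0.3, "decay": 0.9}
state = 0.0
for t in [1.0, 2.0, 2.1, 2.2, 5.0]:          # incoming all-or-nothing pulses
    noise = random.gauss(0.0, 0.05)
    out_time = f1(rules, t, state, noise)
    state = f2(rules, t, state, noise)
    print(t, out_time, round(state, 3))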

Anyways, a whole brain is just a combination of trillions of these subproblems. Each subproblem is just timed electrical pulses. Anything like “consciousness” has to be emergent behavior from higher level systems.

And who cares how it works? We know that if physical reality follows the same rules inside a brain as it does outside, and you duplicate the subproblems (you solve each subproblem), you solve the overall problem: duplicating the behavior, including complex internally perceived behavior like consciousness, of a complex machine like a brain.

For me to care how it actually works*, you need to prove that I can’t subdivide the system into tiny subproblems for which the philosophical problems that both you and wolfpup talk about don’t matter.

*sure, once you have working, conscious brain emulations in hardware that can be paused, where you can inject and copy digital values from specific areas, and so on, scientists of the far future will surely be able to work out how it all actually works.

Thought and behavior appear to be more or less computational. I see a bear in the woods and naturally think, “avoid” or in some circumstances, “kill”, and act accordingly. That looks a lot like a conditional calculation based on a complex resolution of stored symbols. Similarly, figuring out how long it will take to cross this ten-mile-wide desert valley and whether I have the resources (water, fuel, whatever) to do it is pretty obviously computational. Animals seem to display similar capabilities.

But what is consciousness relative to that? Behavior seems to define our personalities, but is that the same thing as our consciousness? I know that my personality and behavior patterns have changed over the decades, but has my consciousness?

I seem to be the same person, in here, that I was in high school, even though many of my thoughts, responses and actions have changed. Biological computation is adaptive by nature, but consciousness appears to be continuous and impervious to change.

Not only are descriptors like “perfectly sensible” and “distinct” begging the question (in the literal sense of presupposing the conclusion), but “different TMs” is flat-out wrong because it’s obvious that the same symbol manipulations are occurring in both cases, which has been exactly my point all along.

Additionally, in your calculator example, you seem to be implying that if the display is defective, or accidentally mounted upside down (and the user isn’t bright enough to turn the calculator the other way) the calculator is performing a fundamentally different computation than one with a normal display. You do see how ridiculous this is, right?

That isn’t a “collapse”; in my view, it’s a fundamental truth.

Do you see any contradiction with your previous claim that Turing machines don’t require interpretation, to wit:

It’s from his Gedanken-Experiments on Sequential Machines, which introduced Moore automata, which you’re no doubt familiar with. The theorem is (actually, the theorems are) that no experiment (providing inputs and observing outputs) can generally determine what state a given machine was in at the start of the experiment, and furthermore, that for every sequence of experiments on a certain machine, a different machine exists that would have provided the same outcomes.
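
To give the flavor of the first result, here is a toy example (Python; the machine is made up for illustration, not Moore’s own): a machine whose starting state no input/output experiment can determine.

from itertools import product

# Toy machine: states 'A' and 'B' emit the same output and merge on the very
# first input, so every experiment reads the same no matter which of the two
# states the machine started in.

OUTPUT = {"A": 0, "B": 0, "C": 1}
STEP = {("A", 0): "C", ("A", 1): "C",
        ("B", 0): "C", ("B", 1): "C",
        ("C", 0): "A", ("C", 1): "B"}

def run(start, inputs):
    state, outputs = start, [OUTPUT[start]]
    for symbol in inputs:
        state = STEP[(state, symbol)]
        outputs.append(OUTPUT[state])
    return outputs

for experiment in product([0, 1], repeat=4):
    assert run("A", experiment) == run("B", experiment)
print("No experiment distinguishes a start in A from a start in B.")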

Not on the level of individual words, but on the level of sentences, sure. But no matter: we can imagine a language with a 1:1 mapping to English such that an intelligible text in English to you will be an intelligible text saying something else in that language to a speaker of it.

There can’t be, no. Otherwise, one-time pads could be cracked (by brute force if need be).
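
A small sketch of why (Python; toy strings): the same one-time-pad ciphertext is consistent with any plaintext of equal length, given a suitable key, so brute-forcing keys yields every possible message and no way to tell which one was meant.

# One-time pad: XOR each byte with a key byte. The same ciphertext "decrypts"
# to completely different plaintexts under different keys.

def xor(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"ATTACK"
key = bytes([0x13, 0x7a, 0x05, 0x9c, 0x44, 0x21])
ciphertext = xor(plaintext, key)

# A different key makes the very same ciphertext yield another message:
other_key = xor(ciphertext, b"RETREAT"[:6])
print(xor(ciphertext, key))        # b'ATTACK'
print(xor(ciphertext, other_key))  # b'RETREA'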

That is exactly my point. A computation can be considered the same sort of thing as the meaning of a text—after all, a computation really is just a kind of description, even if perhaps a compressed one (see my above argument against simulation hypotheses).

They surely must have—they must change with everything new that we learn, otherwise, we’d have a failure of the mental to supervene on the physical, and physicalism would be false.

That thinking is symbol manipulation is exactly the thesis computationalism seeks to demonstrate, and I believe is false.

As I said above, I just mean that they’re the same partial function (equivalently, the same TM).

I don’t really have to ‘prove’ it in the general case, just exhibit a special one where it’s wrong. Which is readily done: on integrated information theory (IIT), consciousness is exactly provided by that amount of information about the system you lose if you just consider its individual components. So if you agree that the view is at least possible, slicing up the system into sub-components and considering them independently will exactly lose sight of what’s interesting to us.

I don’t really think IIT is right, however (although it does make an interesting example against which to test one’s views). So let’s suppose that what you’re saying is true (I believe, ultimately, it is): you can just break down the problem into manageable sub-problems, and solve those. Say you replicate the behavior of individual synapses, neurons, and the like.

The problem is, though, that while that means you can duplicate their behavior, this doesn’t straightforwardly entail that you understand how consciousness is generated. While I don’t hold that philosophical zombies are metaphysically possible, I do think it’s a coherent idea; but then, the mere behavior may tell us nothing about conscious experience.

Make no mistake, I don’t think there’s any magic sauce to consciousness that can’t be reduced to the physical. But I want to know how that reduction goes; and I think to find that out, we need to be honest about the problems involved, rather than hiding them behind vague notions of emergence and complexity and the like.

OK. So where you earlier claimed that it suffices to individuate computations to note that they have different input/output behavior (even for a single case), i. e.:

Now, you claim that TMs that manifestly show different outputs given the same inputs are ‘the same’, and indeed, that, for example, starting with a tape showing (1, 3) and ending up showing (4) is ‘obviously’ the same symbol manipulation as ending up showing (5), instead.

I’m sorry, but I can’t make heads or tails of that.

This is your claim:

Consequently, once the mapping to outputs changes, the computation being performed changes. If my box shows different lights, it’s your position that it would implement a different computation—only input/output behavior is relevant, and the output behavior has changed.

On my construal, that’s in fact not the case. As computation is interpretational anyway, it’s not at all a problem to continue to interpret, say, a 7-segment display as displaying an ‘8’ when it displays



 __
|__|
|  |,


because the lower LED gave out. On your position, because the mapping now yields a different output—outputs after all just being LED patterns—, that’s a different computation.
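
For concreteness, a small sketch using the standard 7-segment encodings (Python; purely illustrative): a dead segment changes which pattern is lit, and can even make one digit’s pattern coincide with another’s.

# Standard 7-segment encodings (a = top, g = middle, d = bottom). An '8'
# without its bottom segment matches no digit at all, while a '9' without
# its upper-left segment is lit exactly like a '3'.

SEGMENTS = {
    "0": set("abcdef"), "1": set("bc"),   "2": set("abdeg"),
    "3": set("abcdg"),  "4": set("bcfg"), "5": set("acdfg"),
    "6": set("acdefg"), "7": set("abc"),  "8": set("abcdefg"),
    "9": set("abcdfg"),
}

def lit(digit, dead_segment):
    return SEGMENTS[digit] - {dead_segment}

print(lit("8", "d") in SEGMENTS.values())   # False: matches no digit's pattern
print(lit("9", "f") == SEGMENTS["3"])       # True: the 9 now reads as a 3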

That would be somewhat ironic, at least. Let me just quote some relevant passages from Jaegwon Kim’s Philosophy of Mind (which I heartily recommend, and which is widely considered one of the best introductory texts on the matter):

So, as you see, the collapse of computationalism to identity theory would throw back the philosophy of mind to a position not held by many since the 70s, and certainly not held by the computationalists of today.

No, of course not. Whatever symbols the Turing machine deals with are certainly not the inputs and outputs of my box, which are switch positions and lamp states. So the TM is perfectly definite as taking, say, ‘up, up, down, up’ to ‘on, off, off’, but it is a matter of interpretation—involving, say, English language competence—that the switch and lamp states are captured by this. A switch being up and the word ‘up’ are two very different things, just like a dog and the word ‘dog’.

I can’t make heads or tails out of how you reached that conclusion (the claim that computations producing different outputs are the same) from anything I said, either in your quotes or anywhere else. I’ve been very clear from the beginning that the mapping of input symbols to output symbols is what defines a computation, and I’ve never said anything different. Any such reading would be a misinterpretation, but I don’t see anything in those quotes that could be read that way.

If you’re referring to my response to your calculator digression, first of all it seemed to me you were implying that a defective LED display somehow changed the nature of the computation, but on review, I don’t think you were, so I withdraw my criticism. The important point here is that I certainly am not saying that, either, and trying to pretend that I am is a particularly egregious argumentative sleight-of-hand.

To expand on that more fully, we need to keep in mind what a “symbol” is; FTR, I defined it here, and let me repeat the key part: a “symbol” is a token – an abstract unit of information – that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. It’s a logical abstraction, not a physical thing, which takes different forms in different contexts and has corresponding physical instantiations. In a calculator, the input and output symbols are the numerical digits. They are not the segments of a LED. So in my descriptive model any given computation is the same regardless of whether one or more LED segments are defective, because the (abstract) symbols being logically output are the same. Unlike your description, I don’t need an “interpreter” to make it so.

Speaking of irony, I must point out first of all how deeply ironic it is that you’re trying to support your argument with a cite offering boundless praise for the work of Hilary Putnam in establishing the computational theory of mind when you just finished telling us over here that it’s a worthless theory that he subsequently “dismantled”!

I think the issue here is disagreement over terms of art, and specifically what you variously refer to as “identity physicalism” and “identity theory”. AIUI, Putnam rejected type-identity physicalism which holds that particular types of mental states are categorically correlated with specific brain events, and endorsed instead a sort of token-identity physicalism which implies that different species can experience similar mental states in different physical ways, which led to the important idea of multiple realizability that I mentioned very early on in this discussion. It’s important not just as a foundational idea in theories of cognition, but because it offers at least the prospect that the entirety of the mind could be realized on a digital computer.

My concept of physicalism is simply that everything about the mind has a corresponding physical instantiation, including emergent properties like consciousness. There may not be a specific “where” associated with it, but it exists as a holistic quality of the physical brain. I reject Chalmers’ notion that its quality either must be visible in the underlying components (which is a vague and ultimately meaningless criterion) or it cannot have a physical basis (which is an absurdity that invokes mysticism). And, obviously, I reject the idea that computation is in any way subjective and in need of interpretation, I reject the silly homunculus fallacy, and my views on that have been completely consistent throughout this discussion.

Again, it is true that you can’t always find the state diagram from experiments, but you often can. And the experiments find the minimal state machine - it is true that there are lots of equivalent state machines, and you can’t distinguish them from the minimum one by definition.

I’m no translator, but I sincerely doubt there is a 1:1 mapping on sentences, or words, or complete works. Hell, important parts of Christianity are based on improper translations from Hebrew to Greek. I expect computers to be able to do translations, better than today, but I don’t expect them to do it perfectly, since people can’t today.

A computation is a process. Unless you say that descriptions are equivalent to processes, they are not the same, and we generally don’t equate them, since otherwise how would we know that the process produces the correct thing?

They have changed in the sense that our neurons also change when we remember something. Are we the same people after getting a new memory? If not, physicalism is true since our personalities map to our physical structure. I don’t think so, since a process run on different data and creating different data is the same process.

I put thinking in quotes to not beg the question. I’m not saying symbol manipulation is thinking, just that whatever thinking is should be the same no matter what symbols are involved.

That doesn’t really answer the question, since you could have two TMs computing the same partial function, in that TM2 could write and then erase stuff on its tape and produce the same output as TM1. Are they doing the same computation?
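
A toy version of the question (Python; obviously not literal TMs): two procedures with the same input/output mapping, one of which writes and then discards scratch work along the way.

# Two procedures computing the same partial function; the second writes and
# then erases scratch values, like a TM that writes and erases extra symbols
# on its tape. Same mapping, different internal activity.

def tm1(n):
    return n * n

def tm2(n):
    scratch = []
    for _ in range(n):
        scratch.append(n)      # write...
    total = sum(scratch)
    scratch.clear()            # ...and erase
    return total

assert all(tm1(n) == tm2(n) for n in range(100))
print("Same function; are they the same computation?")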

How is a description not a process? You take a thing, which may be physical, abstract or emotional, and you convert it to words. The words are then received by another party and reimagined. That sounds exactly like a process to me.

Here’s the deal. I’ve seen a lot of weird problems with electronics in my career. (I’ve worked in industry 5 years now; the Master’s is a part-time thing.) And while I come up with theories as to the root cause, ultimately, about half the time every theory I have ends up being incorrect. What I end up having to do is set up an experiment, really - make the machine print a log at the moment of failure, or set a pin high when it takes a particular code path, or some other definitive result - and gradually narrow down where the problem could be.

Eventually I eliminate all the possibilities of what it can’t be and I find the smoking gun.

My feeling is that with consciousness, neuroscientists may need a lot more stuff than they have had access to so far. Kind of how subatomic particles couldn’t really be found until particle accelerators and their high-resolution collision detectors were available to show what happens.

Such as complete digital emulations of sections of human cortex, simulated environments, various machine learning algorithms that use unsupervised learning to find the underlying patterns and explore them.

So it’s nice to speculate but trying to figure out consciousness now seems like trying to figure out the Linux operating system (if we didn’t have source code) when all we have are incomplete assembly language dumps of the system when it’s running, and we can only see a tiny fraction of the address space at any given time.

Creating a description is a process, but the description itself is static and isn’t a process.
Printed code is a description of the code. It isn’t a process until it is compiled and executed. And of course taking the code from a file and sending it to a printer is a process also.

Not to mention we have only a partial description of the architecture of the computer on which it is running, half of which is incorrect.
I predict you are going to have fun debugging hardware. I worked on that for 37 years. You haven’t lived until you’ve gone to meetings, two a week for a year, run by a VP, on why our chip was dying mysteriously.

Well, I simply don’t know how else to interpret your claim that my functions f and f’—which as I’ve explicitly written down map the same inputs to different outputs, and which you could encode into machine tables for different TMs—are the same, in any way, sense, or form.

I mean, I feel kinda silly, but here are the functions again:



x1 |  x2   ||   f(x1, x2)
-----------------------
0  |  0    ||       0
0  |  1    ||       1
0  |  2    ||       2
0  |  3    ||       3
1  |  0    ||       1
1  |  1    ||       2
1  |  2    ||       3
1  |  3    ||       4
2  |  0    ||       2
2  |  1    ||       3
2  |  2    ||       4
2  |  3    ||       5
3  |  0    ||       3
3  |  1    ||       4
3  |  2    ||       5
3  |  3    ||       6




x1 |  x2   ||  f'(x1, x2)
-----------------------
0 |   0    ||       0
0 |   2    ||       4
0 |   1    ||       2
0 |   3    ||       6
2 |   0    ||       4
2 |   2    ||       2
2 |   1    ||       6
2 |   3    ||       1
1 |   0    ||       2
1 |   2    ||       6
1 |   1    ||       1
1 |   3    ||       5
3 |   0    ||       6
3 |   2    ||       1
3 |   1    ||       5
3 |   3    ||       3


So the above functions are manifestly different mappings of input symbols to output symbols; yet, you claim them to be the same.

But a change in the LEDs implies a change in the digits. A defect could change a 9 into a 3, for example.

Without an interpreter, there are no abstract symbols being output, only physical states of the system—i. e. LED patterns.

I was also praising him there (in part, for his intellectual honesty in admitting his earlier mistake), so where do you think any ‘irony’ lurks?

No, I don’t think this is right. Putnam was very explicitly proposing functionalism, which is distinct from token-identity physicalism (which only came about somewhat later). Besides, token-identity theory doesn’t actually help with multiple realizability: a pain is a token of a given type (mental state), and, on multiple realizability, is realized by tokens of distinct types; so since those latter tokens are not identical, neither can the pain-token be to either of them. (I. e. if a pain-token is identical to a neural activation-token—even if the type of neural activation is not identical with the type of pain—, then that pain-token can’t be identical to a silicon chip voltage configuration-token, since the silicon chip voltage configuration-token isn’t identical to the neural activation-token.)

In any case, token-identity physicalism is still a very different view from computationalist functionalism.

What about a movie, then? Is that a process? I would say it’s a description, or perhaps a depiction—with a written description likewise being a kind of depiction.

You might want to hold that in a movie, the frames aren’t logically connected to one another, but, as in my argument above, that ceases to be true once you compress the movie. So do we run the danger of creating a conscious brain by sufficiently compressing the movie taken of a subject’s brain activity?

I think that would be an absurd consequence. So I think that computations really are just descriptions, as well—highly efficient descriptions, perhaps description schemes, such that you can use one scheme with different initial data (‘key frames’) to produce descriptions about different systems; but still, not in any sense more real or fundamentally different than just a written description. And equally unlikely to ever give rise to a conscious mind, or a real universe; and like other descriptions, always subject to interpretation, and only intelligible to those capable of interpreting them.

It’s a minor point, but that mere fact doesn’t suffice for physicalism to be true. On things like neutral monism and dual-aspect theories, not to mention panpsychism, you’ll still have a one-to-one correspondence between physical states and states of mind, but the consciousness isn’t due to the physical facts about a system.

Sure; they’d follow a different method to implement it, but it’s really only the result that counts. Again, if I want to compute the square root, then any method that ends with me knowing the square root will perfectly suffice to do that.

The difference being, that large stuff is composed of smaller stuff is an idea that’s been around for thousands of years, and while the details prove tricky, the basic picture is clear. That’s not the case with consciousness: nobody even has a plausible story how consciousness could come about; we don’t even know what that kind of explanation would look like. I can’t think of any other problem where that’s the case.

Even in that case, while you don’t know the answer, you know what it’ll look like, and how to get there. But that’s exactly what we’re struggling with right now regarding consciousness, so nobody really has any idea whether things like brain emulation and the like will get us anywhere nearer to figuring it out. It’s still a good thing to try, of course, but we mustn’t kid ourselves that it’s anything but a shot in the dark.

Hmm, no. Token physicalism isn’t threatened by multiple realizability, if you don’t require reductionism.

Yes, the content of a description itself is static. But as words, or bytes, it is inert data. Then it is interpreted by a second party (or perhaps by the original creator, from their own notebook). That is another process.

In other words, “description” means information transferred between parties, or, in other words, dual processes. You may not know what it is (what it describes, or even that it is in fact a description) until you yourself process it.

OK, thanks, now I understand what you were referring to, as outlandish as it is. But this is the very thing you said before, and which I already refuted over in #263. At first I thought you hadn’t seen it, but you did respond to it in the next post. Your response was to say that the tables you wrote down were different because the computations were different, which begs the question by presupposing the very thing at issue, and does absolutely nothing to advance your argument.

In my rebuttal of this circular argument, I said (in #263) that your tables are different because they embody the arbitrary semantics of the different interpretations. That such arbitrary interpretations are possible is not in dispute; what is in dispute is whether they all represent exactly equivalent computations. And that’s not a hard question to resolve.

We resolve it by asking how many such arbitrary interpretations there can be, and we observe that it’s not just two, but (by the simple expedient of arbitrary minor tweaks of what each bit is taken to mean, as you did in your f’ function) we find that there are in fact an infinite number of such possible interpretations. A person might naturally gravitate to the simple binary arithmetic interpretation as the most intuitive one, but as you yourself would point out – having contrived the f’ function – no one of these infinite number of interpretations is any more intrinsically valid than any other.

So if each interpretation is indeed a distinct computation, this amazing box is in fact performing an infinite number of computations, and it’s doing all of them simultaneously! That is an amazing box indeed, and clearly an absurdity.

My position is simply that the box is performing only one computation, and as such, it can be represented by just one TM, or just one table of input-output mappings.

And to be perfectly clear, this is that table:



 S11 | S12 | S21 | S22  || L1 | L2 | L3
---------------------------------------
  0  |  0  |  0  |  0   ||  0 |  0 |  0
  0  |  1  |  0  |  0   ||  0 |  0 |  1
  1  |  0  |  0  |  0   ||  0 |  1 |  0
  1  |  1  |  0  |  0   ||  0 |  1 |  1
  0  |  0  |  0  |  1   ||  0 |  0 |  1
  0  |  1  |  0  |  1   ||  0 |  1 |  0
  1  |  0  |  0  |  1   ||  0 |  1 |  1
  1  |  1  |  0  |  1   ||  1 |  0 |  0
  0  |  0  |  1  |  0   ||  0 |  1 |  0
  0  |  1  |  1  |  0   ||  0 |  1 |  1
  1  |  0  |  1  |  0   ||  1 |  0 |  0
  1  |  1  |  1  |  0   ||  1 |  0 |  1
  0  |  0  |  1  |  1   ||  0 |  1 |  1
  0  |  1  |  1  |  1   ||  1 |  0 |  0
  1  |  0  |  1  |  1   ||  1 |  0 |  1
  1  |  1  |  1  |  1   ||  1 |  1 |  0
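
For concreteness, here is a sketch (Python) that starts from exactly this table and derives both of the earlier f and f’ tables from it by applying two different digit encodings. The reversed encoding used for f’ is a reconstruction from the tables as posted; whether this amounts to one computation or two is precisely the point in dispute.

# The physical table above, encoded directly, and two ways of reading numbers
# into and out of it. The 'standard' reading takes the switch and lamp bits
# most-significant-bit first and reproduces the f table; the 'reversed'
# reading takes them least-significant-bit first and reproduces f'.

BOX = {  # (S11, S12, S21, S22): (L1, L2, L3), copied from the table above
    (0,0,0,0): (0,0,0), (0,1,0,0): (0,0,1), (1,0,0,0): (0,1,0), (1,1,0,0): (0,1,1),
    (0,0,0,1): (0,0,1), (0,1,0,1): (0,1,0), (1,0,0,1): (0,1,1), (1,1,0,1): (1,0,0),
    (0,0,1,0): (0,1,0), (0,1,1,0): (0,1,1), (1,0,1,0): (1,0,0), (1,1,1,0): (1,0,1),
    (0,0,1,1): (0,1,1), (0,1,1,1): (1,0,0), (1,0,1,1): (1,0,1), (1,1,1,1): (1,1,0),
}

def interpret(encode, decode):
    return {(x1, x2): decode(BOX[encode(x1) + encode(x2)])
            for x1 in range(4) for x2 in range(4)}

standard = interpret(lambda x: (x >> 1, x & 1),
                     lambda L: 4*L[0] + 2*L[1] + L[2])   # reproduces f
reversed_ = interpret(lambda x: (x & 1, x >> 1),
                      lambda L: L[0] + 2*L[1] + 4*L[2])  # reproduces f'

print(standard[(1, 2)], reversed_[(1, 2)])   # 3 and 6, as in the f and f' tables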


No, it’s actually the other way around. You’re the one who claimed earlier that a Turing machine requires no interpreter since its symbols are abstractions. So is my computational model of the calculator. It is you who, by requiring an interpreter, potentially changes the nature of the output symbols. So according to your own definition (not mine!), if LED segments fail or light up incorrectly and the wrong symbols are perceived, the very nature of the computation changes, because you’ve made your interpreter an intrinsic part of the computational process!

At this point I think unless you have an answer to the reductio ad absurdum in the first part of my post, this line of argument is futile because if you want to conclude that it supports “identity physicalism”, then I say, so be it. If this is a consequence of the conclusion that the nature of the computation resides entirely within the physical box, then it’s a consequence we have to deal with.

It sounds like you are saying that computation is the transformation of inputs to outputs regardless of the interpretation or meaning of any of the symbols, correct? (note: this is in line with what I have seen presented in other places)

Assuming you agree with that, onto the next question:
Given that computations can be interpreted infinitely many ways, does consciousness arise only with some of those interpretations? Or are all interpretations conscious, including the tornado simulation?