What is Consciousness and can it be preserved?

This isn’t really a response just to your post, as others have expressed similar thoughts. Consider this:

Several posters have touched on quantum mechanics-related theories. In quantum mechanics there is something called quantum entanglement (spooky action at a distance). This has been experimentally verified as an actual phenomenon. However, no one can explain the underlying mechanism. This is an example of something that actually exists independently of our understanding of how it works. It counters all of the arguments of the form “if there were such a mechanism, we would be able to detect it”.

My only purpose in interjecting this notion, that the mechanism of consciousness may not be entirely contained in the brain, is to expand the scope of possible explanations. The real difficulty in all of this is that you are using a brain to try to understand a brain, or possibly even things that transcend brains.

Proposing that something might exist because we haven’t the proper tools to measure it? Using that logic, I propose that grass is helped along by intangible gnomes that push it up from underneath. Is this something that we should talk about/look into/scientifically investigate?

I agree with pretty much everything you wrote in this post; what I disagreed with was your claim that we know the brain to be a machine. I don’t think we know that any more than we know it to be a computer.

How so? As you say, we have experimentally verified, i.e. detected, entanglement. So the example ought to strengthen the case for detectability, particularly given how alien entanglement is to our everyday concepts. Besides, quantum entanglement is very well understood: it’s simply the superposition of correlated states of a multipartite system. That states of a physical system may enter into superpositions seems intuitively strange to us, but that’s just a problem with the concepts we’re used to. Furthermore, entanglement can’t be used to transmit information, so the analogy doesn’t really help you. And finally, the problem I noted was not mainly that we ought to be able to detect these signals, but rather that whether or not they are physical, each case faces the same problems as the setting without a ‘radio’, and thus the idea doesn’t help us at all.
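For concreteness, the textbook example of such a correlated superposition is the two-qubit Bell state below (a standard illustration I’m adding here, not anything specific to the posts above):

```latex
% A maximally entangled (Bell) state of two qubits A and B: neither qubit has
% a definite value on its own, but their measurement outcomes always agree.
\[
  |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|0\rangle_{B} + |1\rangle_{A}|1\rangle_{B}\bigr)
\]
% Measuring A as 0 guarantees B is 0 (likewise for 1), yet B's local statistics
% remain 50/50 regardless of what is done to A, which is why entanglement cannot
% be used to send a signal.
```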

Continuity of consciousness is an illusion created by memory. [IMHO – I agree there’s no proof. There never will be, but as machines begin to act like other conscious entities, we’ll learn to treat them that way, and only philosophers and religious nuts will argue the point.]

The conclusion does not follow if consciousness, unlike water, is created by the information processing itself. An important issue here is what we mean by “simulation”. If by that we mean a highly abstract simulation, then few would posit consciousness on the part of the simulation. But if by that we mean an emulation in which all of the same information flows are essentially duplicated, then many would posit consciousness on the part of the emulation, and I challenge anyone who does not to define what consciousness is and what it is that the emulation lacks that the thing emulated has.

My sentiments exactly.

A fun thought experiment here is the classic teleporter accident. Normally, your atoms are disintegrated and beamed elsewhere, where they’re reassembled to match. Of course, an improvement in technology only beams the information – after all, we change 90% of our atoms every 10 years anyway, so obviously atoms aren’t “us”.

Except for McCoy and a few others, we all use the teleporter happily for years, until the accident, where the source body doesn’t get de-materialized. (Or, two copies get reassembled. IMHO there’s no difference, though you may disagree.)

After the accident, there are two of you! Goodness knows we can’t just kill either one of you and ignore it; that would be murder. So, what happened here?

My opinion is that when the first experiences by the two copies differ, that creates two individuals. Before that, there was only one individual, and there’s no difference (from the standpoint of “who is me?”) between teleportation and going to sleep and waking up. In both cases there’s a discontinuity, and memory bridges the gap.

I also believe we all know exactly what it’s like to be dead. It’s just like when we’re unconscious.

oops.

…only longer.

More pertinently, ‘wetness’ is not an attribute of the hurricane, or of water—it’s an attribute of our experience. By requesting a simulation of a hurricane to give rise to the attribute of wetness, one expects the simulation to include phenomenal properties, but that possibility is exactly what’s at stake in the discussion.

But in your setup, that’s exactly what’s being done during ordinary, correctly functioning teleportation.

I swear that said something else when I hit reply…

Good point! (lol)

Not if dematerialization happens fast enough to avoid any sensation on the part of the person being transported.

There’s another quibble, too. Even if there are sensations during dematerialization that don’t get preserved after transport, this is no different from a colonoscopy, where I have experiences that I can’t remember thanks to the drugs involved. It’s only when there are two copies with two different experiences that I draw the line (creating two individuals).

Yes, due to sloppy reading on my part, reading “general-purpose programmable computer” where you actually said “general-purpose programmable machine”. I don’t know what the latter is, so I’ll refrain from commenting.

Let’s consider this method. Each card or page would have to encode the entire state up to the time it was used. And it would have to anticipate all inputs. If you could do that, you could have the room pass the Turing test by anticipating all question sequences. But to run the room for any decent amount of time you’d have an incredible combinatorial explosion, and you’d soon need more cards than atoms in the universe. If he had made that clear, the paper would have been laughed right out of the review process.
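Just to put rough numbers on that explosion (a quick back-of-envelope sketch; the 3,000-word vocabulary and 50-word input horizon are assumptions I’m picking purely for illustration):

```python
# Rough sketch of how fast a pre-enumerated card/lookup-table room blows up.
# Assumptions (mine, for illustration): a modest 3,000-word vocabulary and
# input sequences of up to 50 words that the cards must anticipate.
vocabulary_size = 3_000
conversation_length = 50

# Each card must encode a response for one possible input history,
# so the number of cards grows as vocabulary_size ** conversation_length.
cards_needed = vocabulary_size ** conversation_length

atoms_in_observable_universe = 10 ** 80  # commonly cited order of magnitude

print(f"cards needed:   ~10^{len(str(cards_needed)) - 1}")
print("atoms, roughly:  10^80")
print("more cards than atoms?", cards_needed > atoms_in_observable_universe)
```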

I don’t recall that being allowed. The point of the analogy was to present a scenario where intelligent behavior was supposedly accomplished by trivial means - a sequence of cards. Once you allow the room to be a general-purpose computer, you are begging the question. State changes dictated by a pre-enumerated sequence run into the same problem as above.
He betrays a great ignorance about what “telling a computer to do something” means. He appears to see a computer as a big, fast adding machine, which was pretty common back then. But computers can learn, and learning would seem to be an absolute requirement of intelligence. His original room can’t learn.
His argument is:
1. AI proponents claim that computers can be intelligent.
2. My model of a computer is the Chinese Room.
3. It is obvious that the Chinese Room, even if it could simulate intelligence, is not intelligent.
4. Thus, proponents of AI are incorrect.

But his model of a computer is totally broken - and can’t even simulate a Turing machine.

Right - but the original model is a big lookup table. Which is why the model is trivially incorrect.

As I said earlier, the ‘look-up table’ would need to be almost as big as The Library of Babel; in fact, it would have to be exactly that big (i.e. infinite) to accommodate any random input.

So it’s not murder if you don’t notice?

You can read it as computer, as I still don’t see the relevant distinction.

That’s not really the gist of the argument. I think a better formulation is the following: let’s say we have a computer program (of any sort whatsoever) that is able to carry out a conversation in Chinese. The question is: does this mean that the computer understands Chinese?

Searle’s answer is to give this program, in the form of a rule book, scrap paper to do computations on, and whatever else may be needed to implement the necessary symbol manipulations, to a man capable only of speaking English. Using the rule book, the man will be able to carry on a conversation in Chinese, because that’s what the program does. But, according to most people’s intuition, this does not imply that the man understands Chinese. And if so, then neither can we conclude that a computer implementing this program understands Chinese, and thus mere rule-following is insufficient for understanding.

Using this, Searle proposes the following argument: if strong AI is true, then there exists a program such that running it suffices for understanding Chinese; but since I could implement any program whatsoever without thereby coming to understand Chinese, strong AI cannot be true.
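Spelled out schematically (my own paraphrase of the structure just described, not Searle’s wording):

```latex
% Schematic reconstruction (my paraphrase, ignoring modal subtleties).
% U(x): "x understands Chinese";  run(x,p): "x executes program p".
\begin{align*}
  \text{(1)}\quad & \text{Strong AI} \;\Rightarrow\; \exists p\,\forall x\,\bigl(\mathrm{run}(x,p) \Rightarrow U(x)\bigr)\\
  \text{(2)}\quad & \text{for every } p, \text{ the man can satisfy } \mathrm{run}(\text{man},p) \text{ while } \neg U(\text{man})\\
  \text{(3)}\quad & \text{hence } \neg\exists p\,\forall x\,\bigl(\mathrm{run}(x,p) \Rightarrow U(x)\bigr) \quad \text{[from (2)]}\\
  \therefore\quad & \neg\,\text{Strong AI} \quad \text{[from (1), (3)]}
\end{align*}
```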

The claim is not that apparently intelligent behaviour is implemented by trivial means—intelligence has nothing to do with it. The claim is that functional, formal manipulations do not constitute understanding, and what kind of manipulations are carried out is not limited in any way.

:smack: Of course, defining ‘computer’ as ‘general-purpose programmable computer’ is somewhat circular…

In any case, my point was merely that I would not want to require any of these things (general-purpose, programmable, having memory etc.) of a computer, since I can think of examples of machines that lack these characteristics and which I would still want to call ‘computer’. Finite-state automata, for instance, are not general-purpose: there are computational problems they cannot solve (in particular, there is no universal FSA, i.e. a FSA that is able to simulate all other FSAs); they are a strictly weaker model of computation than, for instance, Turing machines. And among the latter, again not all are computationally universal; I don’t actually know the numbers, but I suspect that if you draw a random Turing machine out of the countable set of TMs, you’ll likely not end up with a universal one, but with some simple and probably practically useless one. Consider cellular automata: most of them don’t do anything interesting, except for a special few (Rule 110, Life) which show universality. (And of course, CAs don’t have any memory.)
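For anyone who hasn’t run into Rule 110, here’s a minimal toy sketch of an elementary cellular automaton (purely illustrative code of my own, not tied to anything above): the update rule is trivially simple and there’s no memory beyond the current row, yet Rule 110 is known to be computationally universal, while most other rules do nothing interesting.

```python
# Minimal elementary cellular automaton (a toy sketch, not from the post).
# Rule 110 is the classic "simple but computationally universal" example;
# Rule 0, by contrast, just wipes everything out.

def step(cells, rule=110):
    """Apply one update of an elementary CA to a row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood of cell i (wrapping around at the edges).
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood pattern, 0..7
        out.append((rule >> index) & 1)              # look up that bit of the rule number
    return out

# Start from a single live cell and watch the pattern develop.
row = [0] * 31 + [1] + [0] * 31
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```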

So, it’s just kind of difficult to come up with a general definition of a computer, and I think it’s best to stick with a computational paradigm, such as Turing machines, and consider anything that can do what Turing machines do a computer.

How can the answer be anything other than yes?

I suppose it depends on the quality of the conversation. That is what the Turing Test is for.

By *apparently* trivial means. The Chinese Room is an example of argument by incredulity. Searle constructs a ridiculously inefficient model of general computation and then relies on people’s intuition that it’s impossible for the system to understand Chinese. But human intuition is notoriously unreliable when it comes to very large numbers. And, in fact, the number of operations that the inhabitant of the Chinese Room would need to perform in order to generate one utterance (if he really is helping simulate a system comparable in complexity to the human brain) is so tremendously huge that our intuition is of no use whatsoever.
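To give a feel for the size of the numbers, here’s a crude back-of-envelope estimate (the synapse count and update rate are common ballpark figures; the one-lookup-per-synapse-per-step cost is purely my assumption):

```python
# Crude estimate of how many hand-executed rule-book operations one utterance
# might cost, if the room really were simulating something brain-like.
# All figures are order-of-magnitude ballparks; the per-synapse cost is assumed.
synapses = 1e14             # ~10^14 synapses in a human brain (rough ballpark)
steps_per_second = 1e3      # millisecond-scale update steps
seconds_per_utterance = 2
ops_per_synapse_update = 1  # assume one rule-book lookup per synapse per step

total_ops = synapses * steps_per_second * seconds_per_utterance * ops_per_synapse_update
print(f"~{total_ops:.0e} operations for one short reply")

# At one hand-executed lookup per second, that's this many years of work:
seconds_per_year = 3.15e7
print(f"~{total_ops / seconds_per_year:.0e} years of nonstop work for the man in the room")
```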

Rather, it’s the computer and the program together that understand Chinese.

You see, the thing that understands Chinese is the combination of the instructions in the rule book, the man implementing the instructions, and the state of the conversation as jotted down by the man. The man does not understand Chinese, but whoever wrote the rule book certainly does. In fact, it is likely that the man has no good sense of either the state of the conversation or all the details of the rule book.
In computer architecture terms, the man is the CPU, the paper is the memory, and the rules are the program. If you had a program which passed a Turing test conducted in Chinese, you would not say that the CPU understands Chinese, but rather that the process does. So what the man understands is irrelevant (see the toy sketch below).
Searle’s original formulation, with the cards and no state, creates a situation where there is not much but the man. If, as we discussed before, the cards include the right Chinese response to a set of Chinese inputs, then the person who wrote the cards would understand Chinese, and thus again the entire system would, even if the man just pulled out the right card. Searle is pulling a fast one, implying that since it is absurd to consider that this system understands Chinese (except perhaps for a short time), computers, which he seems to think operate in this way, can’t either.
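Here’s a toy sketch of that CPU/memory/program split (entirely my own illustration; the rule entries are placeholder symbols, not real Chinese): the interpreter below never inspects the meaning of anything, while whatever “understanding” there is lives in the rule book and the recorded state.

```python
# Toy illustration of the CPU / memory / program split described above.
# The "man" (interpreter) blindly follows rules; any understanding lives
# in the rule book and the recorded state, not in the interpreter.

# The rule book: written by someone who *does* understand the domain.
# (Contents here are placeholders, not real Chinese.)
rule_book = {
    ("START", "symbol-A"): ("GREETED", "symbol-B"),
    ("GREETED", "symbol-C"): ("ASKED", "symbol-D"),
}

def the_man(state, incoming_symbol):
    """Look up the matching rule; copy out the answer. No comprehension needed."""
    new_state, reply = rule_book[(state, incoming_symbol)]
    return new_state, reply

scrap_paper = "START"  # the memory: the conversation state jotted down so far
scrap_paper, out = the_man(scrap_paper, "symbol-A")
print(out)             # "symbol-B", which means nothing to the man
```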

A program which understood Chinese would be far from trivial. Our brains at the deepest level operate by trivial means also - but the interconnection, memory, and complexity of their components allow intelligence. Planaria are not very smart.
I don’t know if Searle said “implement”, but if he did, therein lies the root of his mistake. A CPU - or the man - does not implement the program; it executes the program. The person who designs the room implements the program, and he certainly does know Chinese.
An unfortunately common sf trope was that when you made a computer very big, it suddenly became intelligent. That shows the same misunderstanding Searle has. Raw hardware doesn’t do much of anything by itself, just as the man in the room, without instructions, is going to stand there picking his nose.

Well, is it murder if no one notices…not even the victim…and, in fact, there is no test that can be applied to show that the murder happened? The victim is right here in the courtroom, testifying, “Yes, I’m the same guy. I have all the same memories and everything. See the little scar on my elbow from when my kitten scratched me when I was nine?”

It’s said that a difference which makes no difference is no difference. But this is even worse: no one can show that there is a difference at all.

I’ve never found the “continuity of experience” argument convincing. I go into an elevator. A period of very limited sensation occurs. Then I come out of the elevator. The “continuity” has been interrupted. Okay, not so very significantly; one could unravel a skein of yarn behind the elevator car, down the shaft, to indicate the “sameness” of the overall experience. But, seriously, it is an interruption to my sensory continuity. No scientific test can falsify the claim that I’m a “different person” – the same in all physical ways, just “not really me” – after a short elevator ride. How is it different, really, from a transporter ride?

In the Star Trek universe, after the Transporter has been in general use for a few decades, I’m sure that the debate is still going on…in philosophy classrooms…but the legal issues have all long been settled. Nobody is going to sue to claim their inheritance because "Grand-dad is dead! This guy isn’t really him!"

I suggest that this pragmatic legal approach trumps philosophical idealism.

The point is not the efficiency of the computation, but merely that whatever model of computation you use, all it ever does is manipulate strings of symbols according to syntactic rules. But by this—or so Searle argues; I happen to disagree, but I think the issue is far less trivial than you make it out to be—no understanding of the symbols, no semantics, ever arises.

There’s a precursor to Searle’s argument, due to Turing, which makes basically the same point: he considers what he calls a ‘paper machine’, that is, a program written in natural language and implemented by a person. He says that he’s written a chess program for such a machine, which in theory would enable the person executing it to play chess, even if he’s never done so before. Furthermore, the person needn’t even be aware that he’s playing chess: he could be manipulating strings of symbols—e4-e5, Nf3-Nc6, Bb5-a6 and so on—without realizing what he’s doing at all; the symbols are just syntax, and they don’t carry any semantics. Does the person executing this program know how to play chess? Does he, if he memorizes the program?
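A bare-bones sketch of what executing such a paper machine looks like from the inside (my own toy example; the ‘opening book’ entries are arbitrary): the executor just matches strings and copies out strings, and nothing in the procedure requires knowing that they denote chess moves.

```python
# Toy "paper machine": the person executing this matches the incoming string
# against a table and copies out the paired string. That the strings happen
# to be chess moves in algebraic notation is irrelevant to the procedure.
opening_book = {
    "e4": "e5",    # if you see "e4", write down "e5"
    "Nf3": "Nc6",
    "Bb5": "a6",
}

def respond(symbol_string):
    # Pure symbol manipulation: no board, no pieces, no notion of "chess".
    return opening_book.get(symbol_string, "resign")  # fallback is arbitrary

for move in ["e4", "Nf3", "Bb5"]:
    print(move, "->", respond(move))
```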

The Chinese room adds a further layer to this. Here, the symbols, rather than being immediately connected to their referents, the chess pieces, represent elements of a much more complex structure, that of natural language, where meaning is often very abstract. The structure of the argument is the same, however: all any program ever does is shuffle symbols around, no matter how sophisticated it may be. That’s why Searle doesn’t spend a great deal of time elaborating on different models of computation, different implementations, etc.—he knows he doesn’t have to. Additional layers of complexity can only serve to obfuscate the key point, causing people to convince themselves that maybe if we don’t just have a lookup table or whatever else they’re imagining, but instead a state-of-the-art PAC-learning fuzzy logic genetic algorithm, somehow understanding emerges somewhere (which strikes me as rather more like the bad SF trope Voyager mentioned). But to argue like this is, IMHO, to miss the point of the argument, which is precisely that it is independent of these considerations—that all the manipulations a program carries out, no matter how sophisticated, ultimately reduce to syntactic operations.

In a sense, the argument is just this: Take the string 101asfon10ß1054lkan1. What does it mean?

Yes, this is the ‘systems reply’, and it’s a very popular strategy for rebutting Searle’s argument. Against this, Searle has proposed that the man could then just ‘incorporate’ all the parts of the system—learn the rules by heart, do the manipulations internally, and so on. Of course, this is practically impossible, but that doesn’t rob the argument of its force—Searle only needs to establish that if there were a man capable of executing the program in this manner, he still wouldn’t understand Chinese. (It’s perhaps a little easier to swallow with the ‘paper machine’ chess player above.)

I think here is a possibility for a response: if the man consciously implements the program as a whole, then expecting him to understand Chinese is really a level confusion—the ‘understanding’ does not happen on the level of the program’s execution, but on a higher level emergent from that. If he consciously implements a program that gives rise to a mind, then his mind would be different from the mind that he ‘simulates’, and thus, one should not wonder that he fails to understand Chinese. But while this argument may serve to somewhat weaken Searle’s, the underlying question—how to get semantics from syntax—is not touched at all; in order to make this argument, one needs to assume that it’s somehow possible. There are plausibility arguments for this, or for the more general idea that the brain can be replaced by a computational structure and still give rise to a mind—Chalmers’ fading and dancing qualia arguments, notably—but at least I have never seen any argument purporting to establish an actual procedure how to solve this difficulty.

This may be an issue of terminology—in philosophy, implementation is generally the relation between a physical computing system and that which it computes (cf. Chalmers’ paper On Implementing a Computation, which discusses a way to rebut the allegations of triviality raised by Searle (and Putnam) against this relation, that is, the argument that any physical system can be seen to implement any computation whatsoever).

As I said above, this strikes me as rather more similar to the line you seem to be taking: a simple program does not produce understanding, but make it complex enough—by whatever means—and suddenly, it does. But as I said, I think this misunderstands Searle’s argument, which is wholly independent of the specific kind and realization of the program used.

You will always be you, with or without your body and brain. You are eternal and you don’t need a body to be alive.

I don’t think backing up a brain is sufficient for preserving “you”. I think your body plays a big, big role. We already know that screwing around with hormones in medication can seriously screw with people’s heads (see some of Una’s posts in recent trans threads about how some people might switch sexuality after/while taking hormone treatments). Obviously if I lop off your arm, you’re still meaningfully “you” (unless you die from blood loss), but it seems a lot more complicated than just your brain.

It’s messy. I mean, castrating somebody will mess up their hormones a bit, but even if their personality changes a little, it’s still clearly “them” (or perhaps a better/less gruesome example is the hormonal shifts during pregnancy). I’m willing to be proven wrong; maybe a brain in a jar would just end up acting like a person with muted emotions. But I suspect that the person will be nigh unrecognizable if you simultaneously remove every piece of the endocrine system not in the brain (especially in addition to removing or radically altering things like muscle memory and sensory input, depending on the apparatus used for this experiment).