What is Consciousness and can it be preserved?

If that’s how high the bar is for you to listen to others, then it looks like you don’t listen to anybody. Does Chopra have this ability, by the way?

You seem to be fixated on making this about Chopra. It doesn’t matter where I encountered the idea.

And yes, that’s how high the bar is for me to accept someone’s pronouncement about the absolute possibility or impossibility of something as profound as the nature/origin/structure of human consciousness. I am absolutely open to discussion about different points of view. But nobody on earth is qualified to make definitive statements on this topic.

Maybe it would be easier to grasp if we knew what the phrase is an analogy for.

I don’t believe that the brain, or the body/brain system, or whatever, is a psychic ‘radio’ which picks up signals from some poorly defined ‘elsewhere’; but if it is, duplicating it would create two locations which could both pick up signals from the same transmitter. This would create an entity that exists consciously in two different locations - a state of affairs that certainly wouldn’t happen if consciousness is purely a physical result of the functioning of the brain/body system.

If you duplicate a consciousness that is purely the result of the brain-body system, you just get two consciousnesses which are (momentarily) the same, but which are not linked by a psychic radio link or any such garbage.

I couldn’t begin to imagine what happens if there is a soul involved, so I won’t try.

As to the first point, I think it has nothing to do with consciousness. The wave function would collapse in much the same way if “observed” by a camera or other non-conscious machine. (Besides, I’m all gaga over the “many worlds” interpretation anyway. Whee!)

I don’t quite get your second point.

Total disagreement. It is, rather, like arguing that atoms produce the fire you see in your fireplace. We may not know every last detail of every chemical combination that occurs in combustion, but it must be caused by atoms. This isn’t attributing magical properties. It’s merely a recognition of the absolute absence of anything else involved.

If you have evidence of something other than brain cells contributing to consciousness, the world needs to know it. But, to date, there is no such thing. That, and not the materialist null hypothesis, is the idea subject to being called magical.

Occam’s razor. Why postulate two entities as an explanation when we only need one?

We know that the brain is involved in the production of consciousness because damage to the brain can make consciousness go away. So now you say that maybe something else is involved as well. Okay, well, where’s the evidence that the brain is receiving these external signals? Where do we see neurons behaving in a way that would suggest they’re being influenced by some external force? What justification do you have for postulating the existence of this extra entity?

To use your radio analogy, if we studied a radio without knowing about radio waves, we would notice anomalous current fluctuations in the antenna that could not be explained by the circuitry attached to it. But if we could find no evidence of external influence (say if we were analyzing a CD player instead of a radio) we’d be justified in believing that no external influence existed, even if we did not yet fully understand every element of the system producing the music.

External to the physical world is what supernatural means.

The big problem with the Chinese Room is that it has no state in it. It is a complex mapping from input to output, with no learning possible. Any computer so limited - with no internal registers or memory - could of course never become conscious. Whether a computer can become conscious is an open question, but the Chinese Room analogy has nothing to say about it. For instance, a computer can respond to the same stimulus in different ways depending on its internal state. The Chinese Room cannot.
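The difference can be sketched in a few lines of Python (a toy illustration only; all rules and replies here are invented for the example, not drawn from any real implementation):

```python
# A stateless "Chinese Room": a fixed mapping from input to output.
STATELESS_RULES = {"hello": "hi"}

def stateless_reply(message):
    # The same input always yields the same output, forever.
    return STATELESS_RULES.get(message, "I don't understand")

# A machine with internal state can answer the same input differently.
class StatefulMachine:
    def __init__(self):
        self.greet_count = 0  # internal register the stateless room lacks

    def reply(self, message):
        if message == "hello":
            self.greet_count += 1
            return "hi" if self.greet_count == 1 else "you already said hello"
        return "I don't understand"

m = StatefulMachine()
print(stateless_reply("hello"), stateless_reply("hello"))  # hi hi
print(m.reply("hello"), "/", m.reply("hello"))  # hi / you already said hello
```

The stateless mapping is frozen for all time; the stateful machine’s second answer depends on what happened before.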

Algorithms are not by nature adaptive, but some computer programs are, and the brain certainly is.
There is also the distinction between Program and Process. Every copy of Word on disk from the same release is identical, but each copy of Word running on a different computer is different. If you diffed two copies of the program on disk, they’d be the same; if you diffed the memory of even the same copy running at different times, it would be very different. If you somehow could take a snapshot of the layout of the synapses of the brain, it certainly wouldn’t think. Consciousness is clearly bound up with the dynamics of the synapses interacting with themselves and the outside world.
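A minimal Python sketch of the program/process distinction (the “program text” and events are invented stand-ins for illustration, not anything from Word):

```python
# The "program": static text, identical for every copy on disk.
PROGRAM_TEXT = "def step(state, event): return state + [event]"

copy_a, copy_b = PROGRAM_TEXT, PROGRAM_TEXT
print(copy_a == copy_b)  # True: diffing two on-disk copies finds nothing

# Two "processes": running instances of the same program, each with its
# own evolving internal state.
def step(state, event):
    return state + [event]

process_a = step([], "open document")
process_b = step([], "check spelling")
print(process_a == process_b)  # False: same program, different dynamics
```

The static text never changes; the two running instances diverge the moment they process different events.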

The idea that quantum measurement is as simple as pointing a device capable of detection in the right direction clearly cannot be true. On the one hand, interaction between the measured particle and the measuring device, such that the result of a measurement could be retrieved, is not sufficient for a measurement to occur - for example, the quantum eraser experiment. On the other hand, the particle being measured needn’t even interact with the measuring device for a measurement to occur - for example, the Renninger negative-result thought experiment (NB: if you object that this is only a thought experiment, interaction-free measurement has been performed in actual experiments). Even though actual collapse does not occur in the many-worlds interpretation, it still needs to explain why collapse appears to occur (obviously decoherence enters into it, but there’s still a whole other can of worms that needs to be opened: for example, defining what exactly a world is in which the result of the measurement exists).

What I am saying is that, assuming consciousness exists, we are conscious beings and we have brains and to assume that you can completely describe the experience of such a being independently of this may be erroneous. For example the many-worlds interpretation leads to extensions such as the many-minds interpretation when trying to relate it to the experience of observers.

This has not been established. That’s why this discussion exists: no one has shown how consciousness is created solely by this one entity. That does not mean that it’s not possible, but it has not been explained.

When you damage a radio it stops receiving.

You find what you have the tools to detect. You make tools to detect the things you know, or suspect exist. You have no tools to detect something that you do not understand well enough to be aware of its existence.

No one said anything about being external to the physical world, just external to the brain.

It’s held by many; it’s the “engineering” collapse theory. An observer doesn’t have to be conscious.

(And, besides… what if a dog made the observation? Dogs are conscious, certainly. A cat? A mouse? A silkworm? A euglena? The idea implies that consciousness is binary: either a system has it, or it doesn’t. But that doesn’t match animal behavior studies.)

May be? Sure. But where’s the evidence? We do a remarkably good job getting around the limitations of our consciousness. We employ double-blind protocols, to bypass our tendencies to bias. We use machines to count electrons, because we know we are susceptible to optical illusions. (“That wasn’t an electron, that was just a vitreous floater. D’oh!”)

Show us where our very consciousness is a source of bias, and you might have a stronger point.

(Or…I am very likely completely misunderstanding what you’re saying. That is a limitation of consciousness I’m entirely open to!)

Our Western conception of what is called Maya in Hindu tradition is that Maya is a veil through which our consciousness struggles to see reality. It is consciousness itself that is actually Maya, and the illusion is our perception that we are separate from the rest of that which exists. As the disgraced Woody Allen says, the brain is the most overrated organ of the body.

Aside from spontaneous collapse theories, which are modifications of quantum mechanics rather than interpretations of it, it’s problematic to view collapse as a purely physical process (aside from interaction-free measurement, you also have the problem of explaining why collapse, as a purely physical process, differs from all other physical processes in being non-unitary). The truth is the issue is unresolved, and until such time as someone comes up with a fully satisfactory resolution to the measurement problem that doesn’t involve conscious observers, you can’t a priori exclude them from such discussions.

Whilst that is a valid point, in some ways it’s not necessary to tackle. If I’m only making predictions about what I will experience then my ability to predict what others experience will be limited to their ability to report their experiences to me/my ability to ‘measure’ their experiences.

To be clear, I’m not assigning some special magical property to consciousness (I am just assuming it is some property that I am in possession of), nor am I even saying it plays a definitive role in observation. I am just trying to demonstrate why it can’t be casually excluded from such discussions.

Again, to take many-worlds theory: MWI says that we are all in superpositions of states, yet we don’t experience this and as a result we are extremely restricted in the parts of the Universal wavefunction that we can directly investigate. This is a possible limitation imposed by our consciousness.

My name is John, not Maya.

Maya is everywhere.

(And sex is Maya when you aren’t getting any…)

Well that’s just it; we don’t know that a computer can do all that the brain does. We don’t know if there are things it does that are non-computable (as advanced by Roger Penrose), or if the neurochemistry of the brain is critical (and a simulation of same is insufficient).

The key point is, any process can be considered a computation. However, a computer is a general-purpose, programmable machine. Not every process is that.

Right, and technically nothing in science is a fact, but the level of confidence or doubt for these things differ.
Even proponents of the position that the brain is a computer will generally concede that there is plenty of room for doubt.

So this isn’t an analogy any more? You’re actually talking about some sort of signal that carries consciousness to the human brain? This would have to be a signal that cannot be detected, sent to a receiver that cannot be detected - a signal that cannot be blocked by any known means (otherwise there would be places on this planet where something blocked that signal and caused you to lose consciousness).

Just leave Space 1999 out of it, please. And she wasn’t everywhere, only in the second season.

The ‘receiver’ analogy is clearly flawed: what happens to the brain influences what happens in consciousness, so we’d have to have at least a two-way radio. But then, the view just collapses to ordinary Cartesian dualism: there’s some ‘res cogitans’ that the brain, in some way, exchanges signals with, producing a specific kind of conscious experience.

The problems with this view are obvious: if the signals are physical, then one needs to find a way to explain how our brains—after all, relatively gross and messy things—can detect them, while our precision experiments can’t; furthermore, there ought to be some sender somewhere, itself physical, which produces consciousness and ‘beams’ it to our brains. But then you’re back to square one, having to explain how the sender may give rise to consciousness.

Or, either the signals or the sender are non-physical. Then you have the ‘interaction problem’ that befalls every substance dualism: how does one substance ‘act’ on the other, if they’re different in kind? Let’s say it’s somehow possible, and the ‘res cogitans’ somehow acts on the physical (via the pineal gland, say). Then, it must have some causal power. But causal power is all that we know about the physical: how it influences us and our measuring apparatus. So how was the res cogitans supposed to be nonphysical again? You can’t knock over physical pins with a nonphysical bowling ball.

The result of these problems is that there are very few, if any, true substance dualists left among philosophers.

I don’t think that’s an essential limitation: the ‘rule book’ may well include conditional statements, in the manner of a ‘choose your own adventure’-style novel, thus effectively realizing different ‘states’. If such-and-such an input is received, go to page xxx; then, any further input will elicit a different response than before. In fact, it seems to me that to be even remotely convincing, something like this would have to be implemented, as language is highly context-dependent.

In fact, you can always easily modify the ‘program’ implemented in the room by any means you deem necessary—this generality is the strong point of the argument. You can allow modifications, i.e. have the person inside amend the book in whatever way the book itself specifies; you can add memory in a similar way; you could even have the whole thing be something like a genetic algorithm or PAC learning. It doesn’t matter for the conclusion: as long as you can tell a computer how to do it, you can write the rules for doing it into a book, and thus have the Chinese room implement it.

Most people conceive of the book as merely being a giant lookup table, but there’s no need for it to be; in fact, such an implementation would be trivially easy to defeat, just by asking ‘What was the last thing I said?’, or anything that depends in a similar way on the context of the conversation.
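A toy sketch of why the pure lookup table fails the context test while a rule book that keeps a transcript passes it (all rules and names here are invented for illustration):

```python
# A pure lookup table: no record of the conversation so far.
LOOKUP = {
    "hello": "hi",
    "what was the last thing i said?": "???",  # it cannot know
}

def table_reply(message):
    return LOOKUP.get(message.lower(), "???")

# A rule book with memory: one conditional rule consults the transcript.
class RoomWithMemory:
    def __init__(self):
        self.transcript = []

    def reply(self, message):
        answer = "???"
        if message.lower() == "hello":
            answer = "hi"
        elif message.lower() == "what was the last thing i said?":
            # Conditional rule: "go back one entry in the transcript".
            answer = self.transcript[-1] if self.transcript else "nothing yet"
        self.transcript.append(message)
        return answer

room = RoomWithMemory()
room.reply("hello")
print(table_reply("What was the last thing I said?"))  # ???
print(room.reply("What was the last thing I said?"))   # hello
```

The table gives the same canned non-answer no matter what came before; the transcript-keeping room answers correctly because its rules reference stored context.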

And neither do we know that there is a machine that can.

The Penrose-Lucas argument is very flawed (and I think Lucas has conceded that by now). As an analogy, consider the sentence ‘Mijin cannot assert the truth of this sentence’. It’s clearly true, and thus, you are limited in what you can ‘prove’; I, however, am not so limited, so clearly, my reasoning powers are of a higher nature than yours.

It’s not hard to see what goes wrong here, of course; so the argument doesn’t have any force at all: we can’t conclude anything about our relative reasoning capacities from it. And the same goes for the Penrose-Lucas argument.

As for the neurochemistry being relevant beyond its causal powers (since those can be perfectly well simulated): besides it being wholly unclear to me how that would work, I’ve never understood why somebody would want to advance this and similar ‘biologist’ positions; it seems to me you get the worst of all possibilities. Since the biological is a subset of the physical, it doesn’t help you avoid the popular antiphysicalist arguments, but you now have the additional responsibility of explaining why one needs a particular ‘biological’ system to give rise to consciousness. If I give you two systems, one biological and one an identical non-biological simulation that reacts to all your probes in the same way, how would you tell which one is the right one? What’s the principled difference between the biological and the rest of the physical world?

Well, I guess we’ll have to agree to disagree here: I think the definition of computer as ‘general-purpose, programmable machine’ is much too restrictive; it excludes, for example, the majority of all Turing machines, all finite state automata, my pocket calculator, and a lot of other things I would attach the name ‘computer’ to.

This sounds like you think I am arguing for the thesis that the brain is a computer, but I’m not; I’m arguing against the thesis that it can be a machine, but not a computer.

I think that all of the classic anti-computationalist arguments can be turned into anti-machinist arguments simply by considering them to be instantiated, in a Leibniz-mill-like way, as some enormous cogwheels-and-gears apparatus, or whatever other machination you prefer: you will find no mind between the gears, no understanding in the Chinese room with the man inside replaced by some Victorian automaton, and so on. Computation is not that which a particular sort of machine gives rise to, but an abstract way to talk about the structure of what machines do; the claim is generally that this structure is insufficient to give rise to consciousness, that you don’t get semantics from syntax. I don’t see how considering some different sort of machine is supposed to help here.

[QUOTE=Half Man Half Wit]
…the claim is generally that this structure is insufficient to give rise to consciousness, that you don’t get semantics from syntax.
[/QUOTE]

In the end, this is all the Chinese Room argument says; the man in the room is irrelevant, since he could be replaced by a trained pigeon, or a set of cogs, or a Turing machine/paper-tape combo.
The Chinese Room should be framed as a question - how can a collection of biological processors and associated memories (and other systems) understand Chinese?

If you can’t get semantics from syntax using purely physical processes then there must be some supernatural aspect to the thing. If you can get meaning from data without invoking the ineffable then this process can surely be replicated in one way or another. The Chinese Room argument adds nothing to this except a misleading intuition pump.

We started by analyzing claims such as whether a computer could emulate a mind and indeed be a mind. And questions of identity such as whether a perfect snapshot of your mind is you; whether your consciousness can be transferred.

Now the answer that many Computationalists have for most of these things is dead simple; consciousness is essentially a program, it can be copied, duplicated etc.

However, it would be quite a sleight of hand to define “computer” so broadly that it encompasses anything and everything, then use that to support the above claims. It’s a sleight of hand because the kinds of statements we can normally say about computers e.g. “Any Turing machine can emulate any other” are not trivially true for our new expanded definition of computer.

And I’m advancing no position on this either (I had a position on the identity issue, but that’s all).

I’m just arguing that it is not a fact that the brain is a computer. And I don’t mean in a technical “nothing is a fact” way. I mean there is plenty of room for doubt and lots of opposing ideas.
And we don’t have a model of consciousness yet. Just having the argument of “Well, what else could it be?” is virtually worthless.