What Is Consciousness?

Great link, thanks for sharing

[QUOTE]
AHUNTER3 knows that consciousness exists because he experiences it. However, he can’t assess whether you (or anyone/thing that is not him) are conscious. Why not? I assume it’s either because he cannot be sure you fully understand the question (i.e. “what is consciousness and do you have it?”), and/or because he cannot be certain you are telling the truth.
[/quote]

No, not at all.

I’m experiencing my consciousness. I’m not experiencing yours. I can see a large number of external signs that point towards the likelihood that your behaviors are indeed the result of conscious thought, but that’s not the same as knowing for sure.

My consciousness can’t be an illusion; illusions are, by definition, something that consciousnesses experience.
[QUOTE=Half Man Half Wit]
In other words, if consciousness is just an illusion, then who’s being fooled?
[/quote]

Exactly.

Well, wait… In a typical optical illusion – like those arrows that are the same length, but one insists on looking longer than the other – “who’s being fooled?”

It isn’t “me” as a conscious person being fooled. It isn’t my eyes being fooled. It’s the complex of brain cells that interprets incoming nerve impulses that’s being fooled.

I, as an individual, am not fooled in the slightest: I very clearly see that one arrow is longer than the other! It’s just that the “evidence of my eyes” has been mislabeled by my brain when it passed the “caption” (so to speak) of the picture up to me.

The illusion of consciousness works the same way: the “person” isn’t deceived. In that view of consciousness, the person doesn’t even exist! A decision-making network of neurons has been fed misleading data.

From the engineering viewpoint, this isn’t even debatable. A whole bunch of really tiny machines all got together and tricked themselves into thinking they were thinking. A bunch of printed circuits – or Tinkertoy dowels and hubs! – could be built to trick itself in exactly the same way.
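
Just to make that picture concrete, here’s a minimal sketch of “one module fooled, the whole system not” (everything here – the module names, the bias numbers – is invented for illustration; it’s a cartoon, not a model of any real visual system):

[code]
# Toy illustration of the Mueller-Lyer setup: a "visual module" whose
# length estimate is biased by arrowhead context, next to a "knowledge
# module" that just reports the measured truth. All names and numbers
# are invented for illustration.

def visual_module(true_length, arrowheads):
    """Return a biased length estimate: fins-out lines 'look' longer."""
    bias = {"fins_out": 1.15, "fins_in": 0.85}[arrowheads]
    return true_length * bias

def knowledge_module(true_length):
    """Return what the system 'knows', e.g. from using a ruler."""
    return true_length

line_a = {"true_length": 10.0, "arrowheads": "fins_out"}
line_b = {"true_length": 10.0, "arrowheads": "fins_in"}

print("Looks like: A=%.1f, B=%.1f" % (visual_module(**line_a),
                                      visual_module(**line_b)))
print("Measured:   A=%.1f, B=%.1f" % (knowledge_module(line_a["true_length"]),
                                      knowledge_module(line_b["true_length"])))
# One sub-system reports A as longer; another reports them equal.
# The "fooled" thing is a component, not the composite.
[/code]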

The philosophical model is not so strongly agreed upon.

People aren’t really conscious, but “tiny machines” or circuits can be tricked, and can even trick themselves, can they? :rolleyes: The fact is that circuits in themselves cannot be tricked at all (or fail to be tricked, come to that), whatever engineers may be in the habit of carelessly saying about them. Circuits can only shuffle charge (or ions, or whatever) around. Whether they do it right or wrong, or whether the patterns of charge they produce “indicate” anything at all, is meaningful only in the context of the interpretations and purposes that people, conscious people, attribute to them (or the purposes that engineers purposefully build them to serve, in the case of artificial machines). They certainly have none of their own.

Of course it is. Whether or not you “know better” (if you happen to have read about the particular illusions in question), the lines look different to you. You can’t shake that conscious impression, no matter what you may “know”.

Well, you modified what I said a little, so your roll-eyes mockery is a little undeserved.

This, too, is not what I said, and is unfair, not only to me, but to engineers also.

That’s the question of composition and emergence. Are two circuits, linked together, just another circuit? At what point is there a meaningful difference in what the composite circuit consists of? If two neurons aren’t conscious, but two billion are, then where, exactly, is the line drawn?

Religious people have it easy: they just say “the soul” and walk off self-satisfied.

An engineering approach is much more difficult. There aren’t any materialistic “breakthrough” points in machines. Yet there seems to be one in the assembly of neuronal machines that are our brains.

This, again, comes down to a matter of composition. Part of me knows, and part of me doesn’t know. The two parts are not in perfect communication. Thus, the faculty of perception is divided against itself. I know the two arrows are of the same length…but I perceive them as being different. The constituent parts of the brain are now engaged in a great civil war.

The question of consciousness is, I think, similarly to be answered in terms of composition. But I immediately confess to being a scientific reductionist. I like splitting atoms to see what they’re made of.

The problem here is purely semantic. What we have is essentially an extremely complex machine whose internal processes simulate a conscious experience. To the machine itself, electrical signals going down the various paths lead to an experience. If you define an illusion that way, then it is paradoxical. But as said, that’s just semantics.

Yes, but the point I think Trinopus is making is that there is no real single “you”.
We think of ourselves as individuals, but we are more of a composite.
Just as our “body” is the sum of, and interaction between, different organs, so our brain is composed of different areas.
Different parts are vying for attention, so to speak.

There’s the “you” having an idea or an impulse. Another “you” is formulating it into language, yet another “you” is formulating objections together with a “you” that’s saying: “Meh, I just want to sit and watch TV”.
Not to speak of that bastard that is constantly nagging: “shouldn’t you be doing something useful!?”

The problem is that you can’t differentiate between ‘you’ and those brain cells, on pain of the homunculus fallacy: you posit this ‘you’, the homunculus, looking at the brain cells, perceiving their output, but this of course incurs the problem of how the homunculus itself then perceives its internal representation, etc. So I think that is indeed you who is being fooled.

So the question is, can something that lacks subjective experience be fooled into ‘having’ subjective experience? Well, one could certainly imagine a robot with some particular visual faculties acting as if it sees something that isn’t really there, so in that sense, it would be ‘fooled’ into believing something that isn’t the case. It could even act as if it had subjective experiences. But there’s no reason it actually should have them (or if there is, nobody’s yet given one—and indeed, compared to almost any other problem, this one seems to be unique in that I can’t even see what a solution might look like).

Consider that having subjective experience essentially means that there is something it is like to, e.g., see red. Now suppose you were only fooled into, somehow, believing there were something it’s like to see red. Either there is something it is like having that belief—but then, you actually have subjective experience, albeit of this belief, rather than of the redness of red (which might for all intents and purposes be indistinguishable). Or, there isn’t something it is like to have that belief—but then, you simply wouldn’t have any subjective experience; you’d be in the situation of the robot above (or at the very least, you’d still be in the position to have to give some mechanism of how subjective experience necessarily accompanies some entity’s being in that position, meaning that the ‘fooled into believing’ argument doesn’t do any work). This iterates, of course—if you are now to posit that you’re only fooled into believing that there’s something it’s like to believe that there’s something it’s like to see red, then you’ve got the same problem all over again.

For a solution of the mind/body problem, what I’d like to see is an account of how those pictures in my head come about, of how neurons shunting around signals become an inner world, and the feeling of what it’s like to be in that world. Such an account can easily be envisioned, say in the case of symbols on a computer screen; nobody can really imagine in detail how tiny voltages being shunted around lead to particular pixels lighting up at certain definite times, but there’s no essential mystery—it’s easy to imagine how the process works. This doesn’t seem to be the case with consciousness.

Certainly, one might argue that a certain (reductive) style of explanation has, so far, always worked, and that thus, we might just not yet be at a point to bring it to bear. And that may be the case, and perhaps somebody eventually will deliver an explanation like the above one for symbols on a screen. But in a sense, the problem is unlike any other in that it doesn’t exclusively include objective elements of the world, but also subjective ones, and their interplay. So in order to apply our usual method of explanation, we first have to ‘objectify’ the subjective—but the subjective then has an annoying habit of just disappearing. So it doesn’t seem to be a given for me that our usual explanation necessarily will work in this case.

That’s not an argument for dualism, souls, or anything along those lines. But it is an argument for considering the world in somewhat broader terms than usual, I think.

From the outside, I cannot assess whether your machine does or does not represent the success of artificial intelligence. Maybe what we’re seeing is just an extremely complex machine whose deterministic processes simulate a conscious experience.

But if I myself am the damn machine, I’m either conscious or I’m not; it can’t be an illusion to me. If I’m experiencing myself, then yeah, you’ve got an intelligent conscious machine here.

I agree with this, but the fact that the “real me” isn’t an individual but a composite does not take away from the assertion that the “real me” (wherever “I” may be located) is indeed conscious.

True; or maybe it is exactly this “poly-logue” that’s going on that constitutes what we term consciousness or self-awareness.
Maybe non-conscious creatures are missing one or more of these layers.

Grin! And, yes, this is what I was trying for.

I’m going in almost exactly the opposite direction from the fallacy of the homunculus: I’m breaking the mind up into constituent parts. At the base level, it’s just atoms; at another level, it’s just neurons.

What fascinates me is that some of the structures are operant. Not homunculi, but the visual cortex, which, independent of “me” does image processing.

I can say that the visual cortex is fooled by the illusion of the arrows, without saying that the visual cortex is a “homunculus.” The visual cortex is the “thing” that is fooled, and not “me.”

(In roughly the same way, when the doctor bumps my knee, it isn’t “me” that jerks my leg. It’s some lesser component, one of my parts, but not “me.”)

I want to start by apologizing for not addressing the whole of your post, which was deep, thoughtful, rich, and not only too long for me to respond to meaningfully but, worse, too deep for me to respond to. I’m out of my depth here. You very clearly know more than I do, and I don’t want to be like the idiot kid who refutes Einstein, but can’t do square roots.

To me, the place where it gets fascinating is that atoms do not have subjective experiences…but things made up of atoms do.

Somewhere in this vast assembly of atoms, some level of complexity is attained, so that the composite entity has subjective experiences. The “heap” has consciousness, even though its parts don’t.

What really makes this convincing to me is that lesser composites have partial consciousness. Dogs and cats have self-awareness. Snakes have something very like emotions, at least those of anger and fear.

Also, we, as people, can exhibit diminished consciousness. Drunkenness is one example. People who have had horrible cerebral strokes are another. As a teenager, I used to go sleepwalking. I’d get out of bed, go out into the living room, and even pick up the phone and try to dial it. Completely unconscious!

All of this, to me, says that the mind is made up of parts and portions – the mind is a committee!

I can see how, from this, you might think I am subscribing to the fallacy of the homunculus, but I’m pretty sure my point leads exactly away from that fallacy, not toward it.

I don’t know. I am extremely reluctant to go in that kind of direction. “Broader terms than usual” is too often New-Age code-language for some kind of anti-scientific woo. I know you well enough to know you aren’t arguing for that, but the phrase still raises my instinctive hackles.

Well, yes and no. There is still some room for the notion that consciousness is an illusion. I don’t hold this view, I hasten to add. I believe in human volition (if not theological “free will.”) I think that the “self” is real, but perhaps in a kind of abstract sense of what “real” means.

(e.g., are prime and composite numbers “real” or are they only an idea we’ve made up? Not an easy question to answer; I’ll happily argue either way on that!)

If “self” and “consciousness” are illusions, they’re absolutely convincing ones, so close to real that it is nonsensical to refute them. It’s like a stage magician sawing someone in half: if there’s a body on the floor, and fountains of blood, and the coroner comes in and takes the two parts away, then the fact that it was all just an illusion isn’t going to keep the magician out of prison!

There’s some really great insight here, and conversation on a thankfully higher level than some previous discussions of consciousness have reached. On the point in the last post about atoms and molecules not having consciousness, though organisms like us that are made of atoms and molecules do: it got me thinking, what if we humans are similarly like the atoms and molecules of a larger being, whose larger consciousness we can only scratch the surface of? Maybe there is not so much your consciousness and my consciousness as a collective consciousness that we are sometimes able to tap into. Just wondering.

About prime and composite numbers: some regard numbers as imaginary, some as real. I see numbers as representing real life laws of physics and such that we do not invent so much as discover. Thus the properties of a binary star would hold true, for example, whether or not human beings had come to exist. Does the wisdom that holds the universe together reflect a larger consciousness (what some might call God or Nature), or is it necessary to ascribe consciousness to the universe at all? Does everything run on its own without any intelligence to control it? My gut says it does and that the concept of any master controller is more of a human conceit, based on our own inability to remove ourselves from the equation. But I’m only guessing.

But at some point, this processing does become conscious, causing you to have a conscious experience that does not accord with reality—it is ‘you’ being fooled. Trying to keep the neurons in V1, or even in the retina, that do some pre-processing separate is like pre-processing some image on a computer, say via photoshop, and then claiming ‘the computer is being fooled’, since you perceive the image it shows accurately.

Well, society-of-mind-like models have, I think, some things going for them when it comes to explaining our abilities (the modular, parallel processing capabilities, for example), but I’m not sure I see how they address the problems inherent in perception and intentionality. I mean, how does it help having, instead of one big homunculus, a hundred (or a thousand, or a million) smaller, stupider homunculi? Either every tiny homunculus is (even dimly) aware—but then, how? The problem is one in principle, not one of degree, it seems to me. Or, the tiny homunculi aren’t aware—but then, how does awareness arise from a cacophony of unaware agents? Again, the problem seems to remain.

Plus, you incur a really hefty version of the binding problem—as William James formulated it in 1890 in The Principles of Psychology:

[QUOTE=William James]
Take a sentence of a dozen words, and take twelve men and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.
[/QUOTE]

In other words, I may have neural coalitions that signal various straight lines, various angles, and colour, and attendant mini-homunculi—but I don’t have consciousness of these things, rather, I have a conscious experience of, say, a red triangle.

But is it really reasonable to disregard something because it raises your ‘instinctive hackles’?

I don’t think that the phrase ‘consciousness is an illusion’ does any explanatory work at all. You’re still left with the task of explaining how it could be an illusion—which I don’t really think is a different task than explaining consciousness per se. I mean, in a sense, you could say my arm is an illusion, because ultimately, it’s composed of various bones, muscles, nerves, blood vessels and other tissues, which are in turn composed of various cells, made up of, ultimately, atoms, etc. But it’s not a useful way to think about my arm; indeed, the designation ‘my arm’ picks out a specific collection of atoms that you’d have no way of referring to at the atomic level (atoms come and go, my arm stays).

If the constituents are aware, but less aware, then we could have a gradual spectrum of consciousness, rising piecewise.

We wouldn’t have a sudden miraculous threshold, but a process of building and assembling. A person’s 100% consciousness is the product of 50 sub-units each having 8% consciousness; each of those is made of neural nets having 0.5% consciousness; and it is all ultimately made of atoms having zero consciousness.

We already see a kind of reduction in consciousness in cases of diminished capacity for thought due to accidents, drug use, extreme mental illness, and so on. These indicate that consciousness can be degraded, yet still present. A “knife edge” model wouldn’t sufficiently explain this. Some sort of “continuum” model is needed.

In open discussion of this sort in a BBS forum? You bet!

It’s just a rhetorical way of explaining how atoms can think. It is a rebuttal to those who insist that there must be a non-physical “soul” that is actually conscious. It is a way of reminding some skeptics that a computer, having all the same properties of organization as a brain, can actually “feel.” Such an instrument would “have qualia.”

If such persons insist that a computer would only have the ability to emulate consciousness, but not truly experience it, the rebuttal for them is that we, too, only emulate consciousness, but do not truly experience it. i.e., our minds do nothing “magical” that some other machine system could not also perform.

If you aren’t making that argument, then the rebuttal isn’t aimed at you.

To me, that really is the key miracle here: a bunch of atoms can get together and think.

(I’m still mad at Roger Penrose for “The Emperor’s New Mind.” One of the most shitty books I’ve ever had the misfortune to read.)

The illusion isn’t that you saw red, the illusion is that you’re the one who saw it. Your brain recorded that an image was seen. For that story to make sense, there needs to be an observer, so it invented one. The current observer has access to most of the past memories, so it believes in its own continuity. It can reflect upon itself, so it believes in its own privileged existence.

And there’s nothing about the experience that is unreal… it was certainly a real experience… the only illusion is that “you” experienced it.

The Jumping Spider has something to tell us about consciousness, I think. Like many predators at a similar level of sophistication, a jumping spider is not self-aware; but it uses its magnificent arthropod eyes to observe the behaviour of its prey, and model it so that it can anticipate the prey’s future behaviour. This is how it hunts.

This modelling and anticipatory ability is the key to self-awareness, I think; humans (and possibly some other animals) have turned this modelling ability back upon themselves, so they can anticipate the behaviour of the several non-conscious agents which make up their minds. Better still, with the facility of language we can tell each other about these self-referential models and convince each other that we are conscious. But actually we are ‘just’ continually modelling ourselves.

eburacum45: total agreement.

Among other things, this is an interesting explanation for the weird sensation many people feel when standing at the edge of a cliff. Many of us have the sense of an inexplicable urge to jump off!

(Edgar Allan Poe described this in “The Imp of the Perverse.”)

The explanation could be that our brains are constantly putting forward options for us. Break for lunch? Keep working? Soda? Coffee? Potty break? Just hold it for another ten minutes? etc.

When we’re standing at the edge of a precipice (“I see what you did there”) one of the possible options is…to jump. It’s a rotten option, and we reject it with some emotional stress. But the option-maker doesn’t know that. Its job is simply to put forward all possible options, to be assessed by another functionary entirely. So it keeps doing its job. “You could jump…” And the rest of the committee says, “That’s nuts!”

The same thing happens when you have to bite your tongue to keep from saying what you really think to your boss or mother-in-law. The option-offerer suggests, “You could tell the truth,” and the diplomatic censor says, “BAD idea!”
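
As a toy sketch of that generate-then-veto committee (the situations, option lists, and “censor” rules below are made up purely to illustrate the architecture – nothing here is meant as an actual model of the brain):

[code]
# Toy "committee" architecture: one subsystem blindly proposes every
# option it can think of; a separate subsystem vetoes the bad ones.
# The situations, options, and veto rules are invented for illustration.

def option_maker(situation):
    """Put forward all possible options, good and rotten alike."""
    options = {
        "cliff edge": ["step back", "take a photo", "jump off"],
        "annoyed at boss": ["bite your tongue", "change the subject",
                            "say what you really think"],
    }
    return options.get(situation, ["do nothing"])

def censor(option):
    """Another functionary entirely, which assesses each proposal."""
    vetoed = {"jump off", "say what you really think"}
    return option not in vetoed

for situation in ("cliff edge", "annoyed at boss"):
    proposals = option_maker(situation)
    approved = [o for o in proposals if censor(o)]
    print(situation, "->", approved)
[/code]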

But where, exactly, does that get you? I mean, the question then just becomes: how do you get to 0.5% consciousness from none? Or to 0.05%? Or if you want to introduce a continuum of consciousness, then how does one get onto that continuum using only materials that aren’t already on it?

And it introduces the further question: how do those 50 sub-units with 8% consciousness combine into an experiential whole, 100% consciousness? If the neurons detecting edges in my visual cortex are suppressed, and the neurons detecting color in yours likewise, and we both look at a red square, then there wouldn’t be any consciousness of a red square—I would be conscious of redness, you of the square shape, presumably. How does that change if we’re just two homunculi in some bony cage?

I’m not at all convinced by the notion of ‘diminished consciousness’. It seems to me that all the examples you pointed to can equally well be regarded as being fully conscious of diminished stimuli. There either is something it is like to have an experience, or there isn’t. Take the most impoverished experience you can think of—say, an elementary distinction, a conscious experience that can be either this way or that way. Perhaps consciousness of the difference between light and dark. One bit of consciousness, so to speak. But if you’re conscious at all of there being light, or of there being no light, you’re fully conscious of that. How could you be more conscious, or less conscious in this case?

You can, of course, add to the sensory manifold, by adding more bits of information—color, shape, audio, etc. But this isn’t being more conscious—it’s being conscious of more information. The same goes for drunkenness, half-sleep, and so on—what arrives in your consciousness may be a horribly jumbled mess, but you’re fully conscious of that mess (even though you may not remember it). So it’s not clear to me, what, exactly, it would be like to be less aware, say, of the distinction between dark and light, and if the notion makes sense at all.

That’s a strange yardstick to use.

But exactly what is it that is explained by calling consciousness an illusion? It seems to me that the notion is wholly inert.

I think that self-modelling is also a red herring regarding consciousness—you can pretty easily design robots that engage in some simple self-modelling, but this doesn’t imply those robots are conscious. You can have a model of your ‘self’, in the sense of your body, its position, and actions, in the world, without there being anything it’s like to have that model, i.e. some robot would not need to have access to a phenomenal manifold in order to engage in self-modelling. But that phenomenal manifold is exactly what we want to explain; so if self-modelling is independent of it, then it’s hard to see how it could be of any help regarding the problem of conscious experience.
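
For what it’s worth, here is the kind of minimal self-modelling I have in mind – a toy agent that tracks and predicts its own state, with no “phenomenal manifold” anywhere in sight (the class name and the dynamics are invented for the sketch):

[code]
# Toy self-modelling agent: it keeps an internal model of its own
# position and uses that model to anticipate its own future state.
# Nothing here plausibly has experiences; it's illustration only.

class SelfModellingRobot:
    def __init__(self):
        self.position = 0                   # the robot's actual state
        self.self_model = {"position": 0}   # its model of itself

    def act(self, move):
        self.position += move                # what actually happens
        self.self_model["position"] += move  # what it "believes" happened

    def predict_self(self, planned_move):
        """Anticipate its own future state using the self-model."""
        return self.self_model["position"] + planned_move

robot = SelfModellingRobot()
robot.act(3)
print(robot.predict_self(2))  # 5: self-modelling, but no qualia required
[/code]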