My take on consciousness (rather long)

I know there have been many threads on this subject, but I wanted to start one to specifically present my beliefs on the matter (as, admittedly, a person with only very minimal philosophical training) and to ask three specific questions:

  1. Do you agree or disagree with my take on the matter, and why?
  2. Do you consider it a valid (i.e., not logically inconsistent) view, regardless of whether you agree with it? (And assuming it’s a not uncommon view, is it generally regarded as valid?)
  3. Is there a name for my particular position regarding consciousness? (So I can go around saying “I’m a panafanabannanatist,” or whatever. :smiley: )

I’m not trying to convince you I’m right. First of all, there are some points in my line of reasoning where there’s no real evidence one way or another, so I presume things which I think are reasonable in the absence of any evidence to the contrary. You may disagree with these assumptions. And as I say, I don’t yet know much about the subject, so there may be well-known objections to this point of view that I’m not aware of. (I’m interested in reading more on the topic, if you have any recommendations).

Anyway, my take on things is this:

My brain is a physical device which receives information from my nervous system, stores it, and processes it to produce various outputs. Some of these processes I experience consciously (e.g., my decision to type this post), while others I do not experience consciously (e.g., my brain telling my heart when to beat).

Thus, I can conclude that the process of taking in information, processing it, and producing output can happen either consciously or unconsciously. An information processing system may thus either have consciousness or not have consciousness (or, as in the case of my brain, parts of it have consciousness while others do not).

Here by “have consciousness,” I mean “it is consciously experienced by something,” in the sense of the conscious experience of which I’m directly aware. Whether that conscious experience is produced by the information processing system (e.g., my brain) is a separate question (which I discuss below).

I suppose it is possible that my “unconscious” brain activity is experienced by some other consciousness, which is distinct from my own in that I don’t have direct awareness of it. However, I have no evidence that such a “conscious experiencer of my ‘unconscious’ mind” exists, so I presume that it does not.

Moreover, I note that those processes which usually happen consciously can also happen unconsciously. For instance, I can walk or speak while sleeping without being consciously aware that I am doing those things. So it is not true that the decision to speak or to walk about the room “must” be conscious. Even in the case of processes which I am always conscious of, there is no particular reason to think that some entity couldn’t perform those processes unconsciously (just as some people sleepwalk while others do not).

So I see no reason to think that a certain process must be conscious based on the results of that process (be they speech, locomotion, etc.). Likewise, I don’t see any reason to conclude a process is conscious based on the mechanism by which that process takes place. My calculator takes in input when I press the keys, transmits it via electrical impulses, processes that input, and produces output, but I see no reason to think that the calculator is conscious. Even if its wires were replaced with nerves and its memory replaced with neurons, there’s no particular reason to think it’s conscious, as organic information processing systems can also lack consciousness (as in the case of the part of my brain that tells my heart to beat).

Moreover, I don’t see any evidence that a process with more than a certain amount of complexity must occur consciously. The evolution of all life on earth is, I would think, a much more complicated process than my decision to have Cheerios for breakfast, involving the manipulation of vast amounts of data (i.e., the DNA codes of all life on Earth), but I have no reason to think that it is a conscious process. At the very least I can say that there’s no evidence that it requires consciousness (the objections of Intelligent Design proponents notwithstanding).

In the case of other humans, I presume that they have conscious awareness of certain processes happening in their brains. I presume this based on the fact that I am conscious of the analogous processes happening in my brain, and the processes happening in other people’s brains seem to be fundamentally similar to mine. So I can see no particular reason why my thought processes would be somehow selected as the only ones to have a conscious experiencer, and I conclude that other people’s thoughts are also consciously experienced. However, I see no evidence that such processes must be consciously experienced – i.e., I see no reason to think that a person who acts just like me but lacks a conscious experiencer is an impossibility. I only think it is unlikely that a person who is otherwise very much like me would lack consciously experienced thoughts.

So, in conclusion, I see no evidence that any physical process must by its very nature be experienced by some consciousness. Even if I could make every possible physical observation of a system, I have no reason to think this would prove that that system has consciousness. So far as I know there might very well be a system that is physically identical to me in every way but lacks a conscious experiencer.

In the absence not only of any proof that consciousness is physically observable, but even of any evidence that consciousness is physically observable, I presume that it isn’t. That is, I presume that there is no way to distinguish a conscious process from an unconscious one by means of physical observation. The only entity I can definitively say is conscious is myself, because I have direct experience of that consciousness, not derived through my physical senses.

Thus, it is impossible to determine if an entity is conscious or not, except for the entity in question. (I.e., I can determine that I am conscious, but I can’t determine if anyone else is.) Because I believe consciousness can’t be physically observed, I believe it is not a physical property of a system. So, there’s nothing in the construction of my brain that produces consciousness. Rather, I believe that some external consciousness exists that happens to be experiencing certain thought processes in my brain. And I presume that other such non-physical consciousnesses exist and are experiencing thought processes in other people’s brains. (By physical, I suppose I mean physically observable. Something which can’t be physically detected even in principle can’t exactly be said to have physical existence.)

One might argue that it’s irrational to assume that something non-physical could exist, given no evidence of any other non-physical entities. However, I would counter that consciousness is unique in that I experience direct awareness of it not derived through physical senses, and so there’s no particular reason to believe that it is physically observable. Whereas those things which I experience via my physical senses must by definition be physical, or I couldn’t experience them in that way.


Hopefully I’ve at least presented my thoughts coherently. As I said, this isn’t a subject that I know much about yet . . . so please be gentle. :slight_smile:

Are you implying you feel subjective consciousness has a non-physical source? You always struck me as the type who would prefer materialism.

Well, cemi field theory holds that consciousness is an EM field. There seems to be some evidence for this in the form of synchronous neuron firing.

(I can’t find a link with a more professional title)

http://www.mindcontrolforums.com/news/electromagnetic-field-theory-of-consciousness.htm

Recently, synchronous firing of neurones has received considerable attention as a possible route towards conceptual binding stimuli (Eckhorn et al., 1988; Eckhorn et al., 1993; Eckhorn, 1994; Engel et al., 1991a; Engel et al., 1991b; Fries et al., 1997; Gray et al., 1989; Kreiter and Singer, 1996). For instance, Wolf Singer and colleagues demonstrated that neurones in the monkey brain that responded to two independent images of a bar on a screen fired asynchronously when the bars were moving in different directions but fired synchronously when the same bars moved together (Kreiter and Singer, 1996). It appeared that the monkeys registered each bar as a single pattern of neuronal firing but their awareness that the bars represent two aspects of the same object, was encoded by synchrony of firing. In another experiment that examined interocular rivalry in awake strabismic cats, it was discovered that neurones that responded to the attended image fired in synchrony, whereas the same neurones fired randomly when awareness was lost (Fries et al., 1997). In each of these experiments, awareness correlated, not with a pattern of neuronal firing, but with synchrony of firing. Singer, Eckhorn and others have suggested that these 40-80 Hertz synchronous oscillations link distant neurones involved in registering different aspects (colour, shape, movement, etc.) of the same visual perceptions and thereby bind together features of a sensory stimulus (Eckhorn et al., 1988; Singer, 1998). However, if synchronicity is involved in perceptual binding, it is unclear how the brain uses or even detects synchrony.

So consciousness may not be an untestable, unknowable force. It may be related to synchronous neuronal firings, which can be detected.

Well, I can see why you’d think that, seeing as I’m studying physics. Part of me would like it if everything could be explained by physical interactions. But consciousness is the one area where I have really hard time justifying the assumption that we’re dealing with an observable physical phenomenon. Mostly because I can’t conceive of a way that we could detect consciousness, even in principle.

I draw a distinction between what I’m calling “consciousness” and the way the brain processes data. Consciousness to me is the distinct sensation I have (and which I assume other people have) of experiencing things. To look at your example of synchronous neural firing, that might explain something about how the brain works, but how do we know that that’s what explains consciousness? Even if we determine that all human brains exhibit synchronous neural firing, and I assume without proof that all brains have consciousness, I can’t know that a system with synchronous neural firing but without consciousness is impossible.

Before the existence of the computer, there were other properties of human brains that we might have thought were the source of consciousness – the ability to make decisions based on a combination of input and stored data, for instance. But computers can do that, and I don’t see a good reason to assume they’re conscious. If someone comes along and builds a computer that shows the equivalent of synchronous neural firing, should I assume it’s conscious?

Basically, if we want to determine what causes consciousness, we need some way to distinguish between what is or isn’t conscious – and I’m not convinced that’s possible. I’ve read a bit about the Turing test, but that doesn’t seem to address what I mean by consciousness. I have a specific sensation of experiencing things. I see no reason to assume that an entity capable of duplicating my behaviors but lacking that sensation couldn’t exist.

To put it another way:

How would we determine that property X of the brain causes consciousness? We could take away X and see if consciousness ceases. But how would we know consciousness ceases? What if removing X caused a person to become unable to move or speak, but they were still fully aware of what was happening? Well, we could monitor their brain activity, but without already knowing what brain activity represents consciousness we couldn’t tell whether it had ceased or not. (Again, I’m distinguishing “consciousness”, the specific sensation of awareness, from brain activity in general.) OK, but we could restore X, hopefully returning the person to normal, and ask them if they were conscious during the time in question. If they say “yes”, then X can’t be the cause of consciousness. But if they say “no”, this may simply mean that the removal of X prevented the subject from forming memories. It doesn’t seem possible that a person could say “I distinctly remember not being conscious.” If they’re aware of being anything, then this is consciousness (as I’m using the term).
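The branching logic of this thought experiment can be sketched in code. This is purely an illustration of the argument above, not an actual experimental protocol; the function name and return strings are my own invention:

```python
# Hypothetical sketch of the argument above: why a subject's report
# after restoring property X cannot settle whether X causes consciousness.

def interpret_report(report: str) -> str:
    """Interpret the answer to 'were you conscious while X was removed?'"""
    if report == "yes":
        # Consciousness persisted without X, so X can't be its cause.
        return "X is not the cause of consciousness"
    # 'No' is ambiguous: awareness may have ceased, or X's removal may
    # merely have blocked memory formation while awareness continued.
    return "inconclusive"

print(interpret_report("yes"))  # X is not the cause of consciousness
print(interpret_report("no"))   # inconclusive
```

The asymmetry is the whole point: the experiment can only ever rule X out, never confirm it.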

If I’m right and consciousness can’t be detected even in principle (or rather I suppose a lack of consciousness can’t be detected, if we’re willing to trust a person’s declaration that they are conscious), then it seems like it’s outside the domain of physical science. But unlike the existence of God or other metaphysical questions, I know for a fact that consciousness exists, because I have direct awareness of my own consciousness.

Which leaves me in the position of acknowledging that something must exist which can’t really be studied by scientific means.

I’m sure a world like that could exist, a world without subjective perception that ran on instinct.

The article does say

“In another experiment that examined interocular rivalry in awake strabismic cats, it was discovered that neurones that responded to the attended image fired in synchrony, whereas the same neurones fired randomly when awareness was lost”

The theory implies that consciousness is an EM field, and that the interaction of that field with action potentials in our neurons, in the form of synchronous neural firings, is a sign of conscious intent.
http://www.mindcontrolforums.com/news/electromagnetic-field-theory-of-consciousness.htm

I have earlier proposed (McFadden, 2000) that the seat of consciousness is the brain´s em field and a similar proposal has recently been put forward by Sue Pockett (Pockett, 2000). I therefore examine the proposition that the brain´s em field is consciousness and that information held in distributed neurones is integrated into a single conscious em field: the cemi field.

The cemi field theory makes a number of testable predictions:

  1. Stimuli that reach conscious awareness will be associated with em field modulations that are strong enough to directly influence the firing of motor neurones.

  2. Stimuli that do not reach conscious awareness will not be associated with em field modulations that affect motor neurone firing.

  3. The cemi field theory claims that consciousness represents a stream of information passing through the brain´s em field. Increased complexity of conscious thinking should therefore correlate with increased complexity of the brain´s em field.

  4. Agents that disrupt the interaction between the brain´s em field and neurones will induce unconsciousness.

  5. Arousal and alertness will correlate with conditions in which em field fluctuations are most likely to influence neurone firing; conversely, low arousal and unconsciousness will correlate with conditions when em fields are least likely to influence neurone firing.

  6. The brain´s em field should be relatively insulated to perturbation from exogenous em fields encountered in normal environments.

  7. The evolution of consciousness in animals should correlate with an increasing level of electrical coupling between the brain´s endogenous em field and (receiver) neurone firing.

  8. Consciousness should demonstrate field-level dynamics.

Thanks, this looks really interesting. I’ll definitely read up on it more.

You make a significant jump halfway down your OP: from not considering something impossible (ie. ascribing to it a small nonzero probability) to considering it unlikely overall because of that tiny probability.

…becomes…

You say you believe other people are conscious, then say you believe they aren’t.

What you’re confusing here is what does and doesn’t constitute “evidence of consciousness”. Now, you made the mistake of demanding definitive evidence of consciousness. I would suggest that there is no definitive evidence of anything, since any evidence can always be interpreted differently (witness Young Earth Creationists’ Flood-based interpretation of the fossil record, for example. Or, even, the nonzero probability that it’s all bollocks anyway.) We can just set forth evidence and leave it up to each other to interpret it.

In cognitive science, there is physical evidence of consciousness. What is it? It’s me asking someone whether they’re conscious or not, and them uttering a physical soundwave “yes I’m conscious.” Now, of course, that’s not very strong physical evidence. We strengthen it by asking further questions, like, what are you conscious of when I, say, show you this picture or say these words or put this electromagnetic helmet on your head. After a conversation of physical soundwaves (or physical visual responses, or whatever), we conclude that the person is as conscious as us (while still allowing the tiny tiny chance that we’re talking to an incredibly convincing non-conscious entity). As an analogy, imagine if I told you I didn’t believe in life, and asked you to physically demonstrate it. You would point to a cat, or a plant, or a cell and say “There, that’s life”. And I might reply “No, that’s just a thing you say is alive. I want you to show me the actual life.” Again, you would be classifying things as alive or dead solely by their behaviour - there is still no definitive physical evidence you could present.

This method of generating physical evidence regarding consciousness is called heterophenomenology (HP). The incredibly convincing non-conscious person is called a zombie. If you can spare the time, these two excellent essays by Dan Dennett might convince you that HP is a reasonable scientific methodology for investigating consciousness, and that zombies are impossible.

Thanks for the essays, which I will definitely take the time to read.

SentientMeat, is the latter of your links correct? I haven’t had time to read the article yet, but a quick search shows it doesn’t contain the word “zombie”.

No, but it addresses the central question: could someone look like they had “qualia” without actually having them?

Still, if you want it more explicit, Dennett uses the ‘z’ word specifically here. (It’s just not quite as concise an essay IMO, but if you’ve got the time in addition to the other two … you clearly don’t have children :)).

Sorry it took me a few days to get back to this. Got a little busy with other things . . .

Yeah, it makes a lot more sense now that I’ve read it and learned what qualia are (or, according to Dennett, what they aren’t.)

After reading the articles linked to above (still haven’t gotten around to the last one), and thinking about it a bit more, I think I’m going to backtrack a bit on some of my ramblings of a few nights ago. Specifically, I’m willing to buy into the idea that talking to people about their experience of consciousness may be a legitimate way of studying consciousness. However, I think qualia might still pose a problem for materialism. (And I’m not ready to concede they don’t exist – see my comments below.) It’s easy to see, at least in a vague sense, how a physical system like the brain could contain ideas. We use physical systems to encode ideas all the time: ink marks on a page, pixels on a screen, current flowing through wires, etc. But why should a physical system feel a certain way? Indeed, why should it feel like anything at all? The aforementioned idea of synchronous neural firings, tied together by electromagnetic fields or God knows what, may help to explain some aspects of thought (e.g., how certain ideas are grouped together to form a single concept), but I can’t see how they could explain why something feels a certain way.

Even if we assume that we will someday be able to determine how physical changes in the brain alter our qualia, the best case scenario seems to be that we could develop a theory by which one could look at a certain physical system (e.g., a brain) and describe the qualia (if any) which that system is experiencing. Is this a physical “explanation” of the qualia? I guess it depends on what we mean by an explanation, but I’d say it’s a pretty weak one. In general, a physical explanation would involve some sort of description of how the observed property of the physical system (in this case, its association with particular qualia) arises from the behavior of the system’s constituent parts. But what are the qualia produced by an individual neuron? Or, more fundamentally, what are the qualia of an electron? Since these things have no consciousness (that we know of), or at least no way to communicate this consciousness to us, these seem to be unanswerable questions.

Someone might reply that consciousness is an emergent phenomenon, so of course it can only be observed in sufficiently complex systems – just as a single molecule can’t be identified as solid, liquid, or gas, but a large collection of molecules may be identified as such. However, emergent phenomena can still be explained in terms of constituent elements. If I were to say, “Ice turns into water at 0 degrees centigrade,” this might be an accurate description of the phase transition between ice and water (at least at a certain pressure), but it doesn’t explain why the transition occurs. An explanation would involve saying something like: with a certain amount of added heat, the lattice structure in which the molecules are arranged breaks apart, and individual molecules are free to move independently of one another. The collective phenomenon of a phase transition is explicable in terms of the behavior of the constituent molecules. Even though individual molecules aren’t in a certain phase, they have measurable dynamics which, collectively, determine the dynamics of the substance as a whole, which define its phase.

But there are no “sub-qualia” for the individual particles of the brain which collectively make up whole qualia. Or if there are, then these “sub-qualia” can’t be measured, since so far as we know qualia are only measurable in conscious entities (by means of conversing with them). The simplest system that has “measurable” qualia (the brain of a conscious entity – or, at least, a significant portion of a brain) is enormously complex in terms of physics, and we can’t hope to explain qualia in terms of simpler components of the system. Even if we can convince ourselves that qualia are “caused” by brain states (in the sense that certain states always correspond to certain qualia), can we call this a physical explanation if we can’t explain in physical terms how those brain configurations cause those qualia? If that explanation is perhaps unknowable even in principle?

I guess it’s clear from my above comments that I don’t really buy into Dennett’s argument that qualia don’t exist. Without trying to respond to everything he said, his primary argument against qualia’s existence seems to hinge on the interconnectedness between our perception of qualia and our memory. E.g., the person with the inverted color spectrum can’t tell if their color-qualia are truly inverted, or if their memory of past experience of those qualia is flawed. Similarly, the wine tasters don’t know if the taste-qualia produced by the wine have changed, or if their enjoyment of those tastes simply isn’t what it once was. In essence, they are unable to precisely recall the previous taste qualia for direct comparison, and the taste may have changed so subtly over time that they are unaware of it.

Let me address what I think is the simpler example (the inverted color spectrum), although I think these comments are applicable to both. It’s true that we can’t tell if our qualia have changed or our memory has been altered – but so what? Our ability to identify any change is dependent on the reliability of our memory. I can’t even say whether I flipped the channel of my T.V. or whether an “evil neurosurgeon” altered my memory of what channel it was previously on (unless I rely on external confirmation such as someone else in the room or a video recording.) More to the point, suppose I have an opinion on some issue, and then, never having told anyone my opinion, I change my mind. I have no way of knowing if my opinion changed, or if my memory of my previous opinion has been altered – but I doubt many people would be willing to claim that unexpressed opinions don’t exist.

Perhaps Dennett isn’t talking about memory alteration but an alteration in the qualia produced by our memory. (I.e., how it feels to remember, say, the color blue.) He uses the phrase “memory-linked qualia reactions,” the exact meaning of which isn’t entirely obvious to me. He may be saying: assuming we have qualia from our visual input and qualia from past visual input, there’s no possible way to distinguish the two. If that’s the case, I’m willing to concede the point. Perhaps when looking at a blue sky we have a feeling which simultaneously encompasses the experience of looking at the sky and the memory of seeing other blue things. Perhaps it’s impossible to separate out those feelings and say “my memory qualia were changed,” or “my sight qualia were changed.” But if we looked at a blue sky and saw red, we could certainly say something had changed. Specifically, we could say some qualia had changed – assuming our memory is intact.

So, to summarize my point: If Dennett is saying “You can’t distinguish a change in visual qualia from an alteration in your memory,” then I say, “So what?” You can’t distinguish changes in lots of things from altered memory. If, on the other hand, he’s saying “you can’t distinguish a change in visual qualia from a change in the qualia produced by memory,” then I agree, but so what? If you can detect that some qualia were changed, then qualia in general are real – it’s not necessary to be able to determine which qualia were changed.

Thoughts?

Just a couple quick thoughts…

What is the difference between ideas and feelings? Are they really of a different philosophical kind?

What is the temperature of a single molecule? The pressure? Also unanswerable, but do we not have physical explanations for these properties?

Emergence is a sloppy concept that needs clarification; for many people, to say something is an emergent property is exactly to say that it cannot be explained in terms of constituent elements.

It seems to me that this is the problem with qualia – they are nought but a reification (see Dennett’s writings on folk psychology). That is, we have these sensations and experiences that seem so real and immediate that people make them into static, conceptual objects that can then be analyzed and discussed. They’re assigned independent existence and philosophical reality. But, IMHO, there’s no there there; qualia are scents carried on a summer breeze, wispy and fleeting, and ultimately dependent on and directly traceable to an organized physical process.

That’s as may be, but at least it dispenses with all this “ooooh, it’s just mysterious” guff I keep seeing in the non- or anti-physicalist literature, by placing it on an everyday, mundane computational footing. I speak of the concept of encryption, in which all we can see is the activity in another device without knowing what it represents. In that thread, I admitted full well that we cannot experience other people’s experiences, but that that is as little a bar to scientific study as our inability to live their life or occupy their exact spatial location.

I’d say not, but can only argue by analogy with that which was just as great a mystery in the 19th century: life. What is the life of a protein, or an electron? And yet, there it is on the microscope slide, a living thing where, 4 billion years ago, there was no “life”. And even then, am I looking at the life itself? No – only the behaviour of the living thing. The non-physicalist objection to qualia from neurons is identical in form to the 19th century vitalist’s objection to life from molecules. I see no distinction (and have never heard a convincing one from non-physicalists) between the principle of explaining life by a spatio-temporal arrangement of molecular reactions, the principle of explaining computer games by a spatio-temporal arrangement of electronic chip activity and the principle of explaining qualia by a spatio-temporal arrangement of neural activity.

But we can: take, for instance, that quintessential mental entity – a memory. Just as we explain digital computer memories in terms of their arrangement of electronic domains in RAM chips and buffers, so we can explain human memory in similar terms. Now, we can’t decrypt those memories in other people very easily, if at all, but the principle is similar. Unless one says that computational cryptography is inexplicable (which raises the question of how you use it successfully every day), the computational physics of meat computers need not be inexplicable either.

But the point of qualia is that by definition they have nothing to do with something so demonstrably physical as memory formation. I don’t think you realise just how “mysterious” and ghost-like non-physicalists want qualia to be!

The difference is that the judgement of life in others, like the concept of computer games, is a consciousness-generated activity. If I say that a plant is alive, I’m applying a judgement. These judgements can change with the tides of evolving philosophies and rational foundations. Consciousness is the mode of being, and is the given. For each person, it is tantamount to asking why they exist, at all.

Then we can look at Shakey the Robot judging objects in the room as “box” or “not box”, or bees judging distances as “near” or “far”, and repeat the question “what’s so unphysical about the judging ability itself?” The non-answer I’ve received from non-physicalists in the past is “it’s just not, OK?”.

It’s a conscious act. Assuming it to be physical, is begging the question.

But I’m not just assuming it is, I’m asking why it can’t be. That’s the principle of Ockham’s Razor: If I can present a judging mechanism comprising inputs, processes and outputs, then that judging ability is as physical as that of motion, metabolism, reproduction or whatever can be judged as “life”. Again, doing so does not assume that life is physical either, it merely asks whether some nonphysical element is necessary for life.

Of course, it can be. You could be God and I could be a dream in your head.

Sure, present this judging mechanism.

The mutated concept of ‘life’ (“the property or quality that distinguishes living organisms from dead organisms and inanimate matter, manifested in functions such as metabolism, growth, reproduction, and response to stimuli or adaptation to the environment originating from within the organism”) is, by definition, a physical concept. So it normally makes no sense to ask whether a nonphysical element is required for life. It only makes sense if ‘life’ is a proxy word for ‘consciousness’.

Shakey the Robot visually processed the light from the camera into edges and vertices. IF (no. conjoined vertices) = 3 THEN output “box” ELSE output “not a box”. All of these functions were easily realised on physical circuitry by human designers. If Shakey were somehow able to replicate himself, and the ability to judge boxes somehow aided that replication, one can even imagine the function being realised on physical circuitry without human designers given enough time. This would thus be an example of a physical judging mechanism. We could then ask whether such a function could be achieved by biological circuitry. We could then ask what more is needed in a parsimonious Ockham-ish sense to make such an act of judgement a conscious act of judgement.
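A minimal sketch of such a judging mechanism, written in modern Python rather than Shakey’s actual 1960s software (the function name and the three-vertex threshold are illustrative assumptions taken from the description above):

```python
# Illustrative Shakey-style judging mechanism: a pure
# input -> process -> output function realised entirely in code.

def judge_box(conjoined_vertices: int) -> str:
    """Judge a detected shape as 'box' or 'not a box' from the
    number of conjoined vertices found by edge/vertex detection."""
    return "box" if conjoined_vertices == 3 else "not a box"

# Applying the judgement to a stream of detections:
for vertices in (3, 2, 5, 3):
    print(vertices, "->", judge_box(vertices))
```

Nothing in the mechanism is anything but inputs, processing, and outputs – which is exactly the point of the Ockham-ish question being asked.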

Not necessarily: in the 19th century there were a great many “vitalists” who would have objected to your assumption of life’s physicality. “Motion, metabolism and the rest of it are just the behaviour of living things, not the life itself”, they’d say. Now, you and I might very well agree that the vitalist position is rather old fashioned and unparsimonious, but it was very popular even amongst respected thinkers. I only suggest that the non-physicalism of consciousness will one day be thought of just as you and I think of the non-physicalism of life today.

As I alluded, if ‘life’ is a codeword for ‘consciousness’, or generally ‘interiority’, then vitalism is a reasonable option. Maybe if you can cite a primary source of vitalism, then this aspect can be clarified.