Do we even know what consciousness is at all?

There seem to be different uses of the word “quale” or “qualia”.

One is just an experience. An element of an experience. What it feels like to experience.

Another involves the concept of a universal characteristic.

Another proposes it as the smallest unit of an experience that can be understood as that experience. Like the color red.

I’m playing catchup on the philosophy terms you guys are using, so bear with me.

About ten years ago there were a number of experimental results that seemed to show that human subjects sometimes or often decide to act rapidly and unconsciously, before they have time to make a conscious decision. They then rationalise their actions and their choices after the fact - a kind of post-hoc rationalisation, if you will.

If the unconscious mind cannot appreciate qualia, then these decisions are not affected by them. Certainly a lot of actions are executed very rapidly, and we are running on ‘auto-pilot’ much of the time, especially when playing sports or performing well-rehearsed actions. Does that mean that qualia are shoe-horned in after the fact? I feel pain when I burn my hand, but by that time my arm has already retracted.

I think it’s possible that the unconscious mind may be affected by whether the conscious mind is pleased or displeased by something, or considers something important (and vice versa), even though the conscious and the unconscious minds don’t fully understand each other. There’s communication between them, though it’s in a form that the conscious mind doesn’t understand.

My dreaming mind can’t read; but it knows that reading is important to my conscious mind. I know that because it’s common for there to be reading matter in my dreams, but I never actually succeed in reading any of it.

I think the discussion of intentionality muddies the waters a bit here.
Even if it turned out that all our decisions happen at the subconscious level and the consciousness is “fooled” into thinking it made a decision, that would tell us nothing about what pain is, or answer any of the accompanying questions.

As I say, it’s important to emphasize the practical predictions and inferences we can make. Because it’s really tempting, with many phenomena, to claim we have a rough idea and just need to fill in “the details”. Asking the practical questions gives us a better scorecard of how far along we are in answering the known unknowns.

I think the alternative meant was the objective reality, not some “objective experience”. The contrast was the internal experience versus whatever caused the sensory signals.

From what I gather, to you a quale is an experience. Does that mean any conscious thought, or just experience of a condition derived from sense data?

I’m not following you. Yes, it is possible to conceive of a set of reactions that are a response to bumping something without a pain signal. However, the specific reactions aren’t unconnected. The jumping up and down and cursing and screaming happen precisely because of the damage signal. If you bump your toe lightly, you don’t have any of those reactions just because you bumped your toe.

Hmm, sounds like the Libet experiments, but the original was performed in the late 70s I think… Anyway, there have been more recent studies casting doubt on this sort of interpretation:

Just to avoid some potential confusion, I think you are talking here about intentional, i.e. deliberate, action, right? In the philosophy of mind, the word ‘intentionality’ usually denotes that quality of a mental state that makes it about or refer to something external. I.e. if you think about an apple, that apple is the thought’s intentional object (or, in Brentano’s formula, the apple has ‘intentional in-existence’ within the thought).

Qualia are the qualitative component of experience: if I see red, the redness I experience is the associated quale; if I feel pain, it’s the pain’s unique painfulness. In Thomas Nagel’s phrasing, they’re ‘what it is like’ for me to be having that particular experience. Hence, qualia are associated with any sort of conscious experience, not just that derived from sensory data.

Sure, but there is no reason for the ‘pain signal’ to feel like anything. Your reaction is fully determined by the neural lightning storm triggered in reaction to the stimulus, and that’s just a (long, complicated) chain of physical cause and effect. What role is played by the pain quale, and how does it come about? Generally, we don’t expect physical causality to be accompanied by phenomenal experience in cases that don’t occur within brains, and it seems entirely possible to imagine the sort of causality that occurs there to also fail to have any phenomenal content. (That’s in the end why we’ve often had the statement in this thread that we can’t even be sure that another person is conscious.) Yet here we are, merrily experiencing away.

Indeed, since we believe that the physical is causally closed, it’s hard to find any role for qualia to play at all, which has led to the doctrine of epiphenomenalism: qualia are just by-products of our mental processes, but don’t themselves have any causal role to play.

Yes I was, thanks for the correction.

My concern is that we (inadvertently) deflect onto questions simpler than qualia and then prematurely claim we are close to an understanding of consciousness.
Understanding how we make decisions, and the interplay between the conscious and the subconscious, is at least a problem we can make headway on (a scientific model doesn’t need to explain everything, so even though such a model would refer to consciousness, it wouldn’t necessarily need to explain consciousness to make useful inferences), versus something like how an arrangement of neurons has an experience.

(Rather pleasingly—to me, at least—the two senses of ‘intentional’ turn out to be linked in my model: an agent takes an action A if it can establish—i.e. prove—that doing so furthers a certain goal G—a desirable future state of the world. The proposition embodying this turns out to be self-reflectively equivalent, in the sense of the modal fixed-point theorem, to one containing just that goal. So the state of mind producing a certain action turns out to be about that action’s intended result.)
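(In symbols, a minimal sketch of one way to read that claim: write $\sigma$ for ‘the agent takes action A’, $G$ for the goal, and $\Box$ for the agent’s provability operator in the provability logic GL. The action criterion is then the self-referential

$$\sigma \;\leftrightarrow\; \Box(\sigma \to G)$$

and by the de Jongh–Sambin fixed-point theorem this has the explicit solution $\sigma \equiv \Box G$: boxing the tautology $G \to (\Box G \to G)$ gives one direction, and Löb’s theorem, $\Box(\Box G \to G) \to \Box G$, gives the other. So the state of mind producing the action is provably equivalent to a statement containing just the goal.)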

Now that the words “intentional, i.e. deliberate” have been mentioned, this may be the time to include the volition aspect in this discussion. A living being – particularly but not limited to a human – wants, or does not want, something. But you cannot want to want something. And a computer does not want anything at all, as far as I can tell.

Ok, sure, why is pain pain? Yes, that’s a reasonable question. I mean, the signal is a damage alert, perhaps also a conditioning signal. It makes sense that if we are conscious and experience anything, especially if that experience plays a role in our future decision making, the signal should be strong and a clear message. Even more so if the signal is to work on a non-cognitive level.

But that explanation still requires us to have experiences, and doesn’t explain those experiences, those qualia. Qualia are what they are. Next question.
:grinning:

Well, maybe, but not the one under discussion. Rather, that question is: as the physical chain of causality leading to a pain reaction does nowhere necessitate or imply a pain experience, are such experiences irreducible to physics? If you specify all the physical facts about the universe, are the experiential facts still open? I.e. the zombie argument.

As something of a “psychonaut” I have experienced a fairly large range of mind altering chemicals.

Ketamine, particularly, is used in third world countries as an anaesthetic. It is not an anaesthetic like, say, opium; it is a dissociative, more like DMT (though they are vastly different chemicals).

Dissociative chemicals fool the brain. The user experiences pain (and other standard body functions) but does not perceive them as harmful or dangerous.

So, I support your position, but would specify that the experience of pain is not linked to any physical factor. It is chemical.

Ok, but in this context, chemical is physical. The brain is an electrochemical system; any combination of these still constitutes the physics of the system.

Regardless, this misses the key problem for a theory of consciousness. I mentioned upthread the “pain centres” of the brain; these comprise areas of the cortex, thalamus, amygdala and other regions, correlated with the feeling of pain.
It’s a bit of a complex picture, frankly, so, for argument’s sake, let’s imagine that it were just a tiny part of the amygdala, and, in keeping with your observation, let’s say it can be circumvented by blocking receptors, chemically.

Does any of this answer how a set of neurons can have an unpleasant experience?
No; the description is still at the point of pain being a button in the brain that various things (like pain receptors) can press, and other things (like ketamine) can prevent being pushed. But how can such a button exist…what does it actually do?
How does data about damage become an experience?

It is hard to argue with you. My experience is most likely far, far different from yours.

I can’t, for example, describe my experience after smoking Salvia Divinorum, huffing chloroform, smoking DMT. Doing LSD. LSA. Using MDMA, MBDB, MDA. A bunch of varied chemicals, many of which are not at all well known.

I mean, like who knows the difference between MDMA and MDA? Only us users.

The button just simply does not exist. It doesn’t need to exist. Why should it?

I am something of an evolutionary prescriptivist despite my fairly nihilist world view.

As @Mijin points out, chemistry is just physics, in the sense that the physical facts regarding the various chemicals at play fully determine them and their interactions. So when you’re under the influence of assorted enhancements, what you’ve done is change the physical facts of your brain; that this brings with it a change of the experiential facts is unsurprising on a materialist picture of the mind.

The problem is that it seems like the experiential facts might change—might even be absent completely—without any change in the physical facts. If that’s the case, then the physical facts fail to determine the experiential facts, and the latter are in some sense ‘extra’—a completely separate substance, as in Cartesian dualism, or a set of additional qualities, as in property dualism, or something of the sort.

I should also clarify that this isn’t a position I hold. I don’t think there’s anything but physics going on in the brain (and elsewhere). It’s just that our ability to derive consequences from (physical) models is limited to the structural (because all a model is, is an instantiation of a given system’s structure within another system), and hence misses the intrinsic features that give conscious experience its particular qualitative nature. That’s why we can only ever know the intrinsic qualities of that one particular thing that we ourselves are, and are limited to the structural or relational with respect to everything else.

I think that falls back on the larger question of how we have experiences in the first place. Until we understand how experiences work, we can’t begin to diagnose why one experience is the way it is.

The philosophical zombie is a mess. It proposes a body that reacts exactly the same as a regular human, but doesn’t feel anything.

I think that is a self-contradiction. Part of being human is the experiencing. How could a philosophical zombie react the same way?

You say nerve signals and chains of causality, but take, for example, the stubbed toe. The cursing and yelling is a reaction to the experience of the pain. Without pain, a person wouldn’t do those things. It’s the unpleasantness that drives the reaction. Why would a philosophical zombie do that?

Possibly it wasn’t the best attempt at a metaphor. The point is simply that the upstream causes of pain, and the downstream effects of pain don’t explain the middle bit. And it’s the middle bit that’s the hardest to explain.

Pain is such a natural thing that we take it for granted; it’s hard enough just to appreciate that the pain you’re feeling on your finger, say, really resides in your brain.

But how a brain makes a negative feeling remains mysterious. If strong AI is possible, it implies that I could cause more suffering than a single human is capable of experiencing, more suffering than the Holocaust, just by running some code millions of times. It seems absurd to imagine a mere set of instructions being painful for the code itself, and yet we know matter is capable of being in pain, because we are. Somehow.

Come at it from the other side.
I can trivially write some code that displays the message “ow!” when you press the space bar. And we can trivially give a robot a knee jerk reflex.
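For concreteness, here’s that first example as an actual (toy) program; a minimal Python sketch, with every detail invented just to make the point vivid:

```python
# A toy "pain reaction" with no plausible inner experience:
# type a space (then Enter) and the program says "ow!".
while True:
    key = input("> ")      # read a line from the terminal
    if key == " ":
        print("ow!")       # the entire "reaction" to the "stimulus"
    elif key == "q":       # type q to quit
        break
```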

In computer games you can have agents that act autonomously and could detect, say, bodily damage due to fire and learn to escape that damage.
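The game-agent case can be sketched the same way, e.g. as a minimal tabular Q-learner (the world, rewards, and parameters below are all invented for illustration): the agent receives a negative ‘damage signal’ in a fire cell and learns a policy that avoids it, with nothing resembling felt pain anywhere in the loop.

```python
import random

# Toy world: a 1-D strip of cells where cell 0 is "on fire".
# Entering it yields a negative reward: a bare damage signal.
N_CELLS = 5
ACTIONS = [-1, +1]                       # move left / move right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration

def step(s, a):
    s2 = min(max(s + a, 0), N_CELLS - 1)
    reward = -1.0 if s2 == 0 else 0.0    # "damage" only in the fire cell
    return s2, reward

for _ in range(500):                     # training episodes
    s = random.randrange(N_CELLS)
    for _ in range(20):                  # steps per episode
        if random.random() < eps:        # occasional exploration
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Near the fire cell, the learned policy points away from it.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_CELLS)})
```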

Right now, we can be pretty sure that in none of these examples are they experiencing actual pain. But what thing are they missing?
How will we know when we’ve created something that genuinely feels pain as opposed to acting as though it does?

Maybe we overdo it and make a machine that feels the same level of agony as a human being burned to death the first time it has a paper cut.

Indeed, we may well want to ensure that advanced robots that we create don’t feel pain, even though they try to avoid damage to their bodies…how do we ensure that, and how will we know that we were successful?

These are all rhetorical questions of course, because we simply don’t know right now.

They are really not though. We do understand quite a lot. The most important thing is feelings, which are largely chemical (dopamine, oxytocin, etc) and hard to replicate in a computational context.
What makes us happy? That is a thing that is difficult to code into a machine, just like thirst or horniness or laughter. These things might possibly be simulated to some degree, but it would be hard to determine whether the simulations were meaningful. I suppose the only way to be sure is if a machine were to spontaneously decide to paint something or compose a song.

What we do have is human sociopaths, which give us pretty good clues about how to study rational machines for meaningful signs of consciousness.

The existence of the emotional/sensory aspect provides us with the means of empathy. We can analogize our experience and project/infer it on/in others. Empathy informs our decision making in ways that do not align with pure rational choice. Sometimes a machine makes better choices than a person, but mostly not so much, as far as the people are concerned.

I’m going to try to present my view of human consciousness in some detail beginning with this post. I’m dividing out major areas of discussion that I’ll refer to using loosely analogous computer terminology: hardware, software, and application. Hardware is our brains, software is the internal processing of our brains, and the application is the end-user interface, which we call consciousness.

There are some things we know about the hardware.

The brain develops from a general plan in our DNA. Each of our brains develops into the exact functioning organ it is as a result of all the constraints on its growth: the rest of our body growing from the same plan, and the numerous external factors from the world around us. Just as our DNA doesn’t determine the exact size and shape of our little fingers, our brains develop uniquely while still fitting into a large and incredibly complex system of biological interactions.

While we see that brains have multiple physical components, and all brains are composed of the same set of components, each of those components, and the entire brain, develops under constraints, so that none of us has exactly the same hardware, in the sense of no two snowflakes being identical.

Some of the hardware is hardwired and dedicated to specific functions. Some of these things are low-level cellular operations, such as the production of hormones that affect other organs to keep our bodies functioning. We are slowly working out some details of how these function, allowing us to create medicines.

The hardware also enables the high level processes we are mostly discussing, those things that allow us to think, to reason, to communicate, to learn, to create. It’s not clear what the dividing line between hardware and software in our brains is.

We also know our DNA has emerged from billions of years of evolution. Our brains emerge from our DNA, and our DNA has emerged from evolution. This is what ‘emergent phenomena’ are: things that are not apparent in a non-determinate complex system coalesce to create an independent system of more specific functionality.

The software, the underlying system of processing by which our brains perform high-level functions, is what we know the least about. We don’t know the clear dividing line between hardware and software, but we can see that brains are processing information beyond innate abilities that could be hardwired. For instance, at whatever level our ability to use language occurs, we use languages that do not all work the same way. They use words that differ not only syntactically but semantically, and grammars based on different sets of rules. We can speak and comprehend more than one language and even create new languages. This is characteristic of ‘soft’ processes.

And we know next to nothing about the details. Maybe just nothing. From brain scans and biochemistry we can detect electrical and chemical communication within the brain, but we can’t interpret any of the messaging. We can tell when brains are using different components to process, but we can’t examine or analyze the messages in any way. It all exists below our ability to discern. Anything we seem to be able to tell about how our brains process may be hidden deep under layers of processing that obscure the actual processing.

I think this is where the implementation of the concept of qualia is found. We can’t observe a quale; we cannot measure it or record it. It is electrical activity in our brains that affects our conscious thoughts and feelings. We don’t know anything about its structure or the rules it is constrained by. And it’s different for every person. We understand the communicable form of some of a quale’s contents. That is the result of all the processing that happens below the surface.

And it’s not only different for every person, it changes over time. It seems obvious to me this happens in growing from a newborn into an adult. I think it is also a result of learning; our brains become more efficient at repeated processes. Even if we could record our thoughts at some point in time, we don’t know for sure we could replay them in a meaningful way in the future.

Neither the hardware, the software, the qualia, nor the processes are consciousness, though. Consciousness is the result of all that processing. We do know some things about consciousness because it manifests as observable behavior. Human conscious behavior has both consistency and variety among people. We can classify and test the behavior, and that has enabled us to understand what we do know about consciousness. We can see the different functions that compose consciousness, and understand them in comparison to one another.

IMO it isn’t necessary to know how the underlying system works to understand consciousness. It could certainly help, but I do think that consciousness of the kind humans exhibit could be developed in machines, and certain aspects of it are found in other animals as well.

Everyday people use computers with minimal understanding of the hardware or underlying software they are using, yet they can understand everything about the application itself. Many an application has been reverse engineered without any knowledge of the underlying system that it depends on.

The original form of the software we use on computers and the code that actually executes can be disconnected as well. Some applications may include an observable form of the software that the computer is running, but most are developed in high-level languages that are compiled into machine code. We can examine that machine code and see how it works, but from that we cannot always determine what high-level source code was compiled to produce it. The application emerges from the processing of source code into executable object code, and the source code is no longer required for execution.
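A small concrete illustration of that last point, using CPython’s own bytecode as a stand-in (the exact opcodes vary between Python versions, but the effect is the same): two different source texts compile to identical instructions, so the executable form alone can’t tell you which one you started from.

```python
import dis

def f():
    return 3

def g():
    return 1 + 2   # constant-folded to 3 when the function is compiled

# Both functions disassemble to the same instructions, so nothing in the
# compiled form records whether the source said "3" or "1 + 2".
dis.dis(f)
dis.dis(g)
```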

In future posts I’ll try to get more specific about the components of consciousness: memory, abstraction, reasoning, language, etc. And then why I think machines can have consciousness at the level of humans, and that consciousness has levels that are demonstrated in biology until we get to Level 0 - the consciousness of a rock. The 0 means 0 times all measures of consciousness, not that I think rocks have consciousness.