Do we even know what consciousness is at all?

However, making up thought experiments that cannot be falsified because they rely on undetectable differences is not really helpful, and Popper would be turning in his grave.

There really is no comparison; those were explanatory hypotheses that were once reasonable, but turned out to be wrong. Here, by contrast, we have, as long as the various arguments stand, a question about the nature of our data, i.e. the phenomena we try to explain: the explananda, not the explanans. It’s not like getting rid of phlogiston; it’s more like declaring that, because heat, i.e. the phenomenon itself, doesn’t fit into your preferred explanatory scheme, all observations of phenomena connected to heat must be in some sense illusory. Which, to be sure, might turn out to be right, but it is certainly only a last-ditch effort, scientifically.

Not necessarily. As I say, there are many approaches in trying to break down consciousness and that’s just one of them.

Or, put it this way: AI and knowledge based systems can already equal or better human ability at many cognitive and processing tasks. We didn’t need to completely solve fruit fly behaviour before moving on to working on human cognition. Indeed, human cognition has been simpler to make inroads on, in some ways, because of our familiarity with it.

Perhaps, but there’s the rub: third person descriptions are perfectly adequate until you start trying to describe a first person phenomenon, at which time the essential problem becomes relating the two descriptions.

It would be like defining science as the study of the earth and then trivially handwaving astronomy as unscientific because it is making claims beyond that domain.

In terms of the testability, I don’t think we can even say right now. It could be that humans in the future prove that consciousness is caused by a particular neural structure and so declare (not with 100% certainty, but nothing in science is) that your neural structure is necessarily conscious and cannot be a p-zombie.

My method (physically connecting two different minds with high-bandwidth connections) is probably the only one that could ever work, and it may never be possible (and certainly not in the near future). I listed some potential problems (apart from the technical problems, which are significant), and it may be that any two human minds (or any two minds in general) are so different that there is no way to directly compare the experiences of qualia between them.

Maybe everyone’s experience of consciousness is so radically different that it would be impossible to tell if the other person is a p-zombie or not (and to complicate things not everyone is neurotypical, to use the present jargon). Even if we can link minds successfully, it may be necessary to interpose so much translation software that the experience of qualia becomes normalised by the software itself, making sensations appear to be familiar when they are not.

My tentative hypothesis is that the experience of qualia is an evolved solution to dealing with large and complex packets of data as they arrive from the sense organs, and everyone might apprehend those sensations in a different way. If complex translation is necessary to fully appreciate another mind, then that is only to be expected, and it would make comparison difficult - but I am sure it would be a vast and fascinating field of study.

I don’t have much problem with the understanding of consciousness in others; it’s my own consciousness that’s the problem.

I can easily accept that the control system of human (and possibly other) bodies has evolved in such a way that the combination of brain states and heuristics is isomorphic to a self-referential internal dialogue that models the outside world, categorizing stimuli into feelings of red and wet and sweet. Also that there is a system that reintroduces past stimuli into the present state in the form of memories, and a continuous chain for such feelings. And that finally these internal states can be reported to other human control systems that are coded similarly enough that game recognizes game and calls itself conscious.

The part that I can’t quite get my head around is why can’t this all go on on its own and leave me out of it. Why isn’t it just a forest of trees falling with no one to hear them? There is some way in which my inner isomorphic heuristic monologue is being “experienced” by “me”. Is its mere existence enough for it to be experienced? In which case, is an eddy in a tide pool experienced by the water, since it exists as an emergent property? Is there universal experience of everything throughout the past, present and future of spacetime, of which my internal dialogue is an isolated component?

As a question, it’s basically an experiential version of the other philosophical biggie: “Why is there something instead of nothing?”.

That one is clear-cut, though. Nothing is in perfect equilibrium, which is an extremely unstable condition – like a marble perfectly balanced on the tip of a spear. Therefore, the natural behavior of nothing is to collapse into something. Nothing became something because it had to.

Seriously, these difficult questions are only difficult if you want them to be. I mean, can anyone explain why we dream?

Nonsense from end to end.

Nothing is not in any sort of equilibrium. It is absence of that which might be in equilibrium. The idea that nothing causes something to appear is just silly. IMO YMMV.

Not at all. Descriptions like this may help us come to a fuller understanding, but are far short of an explanation of existence itself.

Because firstly this requires an initial state of nothing existing (and let me be clear that I mean a literal nothing; not empty space or anything like that). Now, there’s no reason for something to exist at that point. But there’s no reason for nothing to exist either. The presumption that we don’t need to explain the initial state as long as there aren’t books and mountains and pogs is flawed, philosophically at least.

Then, the idea of nothing being “unstable” is an empirical claim; you can’t get there from pure logic. So we have to just accept that a physical law was simply there from the start.

Philosophically speaking, I don’t see how this is better than shrugging and saying the universe just does exist.

Explaining a reason for something, rather than nothing, existing leads one clearly out of the realm of philosophy, because it would require one to posit a prima causa. In essence, “why” there is something is a non-meaningful issue, such as the period preceding the big bang or what is red. Something exists because that is what it does, and there is not much value in exploring a “why” about it.

Similarly, I have suggested how consciousness formed. It is the most logical explanation: natural selection favored the development of beings endowed with a survival instinct, and the survival instinct is, I submit, the thing that underlies self-awareness/consciousness. I got some helpful advice on the matter from a dude named Occam.
Hence, I suggest that consciousness is not an emergent property but an underlying thing. We have no substantial evidence of one individual’s consciousness being transplanted into someone or something else, and my intuition tells me that its fundamental function is simply not extricable from the hosting entity.

It gets kind of solipsistic. I can only truly see the question from the inside. I can only assume that you have a similar thing inside you, based on the evidence. Studying something so inherently subjective is particularly challenging.

One thing that might be worrisome is that we might find a way to make use of a putative technical understanding of consciousness, and it could be ethically difficult. If I am correct that consciousness is the manifestation of the survival instinct, and we come to a full neurological/physiological understanding of it, that understanding could possibly be used to physically manipulate it in ways that could be harmful to individuals. This would make me very uncomfortable.

If you’re happy with, essentially, “That’s a silly question” being the end of it, then good for you. However, let’s not make the mistake of confusing the suggestion that no explanation is possible, with an explanation. An explanation should have inferential or predictive power.

None of that needs experience though, and Occam would agree.
To go back to one of my earlier examples, I can program an agent in a video game that moves away from sources of harm. I don’t need to give it an inner experience to do so.
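To make that concrete, here’s a minimal sketch of the kind of agent I mean (the names `Entity`, `flee_step` and the “fire pit” are illustrative, not from any real game engine). The point is that harm-avoidance is just arithmetic on coordinates; nothing in it needs, or plausibly has, an inner experience:

```python
# A toy 2D world: the agent moves one step directly away from the nearest
# source of harm each tick. Purely mechanical coordinate arithmetic.

from dataclasses import dataclass
import math


@dataclass
class Entity:
    x: float
    y: float


def flee_step(agent: Entity, harms: list[Entity], speed: float = 1.0) -> Entity:
    """Return the agent's new position after stepping away from the nearest harm."""
    if not harms:
        return agent
    nearest = min(harms, key=lambda h: math.hypot(agent.x - h.x, agent.y - h.y))
    dx, dy = agent.x - nearest.x, agent.y - nearest.y
    dist = math.hypot(dx, dy) or 1.0  # avoid division by zero if they overlap
    return Entity(agent.x + speed * dx / dist, agent.y + speed * dy / dist)


# The agent ends up farther from the harm source every tick, with no "feeling" involved.
agent = Entity(0.0, 0.0)
fire_pit = Entity(2.0, 0.0)
for _ in range(5):
    agent = flee_step(agent, [fire_pit])
print(agent)  # Entity(x=-5.0, y=0.0)
```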

Plus explaining why consciousness evolved would not tell us what it is, and how brains are capable of having experiences, which is the question at hand. It’s like if we were asking how the body is able to repair minor wounds…what’s actually going on there? The answer of “repairing minor wounds helps survival” would be true but not tell us anything about the mechanism.

The quantum microtubuli theory strikes again, according to Popular Mechanics:

I enjoyed Roger Penrose’s popular books in the '80s and '90s, and I still like watching him on YouTube. Which does not mean he is right on this, but he sure is an interesting thinker.

I used the analogy to gravity previously. We can measure the effect of gravity to a pretty precise value. We conceptually describe gravity as bent space-time. We explain gravity as the same inertia that affects all motion of mass.

But do we really have an explanation of how gravity, or any acceleration, actually imparts motion? Do we know what mass and inertia are?

We start talking about subatomic particles and proposed experiments to look for those bits. But subatomic particles work in the realm of quantum mechanics. Yet there is no theoretical structure to tie QM with General Relativity.

Consciousness is similarly elusive.

Speaking of dreams, I had a curious observation for reflection.

Dreams appear to be the fuzzy interface between consciousness and the subconscious. I have heard dreams described as the conscious mind trying to access the subconscious. But my reflection is that it just might be the other way around; dreams are the subconscious leaking into the conscious mind.

Observation: the other night I had a dream, and in it I started interacting with someone. My mind clearly identified the person in question. I felt like I was talking to that specific person. Yet the visual image of that person changed and didn’t look like the same person at all, yet the feeling didn’t change.

This is not an unusual situation for me. I had a dream not too long ago where I was interacting with my father, yet the visual depiction didn’t match my father at any time I’ve known him. All my life, my father had had a beard, except one time when I was about 3 years old. I have seen some pictures of him young. The guy in my dream didn’t look like that at all. Yet I felt him to be my father.

That suggests to me a weird state where the subconscious projection is tapping into the qualia of my father even if failing to replicate the right visual image.

Another different example of qualia entering a dream. I once had a dream that my brother had died and I was going to the funeral. It felt very real, so much so, that as I woke up I was crying. I had to tell myself it was just a dream to settle my emotions.

Many people express having dreams that felt real. I’ve even experienced touch in a dream.

Understanding what is happening with dreams is hampered by the fact that it involves the conscious and the subconscious at the same time, when we don’t understand either on their own.

But maybe that interface offers a different set of observations to follow.

(I missed this part)

The answer to your question is that we don’t know why we dream. There are many hypotheses, and most have some supporting evidence, but not to a sufficient level where we have good inferential and predictive power, and hence nothing like a consensus view.

Heck, we could say the same thing about sleep. Yes there are many things our bodies and minds do during sleep, but most of them could be done to some extent during waking, so it remains possible that there is a key function that we are currently unaware of, and everything else is just a convenient time to do X.

Finally, I don’t get what is meant by “only difficult if you want them to be”. Either we have inferential and predictive power or we don’t (yes, I’m repeating this phrase a lot, because it’s the key thing to remember when we are about to claim we understand something).
Whether anyone wants to call something easy or hard is up to them, though I personally find it odd that someone would insist on calling a problem “easy”, or not difficult, when generations of scientists and other academics have been working on the problem and we are nowhere close to a model with predictive and inferential power.

Huh. I read a book called Gifts of the Crow in which Marzluff casually demystified dreaming. It was not a major subject of the book, but he wrote as though it was settled science. And what he wrote made sense (though I do realize that many things in this world are counterintuitive).
Marzluff portrayed dreaming as essentially a side effect of unconscious mental housekeeping: the brain spends sleepytime sweeping up the experiential debris of waking life, tossing some of it, and filing the useful stuff appropriately. The actual dreams are crossfiring of neurons that happen to be along the path of the maintenance work. And we know that people who fail to get proper REM sleep tend to develop some serious mental problems, so the explanation fits.

Now, why dreams seem to have a semi-logical narrative is another question entirely. I would guess that the brain seeks pattern construction, so the dream signals get shepherded into something more than weird abstraction. The cleanup process is working to stabilize the mind, so the dreams form reachable content, and sometimes even get committed to memory.

I am beginning to think that the path to understanding consciousness begins with studying unconsciousness and how the two states relate to each other.

Except even if we go with “nothing is in perfect equilibrium,” there are two states of equilibrium. The first is balanced on a peak, like your marble on a spear tip. The second is a valley, like the bottom of a well. It’s very stable when there is nowhere lower to go.

Even if the dip is a notch on the side of a mountain, it takes energy to climb out of the notch.

Besides, equilibrium is a condition that applies to stuff. By definition, nothing isn’t stuff.

So natural selection favors a survival instinct, and consciousness is the manifestation of survival instinct.

Ok, but that says nothing about how survival instinct arises. It just says that having it is more successful than not having it.

And it fails to explain how awareness works, what makes experiences and what makes that internal observer.

I have no problem with the concept that the identity is a construct of brain function, that it arises out of the brain assembling data and developing structure from neural connections. And that would definitely make extracting that identity or any experiences from the host brain a challenge. It still doesn’t explain what identity is or how experiences happen.

Well I think he was wrong to do so, and it’s a pity because I don’t want to criticize someone who is studying and writing about my favorite birds.

Here’s a summary of the state of play from the Journal of Sleep Research, published in 2024 (I know; I need to get with the times).

The theory that dreams are a side effect of a kind of memory culling or reorganization has plenty of supporting evidence. But plenty of issues too. For one thing, contrary to popular belief, we don’t only dream during REM sleep. So, much of the neuronal activation noted during REM turns out not to be necessary for dreaming after all. Conversely, there are many people who never remember dreams even when being woken during REM.

Also, a lot of the reason that dreams can have such salience is because areas of the brain associated with emotion and instinct are strongly activated during dreaming. I am not aware of a good hypothesis for why this is necessary for reorganization, and indeed it can often be counter-productive (e.g. if I had a nightmare of eating a cake when a hand bursts out, that might put me off eating cake the next day…so affecting my real world actions with a spurious association).

Of course, we could say this emotional activation is just a side effect. But IMO it’s a bad smell when we’re explaining a central part of a phenomenon with an off-the-cuff, catch-all explanation.

Yes, I think a coherent narrative is something the conscious mind seeks, so shown a set of discordant “clips” we try to form them into a story. IME this is also what makes it difficult to record dreams; you want to make it into a self-consistent story, which confuses things, because a lot of the details were not consistent.

Agreed. Both in terms of studying full unconsciousness / anesthesia, and not drawing such a hard line between conscious and unconscious.
In my previous paragraph, I used the word “conscious” when describing someone dreaming, and it wasn’t a typo; many of the important facets of consciousness, like subjective experience and self-awareness, are there in dreams.

I’m not keen on defining consciousness as merely awake / alert to external stimuli.

Yes, there is a difference between the explanatory burden posed by the phenomena, and the explanation conceived to meet that burden, and @eburacum45’s analogy conflates those two—phlogiston and caloric being proposed explanations, while consciousness and the problems it poses as exposed via various thought experiments are the things we aim to explain.

Gravity is also illustrative in the regard that, much as with p-zombies, it was by and large Einstein’s thought experiments (e.g. the famous elevator) that exposed the explanatory burden any satisfactory theory must meet. The thought experiments constrain the shape of the possible theory, allowing us (well, Einstein) to find the theory fitting that shape. The zombie argument should be taken in the same vein: any theoretical explanation of conscious experience that fails to take the constraints it poses into account, or else gives a convincing reason to dispel them, fails to be explanatorily satisfying. The option one doesn’t have is to just ignore these constraints.

And yet, that’s just what you proposed to provide, in the post prior to this one, and with great aplomb regarding its supposed obviousness:

Yes, but what possible use is the zombie argument, when it cannot be tested or disproved? I imagine a time in the future, maybe a few decades from now but probably many centuries from now, when consciousness-wrangling and qualia-wrangling are both common-place activities. In this imagined future world, we can all experience each other’s qualia, record them for posterity and reproduce them at will. We can also link our own consciousnesses together via high-bandwidth data channels, or alternately link them by lower-bandwidth channels in order to converse on a much more basic level.

Bear in mind that humans already use complex language to communicate complicated thoughts, just as we are doing now. I’m sure that ants, or dogs, or even chimpanzees would be amazed if they were able to appreciate just how complex the transfer of information in human language can be - they would consider it to be close to some sort of telepathic linkage, and they would be right. But I am certain that we can go far beyond language when linking human minds together, and I am also convinced that human minds will be capable of forming complex and intimate links with artificially-intelligent systems of many kinds. Indeed, this may happen relatively soon, long before we get high-bandwidth links between biological brains.

So where will that leave the concepts of qualia and consciousness? We will all be capable of living in each other’s shoes for an arbitrary period, and with an arbitrary depth of immersion - so we will know whether other people are conscious, and exactly how they perceive the colour red, and so on. But if we apply the Chalmers/Kirk ‘zombie argument’ to my hypothetical ‘high-bandwidth’ scenario, it makes no difference at all - because one of the conditions of the experiment is that there is no physical difference between a world full of p-zombies and one where people are truly conscious.

A thought experiment where there are no detectable differences and there is no way of disproving the premise is no use whatsoever.

I don’t know how one would test or disprove any sort of argument. You can show it faulty, by showing its premises wrong or its logic deficient; or you can try to bite the bullet and meet the burden it imposes. But you absolutely can’t just ignore it, and any theory that does is destined for the bin from the start. Thought experiments serve to tease out the explanatory burden imbued by the phenomena we wish to account for; thus, they set constraints for all reasonable theories, provided they are sound.

This is just a conceptual confusion. Experiencing qualia is just what it’s like to be a certain entity, so one can’t experience another’s qualia without being that other. If you linked brains together, then at best you might create some compound entity with its own subjective experience, but that wouldn’t be you experiencing another’s qualia—it would be a different being experiencing its qualia.

The use of a thought experiment is in guiding theory construction, by curtailing the shape of possible theories. If the p-zombie argument is sound, then no theory of physical properties alone can meet the explanatory burden of conscious experience; so you can either try and show the argument unsound—say, attacking the conceivability of zombies, or the link between conceivability and metaphysical possibility, or whatnot—or give up on purely physical theories, perhaps condoning panpsychism, or dual-aspect theories, or outright dualism. But you can’t just ignore it, not at least if you want to come up with anything worth taking seriously.