Do we even know what consciousness is at all?

I don’t dispute that we know a lot. The questions I posed, in this and previous posts, are not answerable at this time. If you dispute that, feel free to answer any of them. You may well be world famous.

Again, you’re talking about the triggers for particular brain states, rather than asking how the brain can experience feelings, which is the much harder question (indeed the one which arguably we’ve yet to make the first step in understanding).

In terms of neurotransmitters, I think they’re a red herring here. Unless someone wants to assert that a neural net that is influenced by neurotransmitters must have subjective experiences, and one that is not necessarily won’t, it seems to me entirely beside the point. It just happens that in the human brain these things are correlated, but in itself, so what?

I should add at this point that I have a master’s in neuroscience and my job involves making software for diagnosing neuropathologies. The reason I mention this is to be clear that I am very interested in the proximal causes, and effects, of nociception.
I think it’s just important to be clear that the actual subjective experience remains something beyond our current models; it frustrates me a little when people handwave away, or minimize, this critical known-unknown.

Those are two mutually contradictory statements, surely - if it’s electrical activity, we can measure and record it.

The cursing and yelling are caused by a particular pattern of excitations of neurons, which in turn is caused by signals transmitted from other neurons, which ultimately has its source in sensory impulses gathered at the point of contact. This whole story can be told without appealing to the experience of pain at all.
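To make that concrete, here’s a minimal sketch of such a causal story in code (Python, purely illustrative: the function names, thresholds, and gains are all invented, not a model of real neurons). Notice that no variable anywhere stands for ‘pain’ or for what the process feels like:

```python
# Illustrative only: a stimulus-to-response chain with no "experience" term.
# All names and numbers are invented for the sketch.

def nociceptor(contact_intensity: float) -> float:
    """Sensory impulse gathered at the point of contact."""
    return max(0.0, contact_intensity - 0.2)  # arbitrary firing threshold

def relay(signal: float) -> float:
    """Signal transmitted onward from other neurons."""
    return signal * 1.5  # arbitrary synaptic gain

def motor_output(excitation: float) -> str:
    """A particular pattern of excitation drives the observable behavior."""
    return "curse and yell" if excitation > 0.5 else "no reaction"

# The whole causal story, stimulus to behavior, told end to end
# without ever mentioning what (if anything) it feels like:
print(motor_output(relay(nociceptor(0.9))))  # -> curse and yell
```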

Perhaps it’s an easier starting point to consider the inverted spectrum thought experiment. How red looks like to you might be radically different from how it looks to somebody else, but that difference may be behaviorally entirely undetectable—they will call the same things ‘red’ as you do, having the same causal connections to the experience. Indeed, there seems nothing in the physical facts of the matter that mandate that red looks this particular way to you: so it might be that this could differ, even if the physical facts remain the same. But then, specifying the physical facts fails to specify the experiential facts.

Most of what you’ve written here pertains to the ‘easy’ problems of consciousness (‘easy’ in the sense that they’re likely incredibly challenging, but we can readily see how a future more complete neuroscience might apprehend them). But what makes consciousness a subject of distinct philosophical interest is the appearance of a ‘hard’ problem: even after a complete theory of the microscopic workings of the brain has been found, there is arguably still an ‘explanatory gap’ as regards the question of how those workings lead to any sort of experience, any qualia, at all.

To the extent that you address this at all, it seems like you want to say that consciousness just ‘comes along’ once the right sort of processing is in place. While that may indeed be the case, whether it is so is exactly the main question; everything else is secondary. That there is a difficult issue there is indicated by the inconsistency @MrDibble notes: if qualia are just physical, then they should be measurable, quantifiable, objective. But they very much seem not to be. How is this reconcilable?

(On my own approach, the difficulty arises because science works via models, and the properties a model and the system it models share are limited to structural properties, thus missing the intrinsic ones ultimately responsible for conscious experience. So the gap is merely an epistemic, not an ontic one, but unfortunately necessary.)

You also speak of ‘behavior’ as an indicator of consciousness. But it’s generally accepted that, behaviorally, there is no way to tell whether a system is conscious at all: differences in consciousness need not yield differences in behavior (a locked-in patient may be fully conscious, but as behaviorally inert as any other comatose person), nor do equivalences in behavior indicate similar conscious experiences (think of @Mijin’s example of a robot showing hard-wired pain reactions).

We can’t isolate the electrical activity of a single quale. That’s why we can’t measure it, not because of its nature as electrical activity.

“Can’t”? Or “Haven’t yet”?

And if the former, why not?

I simply disagree. There’s nothing to back that up. Consciousness does not exist if we don’t see it in behavior.

Pain is not consciousness; we know of it only because it affects our behavior. A machine can experience pain if it’s designed to, just like our bodies experience pain. Pain is among the least conscious behaviors we exhibit. It’s meant to interfere with other processing in our brains. Why can’t that be programmed into a computer, no matter how pointless it would be to make a machine react to stimulus in that manner?
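For what it’s worth, the ‘interferes with other processing’ part is straightforward to sketch. Here’s a toy illustration in Python, with every name and priority value invented for the example: a damage signal simply preempts whatever the system was otherwise doing.

```python
import heapq

# Toy sketch of "pain as an interrupt": a high-priority damage signal
# preempts other processing. All names and priorities are invented.

task_queue: list[tuple[int, str]] = []  # (priority, task); lower = more urgent

def schedule(priority: int, task: str) -> None:
    heapq.heappush(task_queue, (priority, task))

def damage_signal(severity: int) -> None:
    """A 'nociceptive' event jumps the queue, disrupting other processing."""
    schedule(0, f"withdraw and protect (severity {severity})")

# Routine processing underway...
schedule(5, "scan surroundings")
schedule(7, "plan route to food")

# ...until a damage signal arrives and takes precedence.
damage_signal(severity=8)

while task_queue:
    _, task = heapq.heappop(task_queue)
    print(task)  # the withdrawal response runs before everything else
```

Whether executing something like this amounts to the machine feeling anything is, of course, exactly the question under dispute.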

So is all electric activity accompanied by subjective experience? If not, what is it that makes that particular activity special? And how does electrical activity lead to anything like a subjective experience? Is it conceivable that the same electric activity could occur without the concurrent experience? Conversely, could the same subjective experience occur on the basis of different underlying substrates, e.g. in hydrodynamic or mechanical systems?

So you’re saying that locked-in syndrome is just not a thing?

We could figure out how to do it someday. We’d see the electrical activity associated with a single quale and then have no idea what it means. It’s not going to be as simple as a single voltage blip in the brain. Perhaps if we can measure enough qualia we’ll begin to decode the structure and logic behind it. Perhaps not until we understand the hardware as there’s no guarantee the structure and logic of qualia are consistent over time.

Not sure what you mean. We have nothing but subjective experiences, and all of them are associated with electrical activity. If electrical activity in our brains has the right form and structure, and interacts meaningfully with other activity, then we will experience something.

I don’t see why it would. Subjective experiences are as unique as snowflakes. Two of your own may not be identical.

Ok, I see what you mean. From that perspective, I can see where the concept has some value.

I agree.

Human sociopaths are conscious beings. They may have malfunctioning emotional systems that prevent them from feeling empathy, but they still have an internal experience.

That’s the crux right there. That’s the philosophical zombie.

And I thank you for helping me understand this topic.

Yes, strictly speaking, the experience of pain is not required to have that string of reactions triggered in a programmed stream. I think the critical point is the question of how that string of reactions got coded in the first place.

That is at the essence of experience.

But we can look at another biological example that most of us will agree can’t involve experience, but yet has reactions: the amoeba.

Single-cell life form that reacts to environmental cues. It detects and moves toward food and away from light. Sensation and reaction. But no neural system, because it is one cell.

No, because brain activity is behavior. It’s just the kind we can’t read and interpret right now. So a person with detectable brain activity who exhibits no external behavior could be conscious with locked-in syndrome, or not. The brain activity could be random noise or could be meaningful thoughts. We have memory and we can create stimuli internally for our brains. I suppose you could define consciousness to require observable behavior but I don’t see what the point of that would be.

No neural system, but its chemical behavior is part of a coherent functional system. The same kinds of things happen in our brains at the cellular level.

Take it up with this guy:

Also, if the goalposts are going to move to become that experiences are connected with neurological activity in ways that we may understand at some future time…I don’t think anyone here would dispute that.

No one here was talking about magic. We were simply acknowledging that today we don’t have a model of how a neural net has an experience.

That guy said:

Brain activity could be behavior indicative of consciousness or it could be random noise. Until we can read and interpret the brain activity we don’t know if someone is conscious simply based on the existence of brain activity.

As noted, there are different levels of consciousness, and they are focused in different areas of the brain. Scientists often break consciousness down into at least two key layers: a fundamental “core” consciousness, tied to brainstem regions like the midbrain and reticular formation, and a more complex “self-awareness,” which relies on higher brain areas, particularly the cerebral cortex. Core consciousness is what gives us the essential sense of being awake and existing in the present moment. Even creatures like frogs, lizards, and birds—thanks to their midbrains—are thought to experience a basic “lights on” state.

However, consciousness is probably more complex than “midbrain = core consciousness, cortex = self-awareness.” There are multiple overlapping neural networks, and some creatures with unusual brain architectures, like honeybees, can exhibit impressive cognitive feats.

Self-awareness takes it a step further. It’s not simply about being awake but recognizing yourself as a distinct individual apart from your surroundings. Measuring this is tricky, but the mirror test is a popular tool. Animals like great apes, dolphins, elephants (Asian elephants are particularly good at “spot checking” themselves), and magpies have passed, indicating they most likely recognize themselves in a reflection. Some animals may fail not because they lack self-awareness, but because mirrors aren’t meaningful to them, or because they rely more on other senses (e.g. for dogs and cats, smell is more crucial than vision). So failing the mirror test doesn’t automatically mean an animal lacks self-awareness. I agree with this.

Finally, there’s qualia (the deeply subjective aspects of experience)—one of the biggest enigmas in consciousness studies (and the “holy grail” of consciousness research). Scientists still debate where qualia originate in the brain and whether a complex cortex is necessary to experience them. Some suggest that any sufficiently integrated brain network—whether in mammals, birds, or even octopuses—could produce that rich, subjective sense of reality.

For reasons that are more gut feeling than scientific proof (call it intuition), I’m confident that plenty of non-human creatures experience qualia. Over the years, I’ve shared my home with a revolving cast of dogs and cats (currently a rambunctious crew of five felines), and I’ve witnessed countless “tell-tail” (pun intended) hints of their rich inner worlds.

It’s the small, blink-and-you-miss-them moments—the kind that reveal a genuine spark of subjective consciousness behind their eyes. They exhibit emotions like jealousy, grudges, joy, and sadness—things you wouldn’t expect from beings who supposedly lack their own flavor of subjective experience. If they’re just faking it, they’d all deserve Oscars for Best Actor, with diplomas from the Meryl Streep School of Method Acting. I wouldn’t bet on them pulling off that kind of Hollywood magic.

When you hear hoofbeats…it’s usually horses. Occam’s razor and all that. If a critter acts like it experiences qualia and has the hardware to support it, then it probably does. That’s the simplest explanation.

http://www.scholarpedia.org/article/Integrated_information_theory

Sure, but the crux is how some electric activity comes to be associated with subjective experience. As noted, the vast majority of such activity, even in the brain itself, occurs without even a hint of a glimmer of awareness. ‘Having the right form and structure’ is just a dodge without any sort of indication of how a particularly structured instance of electric activity could conceivably yield subjective experience. And the fact that we can imagine a complete account of any specific such instance without an associated experience makes the bald assertion that somehow, it sometimes comes along sort of vacuous.

Then what’s special about electric activity such as to produce the kind of experience we have? On accounts that center on, for instance, the information processing being performed (or, more generally, the function, as in functionalism), a hydrodynamical system could just as well produce my subjective experience. Conversely, an account tied to the specific kind of substrate—a kind of identity theory—struggles with the idea that organisms with a fundamentally different composition, e.g. aliens, could experience pain in the way we do. Hence, the thesis of multiple realizability is widely accepted today, but you seem to reject it; so what’s special about what’s going on in our brains?

That’s a strange notion of behavior. Typically, behavior is the outwardly accessible conduct of action or movement within an environment, and brain activity would rather be its cause. Indeed, there’s a circularity here: if my brain activity is my behavior, who is doing the behaving and how is that behavior originated?

But the point is rather different: there is no known way to connect even this ‘brain behavior’ with any sort of conscious experience. It seems at least possible to give a complete description of how stimuli connect to reactions without being forced anywhere to talk about conscious experience. So, subtracting that experience seems to lead to a ‘zombie world’, in which nevertheless every human acts, walks and talks in exactly the same way they do now. So how come there is conscious experience at all?

This has been asserted, by several people, but I have no faith in that assertion. If the universe is infinite, (for argument’s sake) and there are an infinite number of instances of you out there somewhere, and they are all physically identical down to the smallest quark, I have no doubt that every one of those instances would experience pain just like you do.

Earlier, I said that I can now imagine that there may be entities which behave like humans, because they have been programmed to do so, or have been programmed to learn how to act like a realistic person. We may be surrounded by such entities on a routine basis in a few decades. Such an entity need not be conscious, or feel pain, any more than ChatGPT feels pain or emotion.

But examine such an entity closely enough, and it would prove to be constructed in a completely different way inside - not human at all. The Chalmers/Kirk ‘philosophical zombie’, which is identical to a human inside and out, is just complete nonsense.