I know I experience things, but do all you zombies?
No, you misunderstand. You can study it, but your results will be objective, impartial, and external. I.e., not subjective.
We can study George Washington for centuries, even learn things about him he didn’t know about himself. But that won’t get us very far if our goal is to experience what it was like to be George Washington. Only being George Washington can give us that insight. And unfortunately that experience was only given to one man, who no longer exists.
My point is that the distinction between the subjective and the objective is real, and as far as I can tell, unbridgeable. And as much as I love and respect science and objectivity, we have to admit that the world we live in, as conscious beings experiencing life, is entirely subjective.
Anything objective that we personally know about was filtered through our imperfect senses, the logic and reasoning structures in our brains that may or may not be functioning properly, and incorporated over time into our unique, subjective worldview.
Science is a tool, a method of bypassing that inherent subjectivity in all of us, in order to make compelling arguments and convincing conclusions about the external, objective world that others can independently verify and confirm.
But by its nature, science avoids subjectivity entirely. And subjectivity is the very essence of qualia. Which is why qualia are a philosophical concept, not a scientific one. And, as far as I can tell, they can never be a scientific concept.
Even buying that argument, that doesn’t make it “unscientific”; it just means you’d have to (somehow) reprogram the person’s mind into being a copy of Washington’s. Which would be fatal, not unscientific.
Besides, that’s moving the goalposts. You said “studied or understood scientifically”, not “experienced”; a computer model would suffice for that.
My position would be somewhere between Der_Trihs and DrCube.
I would say that right now scientifically studying / having a third-person description of qualia themselves looks impossible. It seems unfathomable how a set of words could fully describe the experience of a color I cannot see.
But we don’t have a good model yet for how brains make subjective states.
IMO there is a critical concept that we’re currently ignorant of; it’s like studying living things before evolution or genetics were understood. I personally hold out hope for such a critical paradigm being discovered and being able to answer questions that on the face of it looked impossible. At the very least I think it should one day be possible to incontrovertibly answer, e.g., whether a machine feels pain or just behaves as though it does, and questions of that sort.
But even if we knew exactly how brains produce subjective states, would it be the same as experiencing those states?
The only way I can see it happening is copying the state of one brain onto another, but even if that were possible to do (unlikely), how would you express your experience of it without using your own subjective experience?
Not sure about this - I think it depends a lot on whether qualia are ‘wired’ or whether they are something that the brain develops as it forms and learns (I suppose that’s still ‘wiring’, but it might matter whether it’s preconfigured or developed ad hoc). We know fairly surely that people don’t all think alike (the inner monologue, aphantasia, etc.), so I don’t think it’s necessarily a safe bet that we all perceive alike.
No, but that’s an issue with how limited our ability to communicate and understand is.
Even if we understood the mechanics of subjective experience well enough to display it as some sort of schematic or collection of equations, it would be far too complicated for a person to actually grasp. I mean, we’d be talking about trying to display the activity of billions of neurons in a way a human mind can actually absorb and understand, which isn’t happening for obvious reasons.
As with all really complex problems, we have to deal with such issues through abstraction, breaking them into simpler pieces that we can understand.
I don’t know…ask me when we have the model.
I’d put it like this: it seems absurd that we could ever explain subjective experience in a way that we’d consider the hard problem “solved”.
…but then it seems absurd that brains can make subjective experiences. A universe without qualia makes a lot more sense. Yet brains somehow do it; there *is* an explanation. Let’s hope humans can one day grasp it.
Since we don’t know how it works, we don’t really know how absurd it is or isn’t.
For all we know the reason we have experiences is that there’s no other way to build an intelligent organic brain. Or it’s just the easiest way and evolution stumbled into the path of least resistance. Or maybe it was a fluke and the universe is full of intelligent species with no subjective experiences. Without a better understanding we can’t say which is more likely.
I don’t think that necessarily follows. Not all known-unknowns look alike. The fact that many posters here (and many philosophers out there) suggest it is impossible to ever have a complete description of subjective experience points to it likely belonging to the set of difficult, strange problems.
We don’t know for sure, but early indications are that that’s not the case. We have intelligent systems that we are pretty sure are not conscious.
Not claiming we know either way, but this adds to the mystery of it at this stage.
No, we don’t. Nothing we have is even close to human or even a bright animal in actual intelligence. We just have chatbots that hypesters play up as being smart so they can make money from it.
I didn’t say general intelligence (or “actual” intelligence). The point is, right now we don’t need to add conscious experience to make machines that can respond to stimuli, so we have no reason to suppose that it’s essential for more complex tasks.
It might be. I’m just saying that starting from the position that it is essential is unfounded right now.
I agree with the first part of that statement. It’s not clear to me what those who want the problem of qualia solved are looking for.
But depending on your definition of subjective, having subjective experiences seems easy. Once we evolved some kind of feedback in our brains that allows part of the brain to “observe” what other parts are doing, it seems rather obvious that the experience of this is subjective - inside our brain only.
I do British cryptic crosswords, which involve solving anagrams. Years of doing it has made my subconscious good at it. I can also try to solve them consciously. In the former case I do not experience the solution process; the answer just gets handed to me. In the latter case I move letters around. Is there anything different between them except that I observe one and not the other? The observation makes the experience.
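To make the contrast concrete, here’s a toy sketch (purely illustrative; the word list and function names are invented, and this is no claim about how brains do it). The lookup hands over an answer in one step, while the search exposes every intermediate step to observation:

```python
# Toy contrast between the two modes described above. Purely illustrative.
from itertools import permutations

WORDS = {"listen", "silent", "enlist"}  # tiny stand-in dictionary

def solve_by_lookup(letters):
    """'Subconscious' mode: the answer just gets handed over in one step."""
    key = "".join(sorted(letters))
    return [w for w in WORDS if "".join(sorted(w)) == key]

def solve_by_search(letters):
    """'Conscious' mode: move the letters around, one observable step at a time."""
    for attempt in permutations(letters):
        candidate = "".join(attempt)  # each rearrangement is an inspectable step
        if candidate in WORDS:
            return candidate
    return None

print(solve_by_lookup("tinsel"))   # all dictionary anagrams, handed over at once
print(solve_by_search("tinsel"))   # the first valid rearrangement the search hits
```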
AIs today can’t report on their experience in solving a problem, since they have no feedback to show what they are doing. In the 55 years since I started studying AI their “subconscious” has gotten a lot smarter, but they are no closer to consciousness than they were in 1970.
ETA: Which is exactly what you just said, more briefly.
It sounds like an allusion to self awareness, and I’d consider that to belong to the set of comparatively “easy” problems. At the least it doesn’t seem as intractable as where “redness” comes from, where we can’t even fathom how a third person description could even be made.
Maybe “solved” was not the right choice of words, but in terms of your question, it’s the same as any phenomenon: we want greater understanding, and the measure of understanding is our ability to make accurate predictions and inferences.
Now, in terms of subjective experience of course we *do* have an understanding at a certain level; almost all interactions with other humans involve testing our theory of mind.
However, when it comes to how neurology creates subjective experience, there are countless questions we can ask that are beyond the limits of our models, e.g.: do you and I see colours the same way? Is this person in more pain than that one? If you could see into the UV spectrum, what would it look like (or rather: what decides that)? Which organisms feel pain, and to what extent? And so on.
A hypothetical grand model of consciousness would give us some basis on which to answer such questions.
The hardware that allows us to see color is remarkably uniform—same three cone types, same retinal relay, same early-visual wiring in all of us. If evolution suddenly shuffled which signal “feels red,” it would need extra genetic instructions or a persnickety calibration routine, all for zero payoff. Natural selection is a penny-pincher; it sticks with the cheapest, “works-for-everyone” blueprint.
While inner monologues, aphantasia, and synesthesia show that our higher-level software can get quirky at times, they don’t touch the basic “color firmware” we boot up with.
Color-matching tasks and even the way every language carves up the rainbow all point to a shared sensory palette—except when the hardware itself changes (e.g. color-blindness).
So sure, nothing forbids wildly different qualia, but evolution hates paying for fancy features it doesn’t need. The simplest bet is that your red and my red are the same red.
Sure, but this misses the point a bit.
If someone were asking me to bet on whether you and I see yellow the same, sure I’d say yes, for the reasons you say and more.
That’s not the same thing as having a model that tells us how experiences are generated and what we need to look for in neurology to confirm that they are the same. (Plus potentially do other cool stuff like “if we stimulate these neurons the volunteer will see another colour not corresponding to any colour the human eye can see”)
We’ve mapped the brain’s color circuit and can poke it to make people see fake blues and greens, but we still don’t have the Rosetta Stone that says, “this exact firing pattern = the feeling of yellow.” Once we can read every neuron in real time while you tell us what you see, we’ll settle the “same-yellow” question—and maybe even conjure up brand-new colors just for shits and giggles.
I don’t think self awareness is necessary. When my dog smelled a tennis ball 200 feet away, she surely had an internal and subjective experience telling her that odor related to a tennis ball which she wanted. Whatever self awareness she had, it didn’t have much to do with this. Any model within the brain built up of sense impressions and memories which relate them to real things is subjective, I think.
Maybe this is good enough for you (and for me), but it doesn’t seem enough for those who think this is a big unsolved problem. We do tons of experiments, and though we are pretty good at figuring out people we are not perfect, and clearly some of us are better at it than others. But is prediction true understanding? Even near-perfect prediction?
If you could predict with 100% accuracy how a lab rat will run a maze, do you really understand what the rat perceives and feels running the maze? I suspect you’d answer “good enough”, and I agree, but I don’t sense it is good enough for those for whom this is a big problem.
Neurons are not ‘wired’ the same for everyone - whilst there are some high-level patterns in the way the brain works and is organised and connected to the sensory inputs etc, the precise networked interconnection of neurons is unique to the individual, as it develops in infancy. We have to learn to see. If we ever can read every neuron in real time, we will not easily be able to compare one brain against another because, although the components are from the same kit, the circuit design is a custom job every time.
In the simplest example, we could imagine a set of qualia where the colour wheel is rotated by 120 degrees, so your brain interprets the signal from your ‘blue’ cones as what I would call ‘red’, and so on (blue=red, green=blue, red=green). Your experience of colour would be entirely self-consistent in the way mine is: the paints in your paintbox and the outputs of your RGB lights would mix to create the same relative perceived colours on the gamut as they do for me. Your experience would be a hue-shifted nightmare to me, but if it’s all your brain has ever seen, and it works, it’s normal to you, and it throws no surprises in the way colours are divided, named, mixed, etc. My experience would be a hue-shifted nightmare to you in the same way.
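For what it’s worth, that thought experiment is easy to state as a toy computation. In this sketch (my own illustration; the channel permutation stands in for the imagined qualia swap, not for anything neural), rotating commutes with mixing, which is exactly the self-consistency being described:

```python
# A sketch of the 120-degree rotation thought experiment.
# Mapping from the post: blue=red, green=blue, red=green.
def rotate_qualia(rgb):
    """Express your hue-rotated experience of a stimulus in my colour terms."""
    r, g, b = rgb
    return (b, r, g)  # a blue stimulus feels like my red, and so on

def mix(c1, c2):
    """Naive additive mixing of two colours (per-channel average)."""
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

red, green = (255, 0, 0), (0, 255, 0)

# Rotating commutes with mixing, so your paintbox and RGB lights produce the
# same *relative* colour relationships as mine - the self-consistency above.
assert rotate_qualia(mix(red, green)) == mix(rotate_qualia(red), rotate_qualia(green))
```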
We’re not born with the same firmware - we’re born with a box of parts that we assemble into working firmware - and we do this on the fly, as the inputs are streaming in.
Sure, the wiring diagram in our heads is personalized, but that doesn’t doom color science. If we can watch every neuron fire while you label swatches, we can chart your unique “input > pattern > word” map. Do the same for me, then work out the translation between the two maps. If yours turns out to be a clean 120° hue rotation, the algorithm will spot that quickly. We don’t need matching circuit boards—just consistent stimulus-to-pattern pairs to translate between our unique neural setups.
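As a toy illustration of what “work out the translation” could mean (all data fabricated here, and a three-channel tuple is a cartoon of a real neural pattern): record both observers’ responses to the same swatches, then search for the mapping that aligns the two maps. A clean 120° rotation falls out immediately:

```python
# Toy version of "work out the translation between the two maps". Everything
# here is invented; real recordings would need far heavier machinery than a
# brute-force permutation search.
from itertools import permutations

swatches = ["red", "green", "blue", "yellow", "cyan"]

# Hypothetical stimulus -> pattern maps recorded while each of us labels swatches.
mine = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1),
        "yellow": (1, 1, 0), "cyan": (0, 1, 1)}
# Yours is secretly a clean 120-degree rotation of mine: (r, g, b) -> (b, r, g).
yours = {s: (v[2], v[0], v[1]) for s, v in mine.items()}

def mismatch(perm):
    """How badly my channels, reordered by `perm`, disagree with yours."""
    return sum(mine[s][perm[i]] != yours[s][i]
               for s in swatches for i in range(3))

# Search all channel orderings for the translation between our maps.
best = min(permutations(range(3)), key=mismatch)
print(best, mismatch(best))  # -> (2, 0, 1) with 0 mismatches: the rotation found
```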