Yeah, I think all that is required for qualia to work is that they consistently model the real world. It’s easy for me to imagine that my qualia are somehow authentic and faithful representations of basic reality - I see red light, green light, and they mix to make yellow light - but any set of labels that consistently works like that could be used. Maybe someone else looking at red light internally experiences what I would describe as the scent of vanilla, and when they see green light they experience what I would describe as the scent of roses, and those two qualia combine to form some vanilla-rose perfume that they consistently experience when they see either monochromatic yellow or a mixture of red and green.
All it needs is to map bidirectionally with some consistency, to the outside world; consistency, not faithfulness.
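The "consistency, not faithfulness" idea can be made concrete with a toy sketch (all names here are illustrative, not a claim about real perception): two agents attach completely different internal labels to the same stimuli, yet both labelings respect the same external structure, so neither is behaviourally distinguishable from the other.

```python
# Toy model: the shared physical fact is that mixing red and green light
# yields yellow light. Each agent has its own private label set for the
# same stimuli; what matters is that each labeling is a consistent
# bijection, not that the labels "resemble" the stimuli.

# Shared physical structure of the world.
MIX = {("red", "green"): "yellow"}

# Agent A's qualia are colour-like; Agent B's are scent-like.
LABELS_A = {"red": "red-sensation", "green": "green-sensation",
            "yellow": "yellow-sensation"}
LABELS_B = {"red": "vanilla-scent", "green": "rose-scent",
            "yellow": "vanilla-rose-perfume"}

def experience(labels, stimulus):
    """The quale an agent has when presented with a single stimulus."""
    return labels[stimulus]

def experience_mix(labels, s1, s2):
    """The quale an agent has when two stimuli are physically mixed."""
    return labels[MIX[(s1, s2)]]

# For BOTH agents, mixing red and green produces exactly the same quale
# as seeing monochromatic yellow directly - the structure is preserved
# even though the labels themselves are utterly different.
for labels in (LABELS_A, LABELS_B):
    assert experience_mix(labels, "red", "green") == experience(labels, "yellow")
```

Any relabeling that preserves this structure works equally well, which is the point: the mapping to the world only needs to be consistent, not faithful.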
Possibly talking past each other here.
I am not talking about shutting down pain. The model is about pain endurance.
For most of us, most of the time, we cannot voluntarily switch off pain like a car alarm.
People with a neurosensory condition that means they cannot feel pain often have numerous mutilations including a tongue that has been severely bitten hundreds of times. For me, biting my tongue hurts like hell, and I cannot stop that pain, hence it influences my future behaviour.
I can choose to endure the pain of biting my tongue, though, depending on the circumstances. And my ability to endure pain depends on the nature of the pain: a beesting is easier to endure than a third-degree burn. This is of course the result of evolutionary selection pressure, and it is the benefit of having this kind of experience, which allows discrimination, over just bare reflexes.
Exactly. Or we could put it this way: all computers are machines, but not all machines are computers. So even if the mind is entirely the product of neurology, it doesn’t necessarily follow that computers can have subjective experience.
To be clear: I am not advancing that position, I am just trying to explain it. Though I’d probably be on the fence on this one. There are surprising entailments either way, and there’s still too much we don’t know.
Is the mechanism that lets someone endure pain any different from the one that temporarily lets us ignore pain? Clearly never feeling pain is not an advantage, on the other hand the ability to shut off pain (for instance from a predator bite) and escape is useful. I’m not sure our remote ancestors could decide to handle pain, but they could have it shut off enough to survive, even if not 100%.
The difference is the subjective experience, and it depends on what you mean by “ignore”.
For example, a couple of hours ago, I bit my lip. It wasn’t a severe bite; I didn’t draw blood or anything. After 10 seconds or so of the most acute pain, I was able to “ignore” the pain in the sense that I continued with what I was doing. But I could still feel the unpleasant sensation, which served as a good reminder to eat carefully.
This is pain serving one of its key functions.
If I could have consciously chosen to turn off the pain, then I would have done so.
Heck, everyone would do so basically all of the time. And we would all be that much more careless not just eating, but careless in countless situations. We would all have the kind of damage to the body that people with Congenital Insensitivity to Pain (CIP) have.
I just thought of a real test for qualia. Can we experience odors the way a dog does? We may not know whether my red is the same as your red, but they are similar at least. But when my dog smelled a tennis ball from a few hundred feet away and led me to it, that was a view of the world alien to me. I’m not sure we’re wired to have that kind of experience.
That is a well-known thought experiment, actually; it’s usually framed in the form “what is it like to be a bat?”, after Thomas Nagel’s essay of the same name.
The question of the profoundly different experience that any non-human animal has is a deep one, and I think no less important than the question of the differences between my experience and yours.
And, circling back, the point at which we can start to answer such questions is the point at which we have a model of consciousness. Yes, it may forever be impossible to imagine experiences that we have never had, but I mean having an understanding of what structures cause what kinds of experience, such that we can look at a neural structure and infer that it will result, for the agent, in a spatial, persistent, non-polar, etc., experience, or whatever other adjectives we arrive at for these phenomena.
Qualia can be a frustrating topic sometimes, because people often want to funnel participants into either saying we’re close to an understanding of qualia (or even holding the “qualia is meaningless” position), or proposing that the mind has some non-physical element.
I think the mind is entirely caused by neurology, but I also don’t think we’re close to a model of subjective experience.
This is exactly what I was going to say. This essay was influential to me as a young nerd, and I think it gets to the heart of the subjective/objective divide I alluded to earlier.
No matter how well you study them, a person can never know what it’s like to be a bat. They navigate by echolocation and fly! Their life is completely alien to us as visual, bipedal animals. And yet, like you said, I think the same thing applies to other people too, just to a lesser degree. The best we can do is imagine what it’s like to be someone else. But we can never truly experience it.
And while I’m no philosopher, this is what I thought “qualia” really meant. Not just “your red might look different than my red, and if we study neurons thoroughly enough, we might be able to tell,” but the idea that our subjective experiences are completely out of reach of science. As soon as you start studying them with tools and labs and sensors and whatnot, it becomes objective, and no longer counts as a subjective experience anymore, at least for the people on the outside doing the studying. That’s why qualia is a philosophical concept, not a scientific one.
But I admit I’m no expert on the topic, just an interested layman.
A bird is really a more interesting example to me. Many of them are pretty smart for animals, and they certainly act like they have emotions and experiences. But they evolved most of their brain long after they diverged from us, and it has a fundamentally different architecture than ours. Could we really experience what they experience?
I doubt it. At least a bat is a mammal; for birds and humans, the parts doing the experiencing are fundamentally different.
I was with you until you defined qualia as being out of the reach of science.
Because lots of phenomena once looked outside of the purview of science. Heck, science itself evolved from natural philosophy.
Right now, yes, it seems unfathomable to me how a string of words could somehow encapsulate the appearance of a color humans cannot see. But my lack of imagination and insight should not limit our inquiry. We shouldn’t assume anything is impossible, let alone define a phenomenon as being impossible to describe.
Another thing that I almost always mention in this context is that there may be a way to construct a window into other people’s qualia, but we won’t be doing it any time soon. If we can create direct neural linkages between different individuals’ brains, we might be able to directly experience each other’s sensations. I am reasonably sure that brain-computer links will become fairly sophisticated within the next century; to experience other people’s qualia would probably require a double link: brain A to computer to brain B.
Lots of problems to solve before this technique would give usable results; perhaps each brain and Central Nervous System has a different method for interpreting and appreciating sensations; if @Mangetout is correct, then my blue might be your rose aroma, and vice versa. Or the differences could be even more profound. To understand another person’s sensations it might be necessary to interpose some pretty sophisticated translation software, and this could give a false impression of the true nature of the other person’s experience.
And, as Half_Man_Half_Wit suggested last time I brought this up, if you link two individuals together to such an intimate extent, you are effectively creating a new blended entity, and the experiences of this entity would not necessarily be relevant to the experiences of the two individuals concerned, either before merging or afterwards when the connection is terminated.
Trying to merge with a bat, or a bird, or a lizard would be an information nightmare, and the results would no doubt be bizarre and incomprehensible.
I think any such endeavour would be subject to the problems already discussed - that is:
The sheer complexity of observing a billion changing things at once and trying to figure out what they really mean; but more significantly, the only way to know what they mean is to ask. So the brain interface would have to be adapted to learn and detect what my brain is doing when the stimulus is ‘blue square’, and in order to impress that upon you, it would have to map it to what your brain does when the stimulus is ‘blue square’.
So we’d perhaps successfully transfer the concept ‘blue square’ from one brain to another, but not the feeling of what it looks like to look at a square that is blue.
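A toy sketch of that translation step, with every name and activation pattern hypothetical: the interface can only align the two brains at the level of shared concept labels, so what arrives in brain B is B’s own pattern for “blue square”, not A’s feeling of it.

```python
# Hypothetical model: each brain encodes the same concept as a different
# internal activation pattern. The interface learns which pattern means
# which concept by asking each brain, then maps concept-to-concept.

brain_a = {"blue square": (0.9, 0.1, 0.4), "red circle": (0.2, 0.8, 0.7)}
brain_b = {"blue square": (0.3, 0.6, 0.9), "red circle": (0.8, 0.2, 0.1)}

# The only common currency is the concept label itself, so the learned
# translation maps A's pattern for a concept to B's pattern for it.
translation = {pattern: brain_b[concept] for concept, pattern in brain_a.items()}

def transfer(pattern_a):
    """Map one of brain A's activation patterns onto brain B."""
    return translation[pattern_a]

# The concept arrives intact - but as brain B's own native pattern.
assert transfer(brain_a["blue square"]) == brain_b["blue square"]
```

Notice what the sketch shows: nothing of A’s raw pattern survives the trip, which is exactly the worry that the “shared quale” might be an artefact of the translation rather than the experience itself.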
That’s exactly why the concept of telepathy is fantasy, not science fiction. I can’t imagine how the brain would have the processing power or the access to do this translation. The idea behind it is clearly that our thoughts are written on some kind of whiteboard in our brain that someone else can read.
Not hardly.
The closest thing we have to telepathy is conversation. You can literally put thoughts from your head into someone else’s head by transmitting them as translated sound waves. To an alien with no sense of hearing, this would be amazing.
I doubt that it would, by itself; we would need some kind of translation software in order to make the sensations comprehensible. In which case the sensation of ‘shared qualia’ could possibly be an artefact of the translation software.
As another thought experiment, I have proposed replacing the corpus callosum that joins the two hemispheres of our brain together with a much more flexible data connection; in this way we could connect half of our brain with half of that belonging to another person, and vice versa.
It has already been established that we can behave as a single entity despite having two different hemispheres with different skills; to share hemispheres with another person could be achievable, if and only if the shared data could be translated in comprehensible form.
Some at least pretty much are; brain scans can already, to a limited extent, “see” what people are visualizing or imagining doing. If you imagine a tree, your visual cortex activates in a pattern resembling a tree, for example. Or if you imagine using a hammer, your motor cortex activates in about the same pattern as it does when using a real hammer.
And personally I’m pretty sure that what we call “thought” can relatively easily be interpreted for another person, because IMHO that’s what it’s for. Most of our minds are subconscious; thought is more of a user interface than anything else, evolved to shield our actual selves from view and create an easy-to-digest surface presentation.
Sure, people’s deep brain structures are very different from each other, but almost all of that is hidden from conscious access. For example men and women have radically different brains, but they don’t sound that different from each other, do they? I can read a story written by a woman from a woman’s perspective, and it makes sense. Which it wouldn’t if our conscious thoughts had much real connection to our mental processes.
The OP gave us the above YouTube link. But because many videos can be annoying, non-informative, miss the point, or just suck, here is the link to the paper the video is apparently based on:
Color-neurotypical is a good way to put it. People with colour-blindness, and people with synaesthesia, presumably see the world in a significantly different way.