All that gets you is the ability to see consistency per-subject between what they’re looking at and what their brain does when they look at it.
Consider these hypothetical results of such an observation:
Subject A, when shown a blue square, consistently exhibits activity in a group of neurons that are arranged a bit like a street map of Tokyo.
Subject B, when shown the same stimulus, exhibits activity in a group of neurons that are interconnected in a way that strikingly resembles the London underground network.
All this gets you is the ability to measure those neurons and say that the subject is probably looking at a blue square - it gives you no basis for determining what it feels like to look at a blue square, or for comparing the subjective experiences of subjects A and B.
We can’t, of course, directly share another person’s visual experience/color sensations, but detailed neural maps would still allow us to compare how their colors are laid out. Show each subject the whole color spectrum, note which neuron clusters fire, and build a wavelength-to-pattern table for each brain. If the order of those clusters and the gaps between them (e.g. between “blue” and “green”) match across subjects, that points to similar perceptual relationships.
To test this, stimulate the cluster that marks “blue” in one brain and the matching cluster in the other; if both people report the picture tilting toward green, we’ve uncovered a shared color axis even though their wiring schemes differ.
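To make that concrete, here’s a minimal sketch (in Python, with entirely made-up per-subject numbers) of what “compare the layout” could mean: for each subject we record a scalar position for the cluster that responds to each test wavelength, then compare the ordering of clusters and the relative gaps between them, rather than the wiring itself.

```python
# A minimal sketch with entirely hypothetical per-subject data: for each
# test wavelength (nm) we record a scalar "position" of the responding
# cluster along whatever axis we happened to measure it on.
subject_a = {450: 0.10, 500: 0.35, 550: 0.60, 600: 0.85}
subject_b = {450: 0.90, 500: 0.62, 550: 0.41, 600: 0.15}

def relational_structure(table):
    """Ordering of wavelengths by cluster position, plus normalised gaps
    between neighbouring clusters."""
    ordered = sorted(table, key=table.get)           # wavelengths sorted by cluster position
    positions = [table[w] for w in ordered]
    span = positions[-1] - positions[0]
    gaps = [round((b - a) / span, 2) for a, b in zip(positions, positions[1:])]
    return ordered, gaps

# Very different wiring can still share the same *relational* layout:
# the same ordering (possibly mirrored) and similar relative gaps.
print(relational_structure(subject_a))   # ([450, 500, 550, 600], [0.33, 0.33, 0.33])
print(relational_structure(subject_b))   # ([600, 550, 500, 450], [0.35, 0.28, 0.37])
```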
No, because as I mentioned in the hue rotation hypothetical, the relationships between any given pair of colours would be consistent, even if the experience of looking at that colour is not.
The best you could ever do is to say ‘this person is looking at a thing they call blue’.
But also, your evolutionary parsimony argument applies; it just applies harder than you perhaps thought.
Evolution would indeed hate to code for a whole load of different sets of qualia, but that doesn’t mean it coded for one; it coded for none. Instead of giving you a machine that does a very specific job, evolution handed you a box of Lego and said ‘you figure out the specifics’. It coded for ‘DIY’.
We know this from cases where sight is restored to people who have been blind from birth because of some correctable problem outside the brain (say, opacity of the cornea). If they are past the point where the brain is learning how to see, they have a lot of trouble seeing or making any sense of the visual world - all of the equipment is in working order, but the detailed configuration of working out what the world is has been skipped, and it’s too late to go back and redo it.
A perfect 120-degree color swap looks clean on paper, but our eyes won’t let it happen. Everyone’s three cone types peak at roughly the same wavelengths - blue around 420 nm, green around 530 nm, red around 560 nm - and those signals feed into the same red-vs-green and blue-vs-yellow opponent channels long before the cortex starts adding anything personalized. Even anomalous color vision (the commonest form of color-blindness) only shifts those peaks by a few nanometers, not a third of the spectrum.
Because that early wiring is shared, the “blue” cluster we’d trigger in your brain sits on nearly the same wavelength axis as mine. If hitting both spots nudges us toward green, that lines up with those common anchors; if your inner palette were secretly rotated, simple checks—pupil reflexes, brightness matching, even “cool/warm” judgments—would drift way off the human norm and give the game away. Physiology leaves little room for a hidden color-wheel flip.
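For what it’s worth, here’s a toy illustration of that shared early stage - crudely simplified opponent-channel formulas fed with invented cone-response numbers, not real colorimetry - just to show how the pre-cortical signals behind brightness matching and warm/cool judgments are computed before any personalized processing gets a say.

```python
# Toy opponent-channel arithmetic (invented numbers, textbook formulas in
# crudely simplified form) - the stage that is shared across people.
def opponent_channels(L, M, S):
    """Crude opponent-process approximation from the three cone responses."""
    red_green   = L - M            # positive = reddish, negative = greenish
    blue_yellow = S - (L + M) / 2  # positive = bluish,  negative = yellowish
    luminance   = L + M            # brightness is carried mostly by L + M
    return red_green, blue_yellow, luminance

# Hypothetical cone responses to a yellowish light: L and M strongly
# driven, S barely driven at all.
print(tuple(round(x, 2) for x in opponent_channels(L=0.9, M=0.8, S=0.05)))
# -> (0.1, -0.8, 1.7): slightly reddish, strongly yellow, bright
```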
They wouldn’t have to. All that would be necessary is for your brain to map the qualia that I call ‘blue’ to a stimulus incoming from your cones that detect ‘green’ - and so on for the others; that would effect a 120 degree rotation of hue (at least with respect to a comparison of your subjective qualia with mine). The physical mapping of the cones to the brain might be fairly consistent from one person to the next, but the network that processes it and attributes qualia is a one-off custom build on top of that, created on the fly as the brain develops in infancy.
No they wouldn’t, because you’d have no more or less reason to call your versions of colours ‘warm’ or ‘cool’ than I do; it’s not like you would be wandering around wondering why the grass looks such an unnatural shade, because that shade is the one you learned means ‘natural’ - and exactly the same goes for ‘warm’ and ‘cool’ colours and so on.
I don’t think most people who have given the matter serious thought would mean that, either. First, it’s not clear that talk about emergent/supervenient properties straightforwardly reduces to talk about the base. If you look at a picture of Einstein on your screen, ‘I see Einstein’ and ‘I see pixels arranged a certain way’ are materially distinct propositions, in that one may be true without the other.
But moreover, the metaphysical thesis expressed here is also far from universally held. The (self-styled) best-developed scientific framework for understanding consciousness, Integrated Information Theory, would not hold to such a thesis, for instance, due to its panpsychist commitments (although it would probably endorse a thesis of ‘equivalent realization = equivalent experience’). In general, only slightly over 50% of philosophers commit to a thesis of physicalism, or at least did so in 2020; with the recent rise in popularity of panpsychism and even idealism, I’d expect it might be less nowadays.
Let’s say your cones map to the experience of the colours red, green and blue.
My cones also map to the experiences of red, green and blue, as I have learned them, but if you were able to see through my brain, you would call those three colours, in that order, blue, red and green.
We both look at two light sources - one of them is pure monochromatic yellow; the other is a mixture of red and green that stimulates your red and green cones in exactly the same way as the monochromatic yellow does, and so appears exactly the same as it. In both cases you’re seeing the qualia you call yellow.
It does exactly the same to my cones, and so I see the qualia I also call yellow, in both cases, because that’s what I learned to call it.
But if you were able to see through my brain, because my red is your blue, and my green is your red, the intermediate colour you see through my brain is what you call magenta. It actually looks that colour to me too, but that’s the colour I have always called ‘yellow’ and I see it when those first two sets of cones are stimulated equally.
For you, red+green light equals ‘yellow like monochromatic yellow’ and for me red+green light equals ‘yellow like monochromatic yellow’ - but the experience inside my head, for all that I have always called it yellow and it’s the colour of sunflowers and saffron and butter and labrador retrievers, is what you would call ‘magenta’, if you were able to experience it as I internally experience it - by some sort of mind-meld.
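Here is a tiny sketch of that thought experiment, with the hypothetical 120-degree rotation written out as an explicit permutation (the channel names and qualia labels are just the ones from the example above):

```python
# Hypothetical permutation from the example: how my internal experiences
# would be labelled in *your* colour vocabulary.
my_qualia_in_your_terms = {"red": "blue", "green": "red", "blue": "green"}

def blend_as_you_would_name_it(channels):
    """Name the experience produced by equal stimulation of two cone
    channels, translated into your colour vocabulary."""
    translated = frozenset(my_qualia_in_your_terms[c] for c in channels)
    blends = {frozenset(["red", "green"]): "yellow",
              frozenset(["red", "blue"]): "magenta",
              frozenset(["green", "blue"]): "cyan"}
    return blends[translated]

# Red + green light drives the same two cone channels in both of us, and
# we both call the result 'yellow' - but experienced from the inside,
# mine is what you would call:
print(blend_as_you_would_name_it(["red", "green"]))   # -> magenta
```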
We cannot today say whether you and I see the same yellow*
The critical thing is having a model for how the brain generates experiences like color, and the kind of description that you’re summarizing here – where we just see the neural correlates with no understanding of why a given structure is important – would still not be that model. That would be tinkering, not understanding (though still an important milestone on the path).
\* Aside on yellow
I picked yellow for a reason btw, because it is easier to show that you can see yellow with zero yellow photons “out there”. The wavelength of yellow light is 575-585 nm. However, when viewing yellow on a screen, zero such photons are going to your eyes. RGB displays will be firing red and green photons at you, as can be verified by looking at the screen from very close up. Of course, red and green photons striking the retina close enough together activate the cone cells in the same way that yellow light would have.
But my point is just against color realists: we can see yellow without there even being yellow stimuli. And there is no physical sense in which seeing actual yellow photons could be the “real” experience and seeing red+green could be a “hallucination”; the input to the brain is the same.
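If it helps, here’s a toy metamerism calculation along those lines - using crude Gaussian stand-ins for the L and M cone sensitivity curves rather than real cone fundamentals - showing that some mixture of a “red” and a “green” primary produces the same cone responses as monochromatic yellow:

```python
# Toy metamer calculation: Gaussian stand-ins for the L and M cone curves
# (NOT real cone fundamentals), solving for a red + green mixture that
# gives the same cone responses as monochromatic yellow. The S cone barely
# responds at any of these wavelengths, so it is ignored here.
import numpy as np

def cone(peak_nm, wavelength_nm, width_nm=50.0):
    """Toy bell-curve cone sensitivity."""
    return float(np.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2)))

L_peak, M_peak = 560.0, 530.0            # approximate peak sensitivities (nm)
yellow, red, green = 580.0, 630.0, 540.0

target = np.array([cone(L_peak, yellow), cone(M_peak, yellow)])     # responses to yellow
primaries = np.array([[cone(L_peak, red), cone(L_peak, green)],
                      [cone(M_peak, red), cone(M_peak, green)]])

# Intensities of the red and green primaries that reproduce the L and M
# responses exactly - i.e. a mixture the eye cannot tell apart from yellow.
intensities = np.linalg.solve(primaries, target)
print(intensities)                      # ~[1.42, 0.42] with these toy curves
print(primaries @ intensities, target)  # identical cone responses -> same colour
```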
The point is though, there is no privileged version of ‘yellow’ - there is light that has the property of wavelength, but there is nothing intrinsically ‘yellow’ about yellow light. It’s not buttery. It isn’t made of lemons. The photons have not been painted with yellow paint. It’s just light doing what light does. It doesn’t know it is yellow.
‘yellow’ is a label the human brain invented to be able to differentiate things from one another. It’s easy to demonstrate that people with functioning colour vision can differentiate colours. It’s technically possible to see how their brain is making those determinations, but none of this is about what yellow feels like.
If we get to the point where we can model a real living human brain with such fidelity that it produces the same neural responses as a human, given the same inputs, we might have created a machine that understands what yellow feels like - but then it just becomes another mind we can’t see inside of.
Let me be clear that I belong to the set of people that believes it is a big unsolved problem.
The very notion of a machine that has subjective experiences is unfathomable to me, and NB I have a background in both neuroscience and computer science. We can talk about what triggers experiences, and we can talk about how they make strategic sense within a set of complex behaviours (e.g. physical pain works better than reflexes, because a conscious agent can choose to endure pain for a greater goal). How the brain generates experiences, though, remains intractable right now. As do the kinds of questions we’d like to answer regarding the nature of such experiences.
Calling out big problems does not mean “therefore, God” or “therefore, souls” though.
I was preemptively calling out the kind of response I often get when discussing qualia, not suggesting anyone was saying this. The position that I was calling out was color realism – that yellow is a wavelength of light and that seeing yellow is necessarily causally connected to the detection of such a photon. But since no-one here is saying it, I’ll park it, I don’t want to straw man.
Okay, big unsolved important problem.
I don’t understand why you think a computer couldn’t have experiences, given a proper design. If you mean a computer running a thousand-fold better version of ChatGPT is not going to have experiences, I agree.
When I worked for Bell Labs, a lot of chips and systems we designed had cutoffs so that if the temperature rose above a desired level, there would be a shutdown of at least part of the system to avoid damage. We discovered that some of the chips from the Transmission Division (which handled microwave transmissions) could risk themselves by staying active even when above the temperature threshold, which might be vitally important. And you don’t need the conscious mind to endure pain. It appears that pain signals can be cut off unconsciously (makes sense - this is evolutionarily advantageous not just for intelligent creatures). This mechanism operates when someone is given a placebo for an analgesic, and the pain is indeed reduced.
I think it may be an unsolvable problem just because of the sheer number of variables or moving parts - the same problem as analysing neural network based AI models in computing - the operation of the individual parts is quite easily understood and is simple and logical, but staring at the changing value of a single node in a network doesn’t tell you much.
But when you zoom out to the extent that you’re seeing a portion of the network that’s actually doing some operation, there are now so many individual nodes in the scope, all changing at once, that it’s impossible to keep track of what they are all doing - it’s just too much to look at in any meaningful way.
With other systems, it’s not like that - for example a clock is made of atoms, and if you zoomed in to follow a single atom you would learn nothing about horology, but zooming out, the atoms are organised into parts like wheels and springs and levers and bearings - and the action and interaction of these parts can be understood at this intermediate ‘parts’ level, even if you ignore the fact they’re made of many atoms.
The brain, and large neural networks in computing, aren’t so much like that - they are just huge buzzing heaps of the smallest parts - zooming in shows not enough to be useful; zooming out shows too much to be useful.
I don’t know whether that’s true or not, I make no such claim.
My meaning was simply that I have studied cognition and behaviour both from a natural and an artificial intelligence perspective (though I am not claiming to be an expert in either; I’m alluding to degrees I got ~15 years ago). In both cases I am not aware of any model of how machines can have experiences. We learn more and more about what’s upstream or downstream of subjective experience, but the nature of the experiences themselves remains baffling.
Sure. Nonetheless, a lot of pain signals do get through to consciousness and we can choose to endure them. This is one of the most popular hypotheses for why we have pain (I think originally proposed by the neuroscientist V. S. Ramachandran). Knee-jerk reflexes don’t hurt, because that would serve no purpose. Organisms that respond to damage to their bodies in exactly the same way every time are likely just behaving reflexively, and need not have any “experience”.
Meanwhile, for more complex behaviours, pain is a way of “scoring” a state of affairs as bad while leaving it up to the agent to decide how to respond. It’s not perfect – being in agony from cancer is not useful information – but nothing in nature is.
Does the claim that a computer can’t experience anything assume current architectures, or machines in general? I can’t see how current ones can do it, but I can imagine special designs that could.
As for pain, I was in no way implying that our ability to unconsciously shut down pain means you can’t do it consciously. I don’t know if conscious control uses the same mechanisms, but it is at least an existence proof that it is possible.
Assuming the minds of humans and other biological organisms are wholly the result of physical processes (no spirits, ghosts etc)…
Reasons why a computer might be made to contain a mind:
If the human brain is a machine (albeit a wet one) - that is, a mechanism that is made of physical matter and obeys the laws of physics - then we already have an example that refutes any argument that machines absolutely can’t host a mind.
Reasons why a computer might NOT be made to contain a mind:
It might just be that all of the development we are doing in computing is on a diverging track from the creation of an entity that could have a mind. That is, it’s not impossible, but we’re not heading toward the actual possibility.
Or, it may be that the state of having a mind is somehow wholly dependent on the physical properties of the type of matter used to make the machine - in the same way that water has the property of wetting things at STP, but iron does not, and no amount of clever engineering will make iron wet things at STP. Maybe a brain only works if you make it from carbon and hydrogen and oxygen arranged into proteins and lipids.
(I don’t think this is likely to be the reason but we ought to consider it).
Just to expand on this a little - it might seem like this is just an engineering problem - that we don’t currently have the ability to monitor all of the tiny moving parts in the bigger picture simultaneously, but the steady march of tech progress will soon make it routine and ordinary to do that.
Maybe, but I think maybe not. We are talking about trying to watch millions, probably billions of neurons at once, and each neuron has thousands of connections; the connections aren’t behaving as boolean gates - they have weights and biases, and the rate of change is pretty fast.
So the task is something like: to carefully observe, record and understand the subtle, yet rapid behaviour of tens, perhaps hundreds of billions of interconnections all at the same time, over a period of time long enough to try to observe a thought or behaviour being processed.
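As a rough back-of-envelope (using commonly cited round numbers, all very approximate), the data rate alone is staggering:

```python
# Rough back-of-envelope scale estimate (commonly cited round numbers,
# all very approximate).
neurons       = 86e9     # ~86 billion neurons in a human brain
syn_per_cell  = 1_000    # order-of-magnitude synapses per neuron (often quoted as 1,000-10,000)
samples_per_s = 1_000    # sample each connection at ~1 kHz to catch millisecond dynamics
bytes_each    = 2        # say 2 bytes per sampled connection state

connections = neurons * syn_per_cell
data_rate   = connections * samples_per_s * bytes_each   # bytes per second

print(f"{connections:.1e} connections to watch")          # ~8.6e+13
print(f"{data_rate / 1e15:.0f} petabytes per second")     # ~172 PB/s, before any analysis
```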
I dunno. I think this might just remain beyond our grasp.
One hint that qualia are different for different people is that some people experience ‘cross-overs’ between sensory experiences. These crossovers are known as ‘synaesthesia’, and they are very diverse.
Here’s a list.
There seems to be a wide range of possible crossover events occurring here, and some of these crossover events appear to be shared by numerous people (even if the fraction of the total populace is small). But the fact that synaesthesia comes in so many different types suggests to me that qualia are themselves both varied and classifiable. Eventually, when we do have a true science of qualia, we will probably find that there is a whole taxonomy of different types of experience.
Possibly this taxonomy will only be useful with respect to people who experience these curious sensory crossovers; but I am fairly sure we all have some diversity in our experience of sensation.
Once, many, many decades ago, I was foolish enough to try LSD, and that caused me to experience a crossover between smell and colour; this effect did not persist, but it seems likely that we could all be induced to experience synaesthesia of some sort given the right neurochemical intervention. Could that help with the classification of internal experiences? Perhaps, or perhaps not. But I suspect it means that we are only at the start of exploring the subjective experiences of the mind, and that there are many possible routes to follow towards the goal of understanding.