That’s certainly what I was aiming at, with the added point that it’s not always an especially faithful representation of the world (a point I only really laboured in order to emphasise that it’s a synthesised representation, not some ‘direct view’ of the thing itself).
Our fundamental reality might be nothing more than ripples in gauge fields. Lucky for us, we don’t have the sensory organs to tune in at that level, and frankly, that’s fine. It sounds about as thrilling as a Yanni concert on repeat. Instead, we experience a higher-order version of reality—a bit of an illusion, perhaps, but at least it’s more entertaining than life in Gauge Field Heights.
When it comes to awareness, we humans are an interesting case. Our external awareness is solid enough (most of us manage to avoid walking into coffee tables), but internally? That’s where things get fuzzy. We don’t have a detailed play-by-play of what’s happening with our guts or how our heart rate ebbs and flows during the day. Thankfully, our autonomic nervous system (ANS) handles those backstage tasks so we can focus on more important things like binge-watching the Kardashians.
Consciousness—our subjective experience—is something we probably share with plenty of other animals with somewhat complex brains. But self-awareness? That extra layer where you realize, “whoa, I’m aware of being aware”? That’s rarer. Dolphins, elephants, great apes, octopuses, and even some clever birds dabble in it, but humans? We’ve turned it into an Olympic sport of navel-gazing.
And then there’s the electromagnetic (EM) spectrum. We’re stuck with a tiny sliver of it—the visible light range (roughly 380–740 nanometers). Evolution didn’t hand out X-ray goggles or gamma-ray specs, so here we are. Bees, meanwhile, are out there raving under ultraviolet lights, and snakes rock the infrared like heat-seeking missiles. But nobody, no creature on Earth, gets the full VIP pass to the EM spectrum party. Maybe it’s for the best. Imagine being bombarded by all of it—your brain might just explode.
Imagine this: a hyper-advanced AI or alien species that can see it all—radio waves, gamma rays, infrared, the works—and handle it like it’s no big deal. Pair that with flawless internal awareness, streaming every cell’s activity in crystal-clear HD. What would that kind of experience—qualia—even feel like? Maybe it’s like swimming through a cosmic light show, or hearing the ultimate symphony of the universe. Or maybe it’s something so out there, we don’t even have the words for it—except, hey man, that’s cool.
Well, the EM spectrum is infinite*, but how rich an organism’s color perception is comes down to how many types of color receptor its visual system has. Humans are trichromats (some women do carry a fourth kind of color receptor, but its sensitivity curve largely overlaps one of the standard three, so they are not true tetrachromats the way most birds, say, are), so our vision is built on three color channels, and a TV with only RGB channels can reproduce a large proportion of all the hues we can see (there’s a rough sketch of why below).
But yeah, I would agree that it’s an open question right now how colorful a world an organism could hypothetically perceive, and where the sensation of what those colors look like comes from.
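To make the three-channel point concrete, here’s a minimal sketch in Python. The Gaussian sensitivity curves and peak wavelengths are made-up stand-ins for the real cone fundamentals, but the logic is the point: every spectrum collapses to just three numbers, so a physically different mixture of three primaries can reproduce the identical triple.

```python
import numpy as np

# Wavelength grid covering the visible range (nm).
wl = np.arange(380, 741)

def gaussian(peak, width):
    """Toy sensitivity curve -- a Gaussian stand-in for a real cone's response."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Hypothetical S, M, L cone sensitivities (real curves are broader and skewed,
# with peaks only roughly in this neighborhood).
cones = np.stack([gaussian(445, 25), gaussian(540, 35), gaussian(565, 40)])

def cone_response(spectrum):
    """Everything the visual system keeps of a full spectrum: three numbers."""
    return cones @ spectrum

# A narrowband 'yellow' test light...
yellow = gaussian(580, 5)

# ...and three narrowband display primaries (roughly R, G, B).
primaries = np.stack([gaussian(620, 10), gaussian(530, 10), gaussian(460, 10)])

# Solve a 3x3 system for primary intensities that yield identical cone responses.
A = cones @ primaries.T          # each primary's effect on each cone
weights = np.linalg.solve(A, cone_response(yellow))

mixture = weights @ primaries    # a physically different spectrum...
print(cone_response(yellow))     # ...producing the same three numbers
print(cone_response(mixture))
```

(If one of the weights comes out negative, the target color is outside the display’s gamut, which is also a real phenomenon.)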
* There’s a limit to how small a photon’s wavelength can be: below a certain wavelength, a photon would carry enough energy to form a black hole; and even if that turns out not to be the case, the Planck length is at least a hard limit.
But at the top end, I don’t think there’s believed to be any limit.
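For what it’s worth, the back-of-envelope version of that footnote, using the standard definitions of the Planck energy and Planck length, is a one-line calculation:

```latex
E_\gamma = \frac{hc}{\lambda},
\qquad
E_\gamma = E_P = \sqrt{\frac{\hbar c^5}{G}}
\;\Longrightarrow\;
\lambda = \frac{hc}{E_P} = 2\pi\sqrt{\frac{\hbar G}{c^3}} = 2\pi\,\ell_P \approx 1\times10^{-34}\ \mathrm{m}
```

That is, a photon carrying the Planck energy has a wavelength of order the Planck length, which is roughly where you’d expect quantum-gravitational effects (black-hole formation included) to take over.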
I mean, I can certainly agree with that in a colloquial sense, but the danger of such wording is that it hides a great deal of philosophical complexity and invites being taken at face value: that there are just these things called ‘representations’, that they pop up in our brains, and that by doing so we perceive the world (imperfectly, as it were). But to get back to the topic: do we even know what ‘representations’ are at all?
And here things become really very difficult. It’s hard to find an example of a theory of representations that doesn’t either ultimately eliminate them tout court, or collapse back to some agency surreptitiously doing some form of interpretational work to turn something into a representation. Because that’s how we think of representations ordinarily: the word ‘dog’ represents a dog because a person sufficiently familiar with the English language can interpret it as referring to a dog. In the absence of such an interpretation, it’s nothing but a set of scribbles; it doesn’t have any particular connections to dogs at all. It has what philosophers call ‘derived intentionality’, because its referential nature ultimately relies on the symbol-using capacities of an external agency.
So the brain, to get some approximation of the world into it, apparently needs to traffic in particular vehicles—neural excitation patterns, say—that somehow need to acquire representational content, but can’t do so in the way that all the representations we’re familiar with do, on pain of circularity. This is a very narrow path to walk, but there’s little evidence of the caution needed to navigate between the Scylla of eliminativism and the Charybdis of homunculism.
It’s not like the brain creates a model, and then we observe the model and reflect on the model and how pretty it is.
Rather, the act of experiencing is the model. Our brains work over sensory data, but that data is necessarily filtered by the sense organs, then by the transmission to the brain, and then by how the brain assembles it into experience.
Ok, so in this usage, representation is not used to mean some symbol that stands for something else in our experience. Rather, our mental construct of what a dog is is a representation of what that dog really is. Within us, that is the dog, not a symbol that means dog, but we recognize that our experience is a limited form of whatever the thing is independent of our minds.
The word ‘dog’, in its written and oral forms, is a representation of the form you describe. The thing in our brains is a representation of the dog reality.
But of course, the issue is how what is in our brains—patterns of neurochemical excitations, ‘tiny salty squirts’ into synaptic clefts, etc.—can come to be a ‘representation of the dog reality’, whatever that phrase may denote. As I said, we know in the ordinary case how initially meaningless entities—scribbles on paper—can become representations of something beyond themselves, but that’s not a story we can appeal to with respect to whatever it is that does the job in our brains.
Well yes, because that’s the question about every experience. The brain is doing some neurochemical thing, and that creates us somehow.
But all that phrase “representation of dog reality” means is that whatever we experience as a dog is not the dog itself, just the processing of the data by the brain into an experience.
Looks to me like we are trying to run before we can walk. If we want to know how ‘representations’ work in the human mind, we would need to understand the results of half a billion years of metazoan evolution. Find out how nematodes, hunting spiders and primitive chordates view the world, then work towards human representations. Our minds are probably full of half a gigayear’s worth of evolved biological ‘cruft’ that makes understanding human consciousness almost impossible.
It’s hard to ask a nematode how it feels. Or rather, get a meaningful answer.
I don’t see why that should be the case. Understanding the human eye, or the bird’s flight, doesn’t require understanding all of the intervening evolutionary stages that led to the development of these features, but is perfectly possible on its own terms.
Well, let me know when we can replicate the flight of a hummingbird or the function of a human eye on the same scale with the same level of ability. The flight of a drone is impressive, for certain, but it is different in many ways from the natural flight of a bird.
Human consciousness is the result of a long chain of evolution, including many wrong turns and forgotten adaptations. It may be the case that we will be able to manufacture thinking, conscious AGIs within the next century; but these thinking machines won’t include all the appendixes and spandrels that the human mind retains, so there is no real guarantee that their experience of qualia will be identical to ours.
Understanding something doesn’t entail the ability to replicate it. We understand the sun (at least as regards its basic workings), but that doesn’t mean we can build one.
Stars appear relatively simple; the Eddington limit gives a reliable handle on the gross behavior and mass-energy flow of a star. But when you look at the magnetic fields that churn and quench in the chromosphere and cause mass ejections, they are far from simple and quite unpredictable. Human consciousness is like that: philosophers can’t solve the mysteries of human mentation just by thinking about them.
We will need to get down to the sub-cellular level and below and interact with these information flows in real time. Neurotechnology is nowhere near that level of capability yet.
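(For reference, and assuming the textbook idealization of ionized hydrogen with Thomson-scattering opacity, that limit is the luminosity at which outward radiation pressure balances gravity:

```latex
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
\approx 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\ s^{-1}}
```

A single bulk parameter, the mass, fixes the gross energy flow—which is exactly why it captures a star’s coarse behavior while saying nothing about the chromospheric details.)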
Sure, but those things are also not relevant to a basic functional understanding of the star. Likewise, to understand a computer, you need not know anything about the physics of p-n junctions or whatnot, only about the functional principles governing its organization. So I see no reason to expect that details at the ‘sub-cellular’ level should be relevant to the understanding of consciousness (not saying they can’t be, but so far, I don’t see any argument that they ought to be).
Certainly, while my own model predicts the existence of a certain kind of reentrant structure in the brain, it’s entirely independent of the microscopic details of its construction.
Quite probably, your ideas on consciousness are very relevant to the concept of consciousness as a whole. Indeed, they may help us build our own conscious AGIs from scratch, or by developing them over time in a controlled manner. But I doubt that artificial minds will be exactly congruent with naturally-evolved ones, and I doubt that they will experience qualia, and instinctive behaviours, and emotions, in quite the way we do.
Way back in post #104, I proposed a thought experiment that might allow us to experience each other’s qualia by direct neural linkage. If I used this method on Mijin’s pain-robot, would I feel the pain that the robot experiences in just the same way? Would robot-pain be excruciating, or would it be nothing more annoying than a set of flickering numerical values?
We can’t know until we do the experiment, and we can’t do the experiment until our technology is significantly more advanced. Problems in consciousness can’t be solved by theory except on a very basic level, just like problems in astronomy. Observation is the key.
That seems just like a mollifying strategy: it’s just too complex, no use thinking about this now. I think that’s way too defeatist. The interesting questions right now are basic ones: how is it possible at all to have whatever goes on in our brains refer to something out there in the world (if that’s indeed what happens)? How is it that this sort of thing then feels like anything?
This is like sorting out how flight works at all: it doesn’t need a detailed understanding and reproduction of specific instances of flight-capability, just an effective theory of lift and airfoils. We shouldn’t get lost in details that are irrelevant to the basic questions at hand.
(As for my own ideas, if they’re even sort of on the mark, there won’t ever be a program that gives rise to consciousness, and the idea that one could share the qualia of another is conceptual nonsense.)
I agree with @Half_Man_Half_Wit on this one.
There’s nothing inherent to understanding the brain that requires us to study its evolutionary history over hundreds of millions of years. Which is just as well, because that’s largely impossible: our ancestors are gone, and modern nematode worms or hunting spiders would merely possess similarities to some earlier stage of our evolution.
There are many teams around the world studying these simpler/smaller brains, of course, and it’s a promising approach. But the same is true of studying the human connectome, or replicating human behaviour with neural nets, or many other approaches. It’s a tough set of problems and we’re trying to find insights any way we can.
This is the crux of the biscuit. We have two linked hemispheres in our brain, connected by a high-bandwidth data channel which can be disrupted. My contention is that more than two hemispheres could be linked in this way, and that this should allow us to share qualia just as the two hemispheres of our brain share them. To settle the dispute, we would need to try it.
I know. If the connectome of a nematode, or a hunting spider, or a mouse, contains some ineffable secret, we will never understand consciousness. Or perhaps the linkage between the mind and matter will be decoded in these simple creatures first, and expanded to encompass the rest of the phylogenetic tree.
How many hummingbirds have flown to the moon? How many human eyes can discern a single atom?
We don’t know how the hardware works in that detail, but human brains still act in a consistent and often predictable manner. There is nothing in the physical universe we understand completely, yet we still function within it.