I just finished this book, *Nineteen Ways of Looking at Consciousness*, by neuroscientist Patrick House. It was fascinating, informative, poetic, thought-provoking and unexpectedly funny in places. One contention is that consciousness is a way to integrate complex inputs and coordinate movement and response. Life is movement, as it were. He makes the case better than I can, but for a good overview of what we know and why we think we know it, I can heartily recommend it.
The answer to these questions and so many others asked in this thread remains the same: it’s stimulus response. My outdoor lights have the simplest form of this, turning on the lights when it gets dark out. Add millions of other stimuli together and a neural network to process them, and it’s nothing but a more complex version of the same thing. Many of these concepts like qualia predate our modern understanding of biology. The qualia associated with eating or seeing an apple trigger hormones that provide additional stimuli on top of the mental images we form. If you like apples your brain may produce some dopamine that provides a pleasurable feeling. It’s not some kind of magic, it’s biological processing. I don’t need to know if a machine feels pain or not because I don’t care what a lying machine tells me. Our brains are complicated, but it’s just processing of numerous stimuli.
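That outdoor-light case really is the simplest form of the idea; a minimal sketch in Python (the threshold and sensor readings are made up purely for illustration):

```python
# Simplest stimulus-response: a light sensor drives an outdoor lamp.
# The threshold and readings below are arbitrary illustrative values.

DARK_THRESHOLD = 20  # ambient light level below which it counts as dark

def lamp_state(ambient_light: int) -> str:
    """A pure stimulus -> response mapping: no memory, no inner experience."""
    return "on" if ambient_light < DARK_THRESHOLD else "off"

# Dusk falling over an evening, mapped straight to behaviour.
readings = [80, 55, 30, 15, 5]
print([lamp_state(r) for r in readings])
```

On this view, a brain differs only in the number of inputs and the complexity of the mapping, not in kind.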
If pain is defined (very loosely, and open to adjustment) as “a response to things that are damaging or potentially damaging”, then my pain experience might not be the same as yours, or as the robot’s, whose code says: “if arm ripped off, say ‘ouch’ and back away from woodchipper”. But I’m not clear what you mean by “experiencing” pain, or why it’s not clear whether the robot experiences it. It seems like you’re suggesting that mimicking the human physiological pathways of pain (or other feelings) is a requirement of consciousness?
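The hypothetical robot described above can be written out directly; a sketch of that hard-coded damage response (the function and its actions are invented here for illustration):

```python
# The hypothetical robot's "pain": a fixed damage -> withdrawal rule.
# Nothing here corresponds to an unpleasant feeling; it is purely a mapping.

def damage_response(arm_ripped_off: bool) -> list[str]:
    """Return the actions triggered by (potential) damage."""
    if arm_ripped_off:
        return ["say 'ouch'", "back away from woodchipper"]
    return []

print(damage_response(True))
```

The question in the thread is whether anything beyond such a mapping is needed before we would say the robot *experiences* pain.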
It seems to me that there is no difference between a complex mapping between sensation and behaviour and the actual negative experience of being in pain. Are you assuming that there is an aspect of experience that is independent of the physical pathways between stimulus and response? If not, then isn’t experience by definition the outcome of that complex mapping?
I would certainly push back against such a definition.
Because it is defining a phenomenon in terms of its causal relationships rather than what it is. By this definition, someone in agony due to a cluster headache, indeed on the verge of contemplating suicide because of the agony, is perhaps not really in pain, because there has been no external stimulus and, as far as we can tell, no apparent damage.
To be honest, this kind of definition is quite telling. When it comes to pain there are three aspects: a) the detection of harmful stimuli, b) the unpleasant feeling itself, and c) the behavioural response. We have very good models of how (a) works, and we are making steady progress on (c). However, (b) still looks intractable. So how convenient to define pain in such a way that it omits (b).
But is the robot suffering? Am I causing mental harm by cutting off one of its limbs or is it the same as me kicking my toaster today? We don’t have a good model for answering these questions, that’s the point. Sure, we can WAG based on the complexity of its response but that’s not a model, that doesn’t tell us anything.
I don’t know why you’d ask / suggest this. I was asking rhetorical questions like whether we could make robots that feel much more pain than a human is capable of, and what the absolute limit of pain might be (or whether it even makes sense for there to be a limit). Of course I am talking about pain in the abstract and not suggesting only human pain counts.
Not assuming, pointing it out.
A knee-jerk reflex is not painful. You have a stimulus and a response, but it doesn’t even involve the brain so we feel nothing.
Then there are of course many phenomena that do indeed go to the brain but also are not painful; shivering, say, when you’re cold.
So just talking about the stimulus and response is inadequate in a description of pain, because a key part, arguably the most important part, is the pain centres of the brain being activated and the subjective feeling that the brain somehow creates.
And again, this is not some pie-in-the-sky fairy story about souls or whatever. A model of how the experience of pain is made would be worth billions to the medical industry; it could revolutionize our treatment of many conditions and may make anaesthesia unnecessary. It’s why cluster headaches are so interesting to neuroscientists (even while being very sad); a pathology that causes unimaginable pain apparently out of the blue can give us important clues towards a deeper understanding.
We already know pain does not require a physical cause, as in the case of amputees suffering from phantom limb pain. The use of a mirror box to see a simulated hand in motion can make the pain go away. Opioids used to treat pain are often reported not to make the pain go away; instead the patient feels the pain but stops being bothered by it.
I don’t see any mystery here. Pain is not some physical thing that infects our brains, it’s a perception.
The perception *is* the mystery; how, specifically, do brains create or form this kind of experience? What actually is it? Do all animals with a nervous system experience it or is their experience more like shivering? How can we know?
I don’t see how that is important to the question. The human consciousness is far too complex to analyze that way. There is still no reason to believe it is anything other than the complex interaction of an incredible number of stimuli that are not merely direct impulses from our sensory nerves but also the memories we maintain and the processing of our brains based on all that information. That is what it is. Asking how our brains work implies that there is something special about human consciousness, and it ain’t. It’s just a more complex form of a dog’s consciousness, or a gnat’s.
We study lots of individual aspects of those complex interactions, many of which are poorly if at all understood – possibly all of them are poorly understood, because sometimes we find out something about them that changes our previous understanding of particular aspects.
I’m dubious that we’ll ever understand the system completely. But, constructed as we are, why would we limit ourselves to studying the individual aspects separately, and not keep trying to understand the system as a whole – including the subjective aspects of it?
(Especially since considering the individual aspects only separately guarantees that we’re never going to understand any of them even halfway properly.)
That description does nothing to explain the sense of self or core identity. It doesn’t explain what memories are or how the self experiences them separately from the original event. It doesn’t explain the processing of our brains. There’s no explanation of how sensory input leads to an experience.
That’s like saying gravity is just falling. That’s what it is.
I agree that the human mind is a natural progression, and that the differences from dogs is more in degree rather than in kind. But there is something special there. We don’t see dogs forming languages, starting religions, making devices, or any other elements of civilization.
Humans do something different than other animals. It’s reasonable to explore what causes that difference.
I agree. However, the question isn’t “Do we understand everything about consciousness?”. The answer to “Do we even know what consciousness is at all?” is “Yes we do”.
Our sense of self and our core identity is the result of each of us having a unique set of experiences in our life. Not simply the external events that affect us; it includes all the processed results from the internal stimuli that our brains produce. I don’t see the relationship to consciousness there. If every human had exactly the same set of experiences, processed by our brains in the same way, so that we could see any sense of self was an illusion, could we not then be conscious?
We know far more about gravity than that. We understand how gravity works mathematically, how it affects us directly, and its effect on the entire universe. Yet we still don’t completely understand what gravity is. Nor do we need to. It may be of great interest to some, but what we know already is far more than enough, as it was when we knew much less.
Agreed. That progression shows we do know what consciousness is at all. Otherwise we’d be stuck on old arguments that only humans have consciousness.
I don’t know how exactly that answers your question, but I don’t know of any reason to think it would be anything else. Stimuli whether physical like an image of an apple perceived from our eyes or virtually constructed from memory create new virtual stimuli in our brains. What else could there be to qualia?
No, we don’t. What you’ve said seems to me to be (paraphrased and considerably shortened) ‘we know that consciousness is the result of natural evolved processes physically occurring in the brain’.
We do know that. But that doesn’t tell us what consciousness is. That’s a different question.
Even that wouldn’t make any sense of self necessarily an illusion; not so long as it remained possible for one of us to die or to be born without that happening to all of us simultaneously. (And if it isn’t possible for one of us to be born while another one isn’t – who’s carrying the pregnancy?)
If you can see that, why can’t you see that we don’t understand what consciousness is?
We do understand what gravity is at all, and what consciousness is at all. Our understanding of everything and anything could always be limited, but that doesn’t mean we don’t understand anything at all. We know that consciousness is an emergent quality of complex event processing based on external events, memories, and hormonal interactions within our brains and externally with other organs. We know these extend into more complex forms of consciousness by including reflection, the recreation of experiences, and all those things that form another concept called intelligence.
That does not explain consciousness, though. It explains fundamental biological function. It explains why we experience feelings and why we have thoughts. Chemical reactions and sex drive even explain to some extent why we are inspired to create art, which is very difficult to reproduce reliably in machine models (mostly because machines do not have the biological needs that drive inspiration).
I have a singularity at the center of my biological machine that is, AFAICT, not possible to duplicate or transfer. The “me” is unique, and I feel compelled to do my best to see that it persists as long as I can. It is not evident that the “me” participates in the reasoning process or emerges thence, only that the part that reasons is aware of “me”. That is the basis of my contention that the function of “me” arises from the survival instinct.
You can assert that that’s all it is, but that’s not a model. It’s clear it’s not a model because we cannot use it to try to answer any of the questions I asked upthread.
And it is not merely the case that we have a crude model and we just need to scale it up, or fill in the details: we have no model right now of what qualia like pain actually are.
There are various tentative attempts at starting to form a model: e.g. Integrated Information Theory, Global Workspace Theory, Neuromatrix Theory, etc., but the authors of these would not claim that they are yet at the point of making verified predictions, and the existence of so many competing hypotheses is testament to that.
Let me be clear that stating that we don’t have a model is not some jump-off to a bigger point. It’s not arguing for souls, or *What the Bleep Do We Know*, or claiming it will be forever outside the purview of science (it does look like a particularly hard problem, but I have been careful to only say it appears intractable currently). It’s just a statement of the state of play.
How do you know that? What are you defining consciousness to be and how do you know what level different organisms have? Crucially, I was talking about pain, so since you have the answers, I would quite like to know if a gnat suffers physical pain…does it? Or is it just the equivalent of knee-jerk reflexes?