[QUOTE=Mijin;13222808]If I write a program that prints “ouch!” every time I tap a sensor, does it feel pain? Is touching the sensor now significantly different to, say, kicking a pile of dirt?
How exactly would I go about writing a program that feels pain, and how will I know when I’ve succeeded?
[/QUOTE]
You attach a pain sensor to a computer. When enough force or heat is applied, it issues an interrupt to the CPU, and a subroutine prints ‘ouch!’. That’s just how your body works, and you have succeeded when it prints ‘ouch!’. You have the OS monitor the state of the pain sensor periodically, and as long as it detects continuing input, it feels pain, just like you do. And if you mess with the wiring so that querying the device state ends up sending back the interrupt from the sensor, then you have phantom pain, just like your body might if you lose a limb. These aren’t mysteries at all, which is why I feel free to criticize Searle, who may know way more about philosophy than I do, but apparently knows almost nothing about computers.
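To make that concrete, here’s a toy version of that machine in Python. Everything in it (the class names, the threshold value, the callback standing in for a hardware interrupt) is my own invention, just a sketch of the architecture described above:

```python
PAIN_THRESHOLD = 50  # arbitrary units of force/heat (a made-up number)

class PainSensor:
    """Simulated hardware: fires an 'interrupt' (here, a callback)
    when the applied stimulus crosses the threshold."""
    def __init__(self, on_interrupt):
        self.on_interrupt = on_interrupt
        self.level = 0

    def apply_stimulus(self, intensity):
        self.level = intensity
        if intensity >= PAIN_THRESHOLD:
            self.on_interrupt(intensity)

class Organism:
    def __init__(self):
        self.sensor = PainSensor(self.pain_isr)
        self.in_pain = False

    def pain_isr(self, intensity):
        # The 'interrupt service routine': the subroutine that says ouch.
        self.in_pain = True
        print("ouch!")

    def poll(self):
        # The 'OS' periodically checks the device state; pain persists
        # as long as the sensor still reports input above threshold.
        if self.sensor.level < PAIN_THRESHOLD:
            self.in_pain = False
        return self.in_pain
```

Tap the sensor hard enough and it prints ‘ouch!’; keep polling and the pain state persists until the stimulus drops below the threshold.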
Pain is a sensation. An outside stimulus is not necessary to cause pain. There are plenty of neurological conditions that can cause pain without any external input. We can also cause pain with direct stimulation (Transcranial Magnetic Stimulation).
The detection of certain stimuli causes pain, in the normal case, but it is wrong to say that it is pain.
So again: how can I make a program that feels pain, and how will I know when I have succeeded?
I pointed out the case where wiring can be used to create pain. A computer can also create non-sensory pain. It’s just emulation. It can use reflection and emulate a pain sensor input when it sees that resources are low. So what? What is the magical quality of animals that can’t be emulated by a computer?
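A sketch of the “non-sensory pain” case: no external stimulus at all; the system inspects its own state (crude reflection) and injects the same signal a sensor would. All names and thresholds here are illustrative, not from any real OS:

```python
class System:
    """Toy 'non-sensory pain': the pain signal originates from
    self-inspection of resource state, not from any sensor."""

    LOW_WATERMARK = 20  # percent free resources (arbitrary cutoff)

    def __init__(self):
        self.free_resources = 100  # percent
        self.pain_signal = None

    def introspect(self):
        # Crude reflection: the signal is generated inside the machine
        # when it sees that resources are low.
        if self.free_resources < self.LOW_WATERMARK:
            self.pain_signal = ("resource_starvation", 100 - self.free_resources)
        return self.pain_signal
```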
Let me put it this way.
There are at least 3 elements to pain:
[1] The detection of damaging stimuli (nociception)
[2] The unpleasant sensation of pain
[3] The behaviour / desire to avoid pain
The normal model, and the one that most neuroscientists subscribe to, is that input from the nervous system goes to the brain which somehow generates the unpleasant sensation of pain – we don’t know how yet. The memory of this unpleasant sensation causes us to avoid pain in future.
Now if this picture is correct then we can describe the situation with AI very simply. Elements 1 and 3 have been implemented in a rudimentary way (and under the broadest definitions).
Element 2 OTOH; not only does no-one claim to have made any progress whatsoever, we aren’t even sure how to ask the question. We don’t even know how to tell when we have succeeded in our goal. Element 2 is the conscious phenomenon: the qualia.
Taking this problem further, imagine I’m a sadist, and I want to make an android that feels the maximum amount of pain. What is the maximum amount of pain? Infinite pain? What does infinite pain even mean? Essentially, what is pain?
The alternative, and the position of those that state that consciousness is an illusion, is to deny that there is an Element 2. It must be an illusion.
Aside from, incredibly, needing to deny that pain “feels bad”, such people also need to piece together what pain is.
While there are attempts to define pain in behavioural terms only, it’s difficult: this is because a key feature of pain is that it is not a knee-jerk response.
We can subject ourselves to pain when necessary. So to call pain “the will to avoid something”, doesn’t work.
And defining pain in terms of outside sensation falls for the reasons already described.
Simple self-reporting of the pain condition, regardless of any external stimulus? That is how undiagnosed chronic pain is treated in doctors’ offices worldwide. Ask the patient “do you hurt?” “How much, on a scale of 1 to 10?”
Picture a computer with health-monitoring software that detects an unknown fault that is reducing performance speed below 100%. The computer reports to the administrator via a log that something is inhibiting it from an optimal situation.
If I, as the administrator, measure the same drop in performance I “treat the pain” by trying to find and correct the unknown fault. If I measure 100% despite the computer saying that it is “in pain”, then I recode around the fault to get it to stop reporting (a virtual sugar-pill placebo).
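A toy version of that administrator logic (all names hypothetical; the point is only that “treatment” keys off the machine’s self-report, not off any external stimulus):

```python
class MonitoredComputer:
    def __init__(self):
        self.reported_fault = False  # the machine's own complaint log
        self.actual_speed = 1.0      # fraction of optimal performance

    def self_report(self):
        # "Do you hurt?" -- the machine's answer, independent of
        # whatever the administrator measures from outside.
        return self.reported_fault

class Administrator:
    def treat(self, machine, measured_speed):
        if machine.self_report():
            if measured_speed < 1.0:
                # Complaint confirmed by measurement: treat the pain.
                return "diagnose and fix fault"
            # Complaint with no measurable fault: recode around it
            # so it stops reporting (the virtual sugar-pill placebo).
            machine.reported_fault = False
            return "placebo applied"
        return "no complaint"
```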
The illusion is [2]. ‘Sensation’ is the same as detection [1]. ‘Unpleasant’ is the attribute which makes it something to avoid [3]. An ‘unpleasant sensation’ is just a subjectively defined term that means you are encoded to find painful stimuli something to avoid, while sounding like it means something of more substance. You can’t identify anything about the process of feeling pain that is not the product of machinery. Biological machinery in the case of animals, but machinery nevertheless. The unpleasant sensation of pain is nothing but the reflection of the machine processing pain input, from the sensor, an emulation of the sensor, reflection of other system processes, or just plain random.
I suspect that you are possibly labouring under the misconception that positing that there is subjective experience / qualia means positing that there is a soul or supernatural essence: it does not.
I just completed a Master’s degree in Neuroscience, and I can assure you there is a great deal of research taking place in trying to understand qualia. They are not seen as magical.
It just means accepting that there are phenomena caused by the brain for which we do not yet have a model. Denying that there are conscious phenomena OTOH is completely unhelpful – all the same questions remain unanswered, they are simply brushed under the carpet.
As I alluded before, you can’t just handwave subjective experience as an illusion because when it comes to subjective experience the term illusion becomes meaningless.
“You didn’t really imagine a plane; it was an illusion of an imagined plane”.
That’s apparently your position. I disagree.
Is pain unpleasant, or not?
If not, what are you defining pain to be?
Ok, we don’t have a model for the extremely complex system used by the brain. Fine. But it’s a complex collection of known phenomena. I can’t prove that, but there isn’t any evidence contradicting that either.
I agree with that, but it’s irrelevant to my point.
That’s about some difference between sensation and stimuli. Sensation is derived from stimuli by combining with reflection. Can you tell me what sensation would be otherwise?
Unpleasant is a subjective term. Unpleasant is a general description used for things to be avoided. Pain is to be avoided. So pain is unpleasant. Things are unpleasant or not, and that can be derived from the stimulus. So the sensor has to send two pieces of data: a number representing intensity, and an enumerator defining which type of stimulus it is, like pain or temperature. Using the intensity value, the enumerator, and reflection, you define a sensation, and whether it’s unpleasant or not.
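That two-field message, plus a hardwired “avoid” table, could be sketched like this (stimulus types and the table contents are illustrative):

```python
from enum import Enum

class Stimulus(Enum):
    PAIN = 1
    TEMPERATURE = 2
    PRESSURE = 3

# 'Hardwired' table: which stimulus types are to be avoided.
AVOID = {Stimulus.PAIN}

def sense(stimulus: Stimulus, intensity: int) -> dict:
    """Build a 'sensation' from the two-field sensor message:
    an intensity value and an enumerator for the stimulus type."""
    return {
        "stimulus": stimulus,
        "intensity": intensity,
        # 'Unpleasant' is just an attribute derived from the message.
        "unpleasant": stimulus in AVOID and intensity > 0,
    }
```

The same intensity arriving tagged as temperature rather than pain produces a sensation without the ‘unpleasant’ attribute set.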
We don’t know what kind of system is used by the brain to determine whether something is pleasant or not, but it has no need to be any more complex than I have described. The complexity will all be about the large number of subprocesses used by the brain and how they interact, not the simple processes like sensation of pain. We can already emulate the functioning of simple animals that can sense pain. It’s pretty basic, close to the hardware level. Human brains may be able to create a much more complex model of sensation using much more reflection, and more processing, but it’s still just a model.
If you can tell me how pain is something else, I’d love to hear it.
Similarly, sonoluminescence is probably the result of known phenomena (or at least, that’s what we should initially assume).
That doesn’t mean that we should be dismissive of the phenomenon, or declare it an illusion.
Well…I guess that’s the way Dennett might put it.
There’s definitely a relationship between sensation and memory, people on every side of the discussion agree with that.
OTOH, the reflection element looks like a sufficient condition, but not a necessary one: at some point all sensations are experienced for the first time. There is no evidence that that first sensation must come from external input.
Indeed, there is reason to suppose otherwise e.g. TMS has been used on people blind for life to get them to experience crude vision for the first time.
This set of sentences is rather confused. Firstly you’ve suggested unpleasant is subjective and that pain is unpleasant…therefore pain is subjective, no? That would end the discussion right there.
But assuming that’s some kind of mistake, the rest doesn’t make sense either. What does “pain is to be avoided” mean? There are lots of things I avoid that I don’t consider to be pain. The key thing with pain is the sensation.
As I say, the usual neurological model is quite simple in this regard: pain is a subjective “bad feeling”, and the memory of this bad feeling is what makes us generally avoid pain (but as I say: deliberately expose ourselves to pain if necessary).
Some neurologists, such as Ramachandran, consider that the whole point of qualia is to give us clear input but allow varied output.
If you take the “bad feeling” part out, then we have a real problem linking our inputs and outputs here. The behaviourist model leaves a lot of gaps.
Nobody has claimed to have developed a system that experiences qualia however.
Indeed the teams that have made the most complex behaving systems are often the first to stress that they do not believe their systems really feel pain.
Similarly I don’t think that there’s anyone saying “Don’t press that button: you’ll make it suffer!”
I think it’s also important to say that when it comes to sensation, it’s not like we’re 1% of the way to making a human mind: we’re 0% of the way. Subjective experience is a property of the brain that at the moment we can’t describe in formal enough terms to even begin implementing.
Where did you get the idea that I support the illusion side of this argument?
A list can be empty. It has no contents, but it can still be defined. And the first item can be generated internally. I didn’t say the first sensation came from an external stimulus. Nothing material is made completely of software, so the brain comes with some hardwired programming.
That works for people who are blind because their eyes don’t work. Not for people who are blind because their brains don’t work.
I mention subjectivity because it’s part of your game. Subjectivity is the difference in the way people perceive and process. It doesn’t mean that there is some magical quality to subjective experience; it’s because everybody’s brain is a little different. So pain is subjective, but still something to be avoided. The concept of something to avoid comes from your description. I would just look at it as a negative factor in an equation, where you are hardwired to seek positive results. Pain is not the only thing to be avoided either. And we avoid pain because of hardwiring. Memories may cause us to do more to avoid pain, but if the basic response to pain wasn’t hardwired, we wouldn’t have the unpleasant memory, and wouldn’t bother to avoid it.
The whole point of qualia is to allow varying results from the same input? Nonsense, the point of qualia is to keep academics from getting real jobs where they have to produce results. First of all, the input is never the same, because it’s not just the stimulus: reflection and memory are also inputs, and they constantly change. But it’s also in the algorithm, because we have a self-correcting and self-improving system in our brains, designed to produce different results even when the only change in input is the passage of time.
I’ve already explained what the bad feeling is. You don’t need to take it out, you just have to realize it is a product of the processing, reflection, and stimulus. I don’t know what you consider the behaviorist model; I use the model of the way everything works. Predictable cause and effect, without the addition of magic, or undefinable terms used to describe things not yet known. I may not know some things, but I’m pretty damn sure those things follow all the same rules as the rest of the universe.
Anybody can claim to have a system that experiences qualia, because it doesn’t mean anything. If anybody is trying to make a computer as unaware of its own operation as a human brain is, they may end up with one that fools itself into considering qualia to exist. And complexity has nothing to do with feeling pain; as I have already explained, feeling pain is simple. The complex part is how the human brain, not understanding its own workings, makes up stories about pain to give it qualities that don’t exist.
Because nobody would make a computer that is affected by pain the way humans are. Computers can have their memories erased, restored, and modified at will. If human memory worked that way, pain wouldn’t be such a big deal. We empathize with other people who feel pain because we know the memories of it are inescapable.
And along similar lines, people say all the time, don’t press that button, you’ll crash the system. They say it selfishly, but that’s why many people care about other people’s pain: to avoid their own unpleasant feelings associated with pain in others.
When it comes to sensation we are 100% of the way to knowing how the process works, while only a few % towards knowing all of the vast details used as data, and the incredibly complex processing architecture. And I explained subjective experience: it’s nothing but the differences in hardware, software, and data between people.
To me, it seems like what you are rejecting is the *reification* of consciousness - you are denying that it is a consistent “thing” or unified entity that people are experiencing. Obviously, from what we know about how the brain works, there has to be something prompting the statements about consciousness, verbal statements don’t self-generate in the vocal cords. But you’re denying any unity or locus of self.
In this regard, I’m right with you, I don’t think there is such a thing, either. When I refer to consciousness, I am referring to an abstract entity, a self-created process for making a seemingly chaotic jumble of statements not be overwhelming by creating a time-bound, “continuous”, linear narrative out of fragmentary memory snippets. An illusion in some ways (or, at least, some of its properties are illusions, like continuity), but a useful fiction and a real process, i.e. one carried out by actual neural activity.
I guess not. But I’m willing to be convinced.
What about reflex actions, which miss [2] (and the brain, often) completely. Or, at any rate, only have [2] happen long after [1] and [3]?
MrDibble – it sounds like we are more or less on the same page, excepting your phraseology here:
you are denying that it is a consistent “thing” or unified entity that people are experiencing
where your use of “experiencing” is either tautological or unintelligible to me. But I’m assuming that was just an accidental imprecision of language. Then again, maybe we still aren’t on the same page – does rejecting “the reification of consciousness” imply the rejection of “subjective experience” as an intelligible concept?
One thing that also still confuses me about your position is, when you read my OP, what did you think I meant when I said “I don’t believe I am conscious?”
I don’t understand the motivation for the axiom “consciousness is more than computation.” Do you rely on “subjective experience” of consciousness in order to come to that axiom? If so, we have to go back to our other argument before tackling this one.
Because some of the inputs into consciousness are such non-computational things as hormone balances, environmental effects, etc. The brain is bodied, and that contributes to what consciousness is.
No. It’s a rational result of what I know about neural functioning, especially the role of hormones.
Make it “that people are saying that they are experiencing” if it makes more sense to you.
No. Just because consciousness isn’t a single entity doesn’t make the result (the abstracted narrative) any less “real”, in the sense that there will be time-stamped memory locations filled with internal statements which form a more-or-less continuous chain stretching back into the recent past.
Dude, I still have no idea. ISTM that you are denying the reality of your own subjective experiences. Which is a bit of a poser for heterophenomenology, but not insurmountable.
TriPolar, I’m lost as to what your point is besides disagreeing with me.
My point is simple: one of the key unexplained aspects of brain function is subjective experience: redness, pain, etc.
At the moment there is no model to account for these phenomena but there is no reason to suppose that they are supernatural.
And it is not merely a scaling issue; no-one has claimed to have made crude qualia.
When you said: “The illusion is [2]”, where [2] had been defined as “The unpleasant sensation of pain”.
However, in this most recent post, you now declare:
If it’s an unpleasant memory, that implies that pain is unpleasant, and debate over. Indeed an “unpleasant memory” would be a qualia.
Again, if you are conceding that there is a “bad feeling” then I have no gripe at all with your position (other than perhaps the characterization of it as nothing special that computers will trivially implement. Personally I see it as one of the most significant problems still facing science).
Again, no-one saying it’s magical, we’re saying it’s unexplained at this time. There’s a difference.
Wait, what?
So if I were watching someone in agonising pain, if I knew that they were about to have their memories wiped, I would have no sympathy because I would know these memories are escapable?
But more fundamentally, why does the memory being inescapable make us empathize, or change what pain is?
At a wedding you wouldn’t say to the bride and groom “I feel so sorry for you: these memories are inescapable”.
This isn’t what people mean by subjective experience however.
If I happened to know that you and I see red exactly the same way, it would not help at all in understanding what red is.
However, due to the subjective nature of perception we don’t even know that you and I see red the same way, and perhaps never will.
Reflex actions are not painful.
This is part of Ramachandran’s point: where you need a varied response, there is qualia. Where a single response is sufficient, there is no subjective experience (I’m obviously describing his theory in very broad terms here, it is much more complex than this in reality, and supported by research).
There may be some confusion in certain cases because pain and reflexes may be associated.
For example, touch a hot stove and your hand will jerk away: this is a reflex action, which doesn’t involve the brain, and isn’t painful.
Much later, and along different nerves, a message arrives at the brain which again, “somehow”, creates the subjective experience of burn pain.
But note the two things are separate: we could activate the reflex nerve or the nociception nerve independently.
How are hormones “non-computational”? Ultimately why are you not treating them as one more computational process? Everything, including our brains, is made of atoms undergoing some deterministic process. How is any part of that “non-computational”? Certainly hormones can theoretically be simulated by a computer.
OK. Yes that makes more sense to me.
How do you really mean to use the word “real” above? Because what you describe it as, that there will be time-stamped memory locations filled with internal statements which form a more-or-less continuous chain stretching back into the recent past, is a description of something that could be theoretically stored on a hard drive. I don’t have the slightest issue with such a description, but the use of the word “real” seems to be implying some degree of reification, or something.
Yes, I am denying the reality of my own subjective experience. The very concept of “subjective experience” is not intelligible to me. I had to read the wiki page on heterophenomenology. It is related to my viewpoint, but it doesn’t seem to directly address it.
Is this re-statement of my position ambiguous in any way:
*Consciousness is nothing more than some deterministic process with the ability to cause an entity to say it is conscious. Think of a Rube Goldberg machine. Calling oneself conscious has no meaning other than to point to the existence of the deterministic process. There is no such thing as “subjective experience” other than the ability to relate to the outside world some aspects of the deterministic process.
Because consciousness is nothing more than some deterministic process with the ability to cause an entity to say it is conscious, the specifics of the deterministic process are irrelevant. They could be as simple as a lookup table, or as complex as the MDM. They may need to be relatively complex if passing the Turing test is a requirement to taking the assertion “I am conscious” seriously, but at the end of the day such a requirement is irrelevant to the question of whether there is something more than the mere existence of complex machinery, such as “experience.” (If you define “experience” to be something like “time-stamped internal statements in the context of a chain of…” then you are continuing to describe the machinery, not something that rises above it)*
BTW, after reading some more on the MDM, it seems to me more or less in accordance with my own viewpoint, other than the continual use of the phrase “conscious experience” in a way that appears to take as an axiom that “subjective experience” is a real phenomenon.
You can’t resolve a matter like this using ambiguous syntax.
A ‘bad feeling’ is not a ‘feeling’ which is ‘bad’, it is a ‘feeling’ which has the attribute ‘bad’ associated with it. The same goes for ‘unpleasant memory’. Memories are data, feelings are data combined with processing, and they are in themselves neither good nor bad. You have hardwiring to associate such attributes to memories and feelings, and software that does that too.
Good, but it’s not unexplained to the extent you claim. The details are not known, but I don’t find any reasonable doubt about the general mechanism. It will be explained eventually as a combination of known processes.
You would be less affected by sympathy or empathy if you know the pain will not have a lasting effect on the person. You may sympathize with someone undergoing a painful root canal procedure, but you are less affected knowing it will remove some lingering pain, than if the dentist was performing an unnecessary procedure. Same concept.
I say things like that to grooms. I don’t say them to brides because that may cause the groom more suffering. Really, I do say things like that in person, not just on this board.
Subjective def: taking place within the mind and modified by individual bias; “a subjective judgment”
You and I see the same color as red (I have a little problem with blue, so we’ll leave that aside). Red is a range of wavelengths of light. We see the same color and label it as ‘red’. What else you associate with red is subjective, because you have created different associations with the color than I have. We may share some hardwiring that gives us both the same associations to red, or not. That’s all it is.
Please tell me how qualia doesn’t fit my explanations. Perhaps there is something I have not understood. But as far as I can tell, qualia is the result of subjectivity, which is differences in both hardwiring and the generated processes that give us a complete mind. I doubt we are born with all the processes of a mature mind. I think we are born with generators that will produce those mature processes, based on experience, and they turn out different for everyone.

The reason IMHO that this is so complex is that our brains have limited reflection ability. We can access certain things, but not the underlying processes in detail. So if a computer were programmed in such a way that it could not analyze its underlying mechanism, it would easily hypothesize the existence of qualia as well.

I also disagree that crude qualia does not exist. It’s easy to force a program to take different paths based on reflection, or just randomness. I don’t think there is any difference between that and qualia. A related example is the redundant processing used in some spacecraft. Different algorithms and hardware are created so that redundant decisions are made that should align. The typical model is three systems: if one disagrees with the other two, that one is ignored; if all three disagree, lights start blinking. That’s not an exact analogy, but it demonstrates that computers can arrive at different conclusions.

We notice the concept of qualia because we communicate at a high level and try to explain what each of us means using terms we agree on. Computers tend to communicate at a lower level, with all pre-determined definitions, or just sharing the same code.
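That spacecraft voting scheme, reduced to a sketch (a simplified majority vote, not any particular flight system):

```python
from collections import Counter

def vote(results):
    """Majority vote over three redundant computations of the same value.
    A single dissenting unit is outvoted and ignored; if all three
    disagree, there is no answer ('lights start blinking')."""
    assert len(results) == 3
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("all three units disagree")
```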
That’s not my understanding - it depends on the reflex. For instance, in the classic hot-stove withdrawal reflex, it’s the same stimulus that causes both the reflex action and the pain sensation, and it’s the same nociception nerve that starts both. The endpoints are different nerves for the motor vs pain response, but that makes it one branched system, not two separate ones.
They’re a dynamical system. Like any such, you can predict general behaviour but not specifics - think of the 3-body problem, and then multiply that by however many different hormones and outside chemical agents are acting on the body at any one time.
Because they’re not - interacting chemical systems outside a lab (hell, even inside a lab) are classic chaotic systems.
No. You’re always going to run straight into quantum indeterminacy at some point. From environmental/cosmic radiation to chemical mixing, the world is awash with indeterminate systems. All of which impact on a bodied system like the mind.
No. Hormones can be only *approximately* simulated on a computer. Just like the 3-body problem can be simulated on a computer, but only approximately. Even if you *could* have a program that predicts the position and velocity of every particle in the universe (which you can’t - see Heisenberg), one (inherently unpredictable) quantum vacuum fluctuation is going to throw everything out. The Universe is not deterministic.
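For what it’s worth, you don’t even need quantum indeterminacy to make the “only approximately” point; classical chaos alone does it. The logistic map below is a textbook chaotic system (a toy, nothing hormonal about it), and it shows how an initial error far below any measurable precision swamps the simulation within a few dozen steps:

```python
def logistic(x, r=4.0):
    # Logistic map at r=4: a standard example of a chaotic system.
    return r * x * (1 - x)

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two starting points differing by one part in a trillion --
# far smaller than any physically measurable difference.
a = trajectory(0.2)
b = trajectory(0.2 + 1e-12)
```

Early on the two runs are indistinguishable; by the end they bear no resemblance to each other. That is the sense in which a simulation of a chaotic system can only ever be approximate.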
I guess the best definition I’d have would be “the existence of which will cause/influence behaviour outside the body” - people act as though their inner lives matter, and do actions based on their inner narrative. Therefore, even if the narrative is illusory or purely computational (which it isn’t), it still has to be considered in predicting the actions of sentient individuals when constructing our own mental models of their behaviour - so, our inner lives matter to more than just us. That makes them “real” - they can have physical effects which can be studied (not just statements, but also the semiotic clues I mentioned).
No. The memory locations could be, the dynamical system that creates the narrative couldn’t. Like I said earlier, it’s non-computational.
No, it’s still a distributed process. Reification would be if we continued to think of it as one thing and only that thing. But we see the strings, so we don’t consider the puppet an independent entity even though we treat its role in the story as though it were. That’s also the core of heterophenomenology - giving weight to people’s own narratives, but *not absolute authoritative weight*.
Well, using the method, I’d give some credence to your statement, but note that it conflicts with that of other similar persons who *don’t* deny their subjectivity, and I’d analyze your other statements and actions to see if you were always consistent with that stated view (you’re not - what do you mean by “intelligible to me”, for instance, if you have no subjectivity?)
It’s unambiguous, but of course, I don’t agree. Consciousness *isn’t* the string of statements, it’s the *ongoing process* of selecting, updating and editing that string.
See heterophenomenology again. We take subjective experience as a real (meaning - can be studied) phenomenon because we take the *intentional stance* of giving weight to entities’ statements of intent/belief regardless of the internal mechanisms in order to make predictions about behaviour. If there is predictive value (technically, I should say “operational value”) in treating inner experience as useful, then it is real. As real as a centre of gravity, even if such a thing has no actual material existence. Note that this is not reification because we are aware of the abstract nature of the concept in relation to any material basis.
Where did you both get the idea that deterministic and computational are the same thing? Computational processes which use non-deterministic input are non-deterministic themselves. And computational processes can model precisely the interaction of hormones; they just can’t predict the interactions of a specific set of hormones. Just as the 3-body problem is about predictions also. But emulation of biology as discussed in this thread is not about making predictions about the behavior of some biological system. It’s about processing a model of the biology, and creating the virtual behavior in that model.
Is someone arguing that a virtual human brain and a real human brain starting out the same will continue to act identically? There’s no reason to expect that.
Sorry, this thread is long, with a lot of complex language, and maybe I missed something in your discussion, but you seem side tracked on an irrelevant point.