Well, at the point where we are able to test philosophical ideas, we stop calling them “philosophy”. So it’s true by definition that philosophy is purely abstract, but that doesn’t mean that philosophical discussion never leads anywhere.
Also, a thing to bear in mind for philosophy is that while it’s often not possible to prove any particular proposition, it’s often pretty easy to find flaws in one. So, IME, the people most dismissive of philosophy are the people who want to espouse a particular philosophical position while handwaving away or just ignoring solid philosophical arguments against that position.
This often comes from religious apologists, who want to use first-cause arguments or claims about God being necessary as a foundation of logic, while being dismissive of all the arguments against such lines of reasoning.
But unfortunately it also sometimes comes from “celebrity physicists”: Hawking, for example, had many philosophical opinions and was also quite dismissive of philosophers, which is not a good combination. I think a lot of what he said could be shot down.
The two parts of this statement seem diametrically opposed to each other to me. If it’s impossible to tell the blind man what red looks like, then knowledge of what red looks like is radically unlike knowledge of anything else—because for anything else, from how a computer works to how to ride a bicycle, knowledge can easily be communicated. So if it’s true that you can’t tell a blind man what it’s like to see red, then consciousness differs from every other area of inquiry.
Again, the first and second part are in direct conflict. If consciousness is amenable to scientific analysis in the usual way, then you should be able to tell the blind man what it’s like to see red, by simply teaching him that scientific analysis. “The brain works in mysterious ways” certainly isn’t a good answer in any case.
Yes (although I’d wager my thoughts on what that means differ quite a bit from yours). But I also think this approach has big problems we won’t get around by vaguely handwaving at the complexity of the brain.
Well, that was sorta the point: to try and showcase a system that works very differently from a brain, yet serves to generate the same behavior, to test the claim that ‘conscious is as conscious does’, i. e. whether something is conscious is entailed by how it behaves.
This isn’t the Chinese Room, by the way; Searle was concerned with intentionality, not phenomenal experience, which is a different, but also difficult, problem in the philosophy of mind. The Chinese Room is allowed to rely on all manner of processing in order to produce answers; the lookup table merely looks things up.
Also, the physical realizability of the lookup table has no bearing on whether the argument succeeds; we’re solely interested in the truth of the counterfactual ‘if such a lookup table existed, it would be conscious’, which, like ‘if it rained tomorrow, I would get wet’ is true or false independently of whether it actually exists, or actually rains.
By having each entry in the lookup table be the entire history of the conversation up to that point. Again, trivially not physically realizable, but that just misses the point.
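If it helps to make the structure concrete, here’s a minimal sketch (all names and entries are just illustrative, of course) of what such a table amounts to:

```python
# A minimal sketch of the physically unrealizable lookup table:
# the key is the entire conversation history so far, the value is
# the canned reply. No processing happens anywhere; answers are
# only ever retrieved.
lookup_table = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Hi there.", "How are you?"): "Fine, thanks. You?",
    # ... one entry per possible history, hence the combinatorial
    # explosion that makes the table unbuildable in practice ...
}

def reply(history):
    """Look up the reply for this exact conversation so far."""
    return lookup_table[tuple(history)]
```

The point is that `reply` never computes anything: unlike the Chinese Room, it only retrieves.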
Good. So you hold that consciousness is entirely determined by behavior. Then, are fish conscious? What about bats? What about a thermostat? What behavior is sufficient to decide conscious experience? Only human-equivalent behavior?
When does behaving a certain way require conscious experience? Because for any behavior at all, it seems easy to come up with a system that shows it without that system being conscious. Granted, we can’t keep the entire bundle of such behaviors in mind and see, in the same way, that there’s nothing conscious about it; so maybe something unknown is doing we don’t know what, and the whole thing suddenly becomes conscious. But is that really a satisfying answer to you?
No. It’s asking whether, in order to play chess, a system needs to experience itself playing chess. Playing chess manifestly is just behavioral: chess is defined by behaving a certain way. But you’re arguing that this behavior suffices to determine whether an agent is conscious, that is, whether it experiences something while showing that behavior. This goes beyond simple behavioral analysis, and utilizes a hypothesis that anything that behaves like a conscious being must itself be conscious, which you haven’t so far done anything to justify.
I don’t think that’s really historically accurate. Early philosophical thought was, at least when it comes to metaphysics, mostly concerned with what the world is made of—take Thales, and the idea that everything is water.
In order to point out that this doesn’t alleviate the explanatory burden: if there always is something, we still have to explain that something. The idea that the world is eternal does not do away with the question of why, or how, it exists.
The idea that ‘nothing’ is, in some sense, impossible is very common throughout the history of philosophy. Parmenides is the first that comes to mind.
Feedback isn’t the same as conscious experience, though. A control loop changes its state based on feedback, but it does so (presumably, although if you’re a panpsychist, you might beg to differ) without any conscious experience.
And everything in our mental problem solving that needs feedback can be formulated in terms of such control loops; complex ones, maybe, but still, there’s no indication that conscious experience is a necessary consequence of complex control loops.
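To make “control loop” concrete, here’s a minimal sketch (hypothetical names, nothing fancy) of the thermostat case:

```python
# A minimal sketch of a control loop: the thermostat's entire
# "inner life" is one state variable updated by feedback. It
# reacts to temperature without feeling hot or cold.
def thermostat_step(current_temp, target_temp, heater_on):
    """One feedback iteration: compare reading to setpoint, act."""
    if current_temp < target_temp - 0.5:
        return True       # too cold: switch the heater on
    if current_temp > target_temp + 0.5:
        return False      # too warm: switch the heater off
    return heater_on      # within the deadband: keep current state
```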
The evolutionary advantage of adapting to the environment by incorporating feedback into our behavior, yes. The advantage of being conscious of the whole process, no.
This seems confused. Non-conscious animals don’t feel pain (as they don’t feel anything, ‘feeling anything’ being what being conscious means), so they can’t react to pain at all. They react to stimuli, in the same way a thermostat reacts to temperature. The thermostat doesn’t feel hot when it decides to lower the temperature. Why do we? How do we?
Exactly! Even complex behaviors don’t need any conscious attendance. Sleepwalkers can perform highly complex tasks without being conscious of them. One could imagine evolutionary pressure shaping the behavior of sleepwalkers to be indistinguishable from that of a waking human; so how can evolution select for conscious experience?
Consciousness is not that big of a problem. I could observe it in my Nefurtari, or, for that matter, that one crow that lands on the tree in the back yard and squawks at me with an irritated tone.
It is entirely organic, rooted in the survival instinct and taking the form of the ectoplastic phanticulum. Living things developed it as a side effect of natural selection, though most living things compose no treatises on the subject. We probably will eventually build devices that can exhibit the symptoms of self-awareness, but it will not be consciousness because it cannot be holistic without the basis of need that creatures possess.
I never said that feedback and consciousness are equivalent - I said that consciousness is a form of feedback. Do you modify your thoughts based on examination of previous thoughts? For instance, do you come up with an idea, see flaws in it, and modify it? Voila, feedback.
I repeat my question about your experience of the subconscious. The ability to perceive our thoughts internally is what makes us aware that we are thinking beings. Is your subconscious aware that it is thinking? Is it aware of anything about what it is doing?
Maybe the right take on your blind man question is whether a non-blind person can imagine not knowing what red is. Likewise we as conscious people have a hard time understanding unconscious thought, thus the pathetic fallacy.
Not just feedback from the environment but feedback from our mental processes. We as conscious beings can teach unconscious beings complex behaviors, but they can’t teach themselves anything.
Really? We know that we feel pain because we are conscious, but animals certainly can feel pain. Any dog owner knows this. Reacting to stimuli is different. Dogs and other animals will avoid painful things. That’s obviously evolutionarily advantageous by itself.
We have plenty of internal feedback mechanisms in our bodies outside of consciousness. That’s why you can’t hold your breath until you die, for example. We do temperature regulation without using higher functions.
So can dogs. I trained guide dogs, mostly socialization, but once they were fully trained they could do things like refusing to let their partner cross the street when there was danger. I already mentioned that our subconscious can drive. But our subconscious cannot improve, as I said before. I’ll believe you when you show me a sleepwalker writing and revising a paper.
Not really. If we accept that consciousness is a brain function and nothing else, then of course we should not be surprised that it has limitations.
I’m incapable of imagining multiple dimensions, but I can be given the mathematics that explains how they work. The concept of “red” in the mind of a blind person could also be defined in that way. If they have any mental imagery at all, then it seems possible to get them to imagine light-wavelength changes as corresponding to different mental-image appearances.
I don’t think you’ve made that case.
I wouldn’t rule out the possibility that one day we may be able to. First we would understand how the brain processes red-light visual stimuli in sighted people, and then replicate this within the brain of the blind person.
Careful with that “usual way” though. Science is a method of rational enquiry and I think it is the only way of understanding how the world works and that goes for the brain as well. However, advancements in techniques and knowledge ensure that the methods used in that scientific enquiry can change and the “usual” techniques at the moment may need enhancement in order to unlock the secrets of the brain. But whatever those techniques are they will be employed using science in “the usual way”.
It isn’t an answer at all, it is merely a statement of our current knowledge.
Really? Does your definition of “physical” differ from mine?
I don’t promote “vague handwaving” as a method for understanding the brain.
Well, let’s say that we’re conscious of feedback mechanisms in our minds. I am conscious of pain, and that in itself doesn’t seem to require any feedback.
Regardless, there evidently exists feedback without consciousness: take the thermostat. Hence, the presence of feedback mechanisms, such as learning and revising plans, are not sufficient to conclude the presence of consciousness. So, what is it that makes feedback conscious in one case, and not so in another?
I’m not sure I understand exactly what you’re asking. Am I aware of my subconscious thought processes? Certainly not, that’s what makes them subconscious. But what does that establish wrt the question whether behavioral analysis suffices to decide whether something is conscious, and thus, whether behavior determines consciousness, and hence, whether evolution, acting on the level of behavior, can ‘select for’ consciousness?
So is your claim that consciousness is necessary for (self-guided) learning? Is a neural net performing unsupervised learning necessarily conscious, then?
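For concreteness, here’s a minimal sketch of unsupervised learning in the usual sense, with k-means clustering as a stand-in example, since nothing in it looks like a plausible locus of consciousness:

```python
# A minimal, illustrative sketch of unsupervised learning: k-means
# finds structure in unlabeled data with no teacher in the loop.
import numpy as np

def kmeans(X, k, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(steps):
        # assign every point to its nearest center
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels
```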
I don’t know what you mean by that, or how it applies to my argument.
You’re making my case for me: you’re incapable of imagining what it would be like to experience multiple dimensions, but you’re perfectly capable of knowing everything else about them. So it’s the experiential aspect that you invoke via imagination that remains inaccessible to you even given all of the objective facts about multiple dimensions—that is, this experiential aspect is fundamentally different from all the other aspects of multiple dimensionality. You can know, for instance, how light propagates, what the ratio of the surface area to the volume of a higher dimensional sphere is, and so on, without ever having come into contact with higher dimensions. However, there is one kind of knowledge, and one only, that requires you to directly come into contact with its object, and that’s experiential knowledge. Thus, this category is fundamentally set apart from every other.
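To make that concrete: the ratio in question can simply be written down from the standard formulas for an n-dimensional ball of radius r, no higher-dimensional experience required:

```latex
% Volume and surface area of an n-dimensional ball of radius r,
% and the surface-to-volume ratio mentioned above:
V_n(r) = \frac{\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2}+1\right)}\, r^n,
\qquad
S_{n-1}(r) = \frac{\mathrm{d}V_n}{\mathrm{d}r}
           = \frac{n\,\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2}+1\right)}\, r^{n-1},
\qquad
\frac{S_{n-1}(r)}{V_n(r)} = \frac{n}{r}.
```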
But even this (which I think is misguided, like trying to build what something sounds like from colors) hinges on whether they have any mental imagery at all. You thus share the intuition that without having an experience of mental imagery, knowledge about experience is forever inaccessible. This is unlike knowledge of anything else: I can know the sun without ever having seen it; I can know its size, its volume, its temperature, everything that makes the sun the sun, every objective fact about its existence. None of this will tell me what it’s like to feel its warmth on my skin: for that, I need to come into direct contact with this experience, I need to actually have that experience (or some sufficiently similar one, i.e. any experience of warmth will do).
Or, to take another approach: it’s easy, for anything, to come up with new exemplars once one has become acquainted with the general category. I know what characterizes the category of stars; I can invent stars that have never pierced the darkness of space. I can invent people that have never lived. Tell stories that have never happened.
But I can’t invent a color that I’ve never seen. I can’t imagine what it would be like, for instance, if my vision extended into the ultraviolet. I can’t imagine what it would be like to be a tetrachromat. Or, to take the canonical example, a bat, sensing the world by echolocation.
I think your trouble comes from failing to distinguish between an object and the experience of that object—hence, your appeal to imagining multiple dimensions, despite the fact that this is obviously an experiential act. Perhaps it helps if you consider what you can write down about something. You can write down every fact of the physical composition of the sun, enough for anybody to construct a new one, without being able to explain what its light looks like to anybody who’s never seen anything, or what its warmth feels like.
Well, you can quite easily dispel it: just find one other area of knowledge whose objects necessitate being in contact with them in order to grasp them; one bit of knowledge that can’t be communicated without being related to experience in one way or another. (Or, of course, find a way to communicate knowledge about experiences!)
It is what you proposed to ward off my probing regarding the need for answers to the questions posed by conscious experience—the brain is complex, and well, something something something.
Probably. For starters, I think it’s improper to consider the physical to be (just) the object of (the science of) physics; physics tells us things about structure, but the world doesn’t exhaust itself in terms of structure. There may be physical entities that are not properly subject to physical science. Galen Strawson has written insightfully on the subject. (There’s also a talk of his on YouTube, but I haven’t gotten around to watching that yet.)
You do gesture in the direction of ‘it’s complicated’ when it comes to the difficult questions rather than trying to face them head-on, though.
Hang on, I don’t say “it’s complicated” and leave it at that. The next step after admitting it is complicated is enquiry and science which very much is facing them head-on.
See, this is where I think we have a fundamental breakdown in communication.
I still do not accept that you have put forward any difficult questions that require anything beyond a better understanding of the workings of the brain and an acceptance that natural selection was the means of developing consciousness. If consciousness helps an organism, it is selected for; if subjective experience helps, it is selected for. The evolutionary benefits of those traits were probably greater than the ability to describe red to a blind man, so it is no wonder that the latter does not come easily.
Though of course the human brain does throw up anomalies such as synesthesia, where words and sounds and smells can be interpreted as colours and vice versa. In those circumstances you could give a blind man an experience of “red” by triggering it with the right verbal, aural, or olfactory cue. Weird, certainly, but just what you’d expect from the misfiring of a complex brain; and certainly, if that trait had conferred a strong enough evolutionary benefit, you would not be able to use the example of describing the experience of red to a blind man.
So I don’t see the subjective experiential nature of consciousness as a particular and distinct mystery. A creature benefits from being able to experience its surroundings and will do so to a greater or lesser extent. Consciousness and subjective experiences seem to me to be trivial examples of the “greater” extent.
See, this is all a technically difficult question, and if it can be solved or understood (not a given) it will be through the scientific method and not through philosophy. I just don’t see what philosophical musings add to this. Too often it is very clever people playing word games.
Well, the thread of the conversation is basically, you claim that there’s no fundamental problem with consciousness, I present some arguments that there is, which you then dismiss because ‘it’s complicated’.
Sure, but again, selection only works on the level of behavior, and it’s completely unclear in what way behavior and experience are related. In particular, it seems possible to have experiences completely removed from one’s behavior; yet, that’s not what we’re seeing. This is a problem, and one that needs to be addressed if we’re ever going to make headway on understanding consciousness.
How? What is it you can do with consciousness that you can’t do without?
This isn’t remotely as obvious as you make it out to be. Take the paper I linked above: Strawson proposes a coherent notion of how things might be, such that science can’t tell us anything about experience, yet consciousness is still a perfectly natural part of the physical world. Claiming that we know a priori what form the answer will take just blinds us to other options, and may lead to chasing rabbits down blind alleys.
What part of “all consciousness is feedback but not all feedback is consciousness” don’t you get?
Determining whether another entity is conscious is a known difficult problem. But it is irrelevant here, since we know we are conscious. I don’t know what you mean by behavior determining consciousness. We might be able to induce the existence of consciousness through behavior (and we might be wrong) but I doubt we became conscious because our ancestors acted in a different way.
A neural net “learns” through data. Maybe that is like learning through practice - which does not have to involve consciousness. But current neural nets don’t examine themselves and change learning strategies based on self-analysis. I’m not saying that it is impossible to do that, just that current learning methods don’t.
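To illustrate (a toy sketch with made-up data, not any particular system): a typical training loop is just a fixed sequence of error-correction steps, with no point at which the net examines its own strategy:

```python
# A minimal sketch of how a current neural net "learns": a fixed
# gradient-descent loop. Every step is laid down in advance by the
# programmer; at no point does the system examine itself and revise
# its own learning strategy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                           # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # toy targets

w, lr = np.zeros(3), 0.1                    # weights, fixed learning rate
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))          # predictions
    w -= lr * X.T @ (p - y) / len(y)        # error feedback, nothing more
```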
I get that perfectly well, which is why I asked you what determines whether feedback is conscious.
Also, ‘all consciousness is feedback’ is simply wrong: there’s no feedback in my being conscious about a persistent itch in my left foot. There’s just the itch, and me being conscious of it.
But how does, for want of a better image, evolution know we are conscious? If it’s ‘difficult’ to conclude whether a being is conscious, then how does evolution do the determination? Because if it can’t, then it can’t select for consciousness. So if you hold that consciousness is adaptive, you must hold that consciousness is determinable via behavior. Hence, my asking how, precisely, you can tell whether something is conscious merely by observing its behavior.
I don’t dismiss it in that way. I say that the workings of the brain are complicated and that we don’t know how consciousness works exactly, yet, but that it is a function arising from a complex brain is close to certain. There are plenty of unsolved problems in biology and consciousness is on the list along with all the others, not in a special section.
If you can’t see how subjective experience might drive behaviours which can be selected for, then I think you fundamentally don’t understand evolution.
In what way is this a problem? Is it that you can’t say how it happens, or why it might be selected for or escape selection?
Nothing, but to cover all the things that consciousness can do would involve creating a sufficiently complex machine that would be, in fact, conscious.
And as a natural part of the world, if anything can uncover the secrets of consciousness and experience it will be science; you can bet the house on that.
The itch sensation is transmitted to your brain. That you know you have an itch stems from being able to observe that sensation. Do you think you never have an itch when asleep? Better, ever wake up hungry? Do you think you only became hungry when you awoke, or did you only realize you were hungry when you awoke?
C’mon now. Evolution doesn’t “know” anything. If consciousness affects behavior (which I hope you accept) and lets us refine our behavior to be more successful in terms of reproduction, that’s enough to have it be selected for.
Whether or not other people are actively conscious, they act as if they are, and that is good enough for evolution.
They’d be closer.
One big problem with machine learning is that we don’t know why the system makes a decision. It would be nice if we could ask it. It might just be telling us a story (like we do when we are asked a similar question about why we did something), but it would be a start. If I were a Turing test examiner, it would be about the first thing I asked.
I’m late to the thread and still reading, but wanted to point something out if it hasn’t been already.
How does consciousness influence behavior?
We know in some cases how the physical states of the cells in the brain can influence or assist with behavior (e.g. pain receptor causing reaction/movement, or the grid of neurons helping with navigation), but where exactly is consciousness and how does it cause behavioral changes?
Some researchers are leaning more towards the “after the fact/tell a story” version, but even with that, there is still a question as to whether it’s providing value and if so, how.
Not directly related to your point but interesting:
Recent research on consciousness via MRI showed that conscious states tend to involve complex patterns of neural activity bouncing around, with inactivity in other areas. Unconscious patterns tended to be much simpler.
Being able to consciously consider options and learn from past experience and put yourself into the position of other people may well help you construct optimal strategies for whatever situation you find yourself in.
But, to HMHW’s point, all of that can happen from a functional perspective. That’s not really answering “how” consciousness can influence behavior.
If consciousness is just a side effect of the electro-chemical states of the brain, then it isn’t influencing behavior, it’s just along for the ride.
If it’s not just along for the ride, how do our cells etc. incorporate consciousness into their resolution of next behavior? What is the mechanism that provides transfer of information from consciousness into the mechanical workings of the brain?
Could it happen from a purely functional perspective? Yes. That consciousness does influence behaviour is a working hypothesis that seems to fit the facts best thus far.
You’ll have to explain that to me. What is it about being a side-effect that prevents it from influencing behaviour?
If you want an explanation of the exact mechanism you’ll find plenty of admissions that we don’t know yet. That’s fine.