If that’s the case your definition of consciousness is overly complicated in my opinion. The most complicated route from my house to the grocery store and the simplest have the same outcome. That I can make decisions to select a particular route is a sign of consciousness, how many choices there are doesn’t matter.
Well, the OP topic is “Do we even know what consciousness is at all?” and my answer would be “no, not really at all”. We can’t really seem to define what it might be, let alone figure out which creatures possess it. But I think “awareness of one’s own existence” is as good a definition as any. The Mirror Test, which is supposed to be a way to see which animals are self-aware, is passed by only a very few animals, such as great apes, dolphins, orcas and a couple of others (though the test is far from conclusive; the Wiki article I linked to says “agreement has been reached that animals can be self-aware in ways not measured by the mirror test”).
I’m also positing that true consciousness is something beyond mere self-awareness, in the sense of just being aware that one is an individual creature separate from other creatures; it also involves reflection upon, and questioning of, one’s own existence-- the big questions like “why are we here?” And I think it takes an ability to think abstractly to do so, which requires sophisticated language abilities. Yes, many animals are able to communicate, but not abstractly. Even Koko, the famous gorilla who knew sign language, was very clever, creating word combos like “finger bracelet” to describe a ring all on her own; yet one thing I read that struck me was that she never, or very rarely, asked questions. At least not any existential questions. She probably asked “when’s dinner?” and stuff like that.
Not to hijack a thread that was created in order not to hijack another thread, but I didn’t quite get that argument, or claim, that some people do not have an internal monologue. Doesn’t everybody?!? If a light is burned out I think “got to hit the hardware store for light bulbs. Anything else I need there? Do I still have that 20% off coupon?”
The simplest definition of anything is probably ‘it is what it is’, but it’s not a very useful one. I think the question people want to explore here is not what is the simplest, least-verbose term for describing the thing, but rather, what IS the thing it is doing, and how is it doing the thing it does?
Some people don’t have that. They have thoughts and can sometimes describe what it is like to have those thoughts, but insofar as they do, they describe a thought-life that is not composed of words.
To answer your question, nobody understands consciousness. Yet. It’s an open field of scientific investigation. Those who claim otherwise need to bring citations. (Those who offer hypotheses have more latitude.)
Can it be studied? Can it even be defined? The philosopher Searle answered yes and yes. He proposed the definition I gave in post 3 (and am replicating from decades-old memory, sorry). Here it is again:
Ok, then are spiders conscious? Searle wasn’t sure. Are dogs conscious? Searle answered yes. To those who doubted him he replied, “You don’t know my dog.”
Dreaming is a neurological process that we might speculate involves the systems that visualize the world. So I hypothesize that it could be an indicator of which species are conscious and which are not. Or not: I understand some fish don’t really sleep but rather enter into a kind of stupor. But I might be wrong about that.
I’m not sure why: it seems like a coherent POV, backed by research. It suggests to me that consciousness is a fairly primitive processing system, one conceivably conducted by ants, though not by single-cell organisms, which would only require stimulus-response systems. Interestingly, reproduction does not require consciousness (bacteria do it), so I’m unclear about what the Von Neumann paradox is.
Except we haven’t established what intelligence is, much less that computers have it.
Even so-called artificial intelligence might just be fake intelligence.
But what does it mean to be aware of thinking but not thinking?
I can agree the self is not the act of thinking, but the thinker. But that doesn’t really explain anything.
I once had a philosophy teacher who held that Descartes was wrong. He held that there is thought, but that there is no “I”, that it is just an illusion.
My feeling is that even if it is an illusion, it is still something. And the followup question is then how is it an illusion, and what would a non-illusion self look like differently?
Saying it is an illusion doesn’t explain it away.
So at what point did humans become conscious? Were Neanderthals conscious? Australopithecus?
Where in our evolutionary path did we move from blind animals to reflecting beings that experience existence?
Animals experience stimuli. They react to conditions, they feel, they remember, and they think and plan. What we don’t yet think they do is reflect on what they know and what it means.
Some here think the act of experiencing is itself consciousness.
Others say it is only through abstract reasoning, contemplation of our experiences, that we are conscious.
Certainly the operation of perception includes illusions, but to argue the experience itself is an illusion makes no sense. I have to be me to be deceived by it.
Good question, assuming one buys the theory I’m proposing, which I’m not even sure I do.
I did mention the theory of the Bicameral Mind a few posts back, which was popular in its day (the 1970s) but has since gone out of favor. I do not buy this theory; I think the timeline is far, far too recent, almost ridiculously so, but I think it does make for a good thought experiment of what consciousness may be and when did it first arise in humans (if indeed it did first arise in humans).
The von Neumann construction is simply a means to overcome a particular problem that crops up in both the theory of self-reproducing systems and the representational theory of mental content, namely a certain kind of regress. The doctrine of preformationism held that every self-reproducing organism contains a miniature version of itself within itself, which then grows into a full copy; but of course, this can only work if the miniature version has another miniature within itself, which in turn contains another one, and so on (at least if you want reproduction to be able to continue indefinitely). Von Neumann’s construction overcomes this issue by separating the process of reproduction into a semantic and a syntactic part—interpreting a ‘tape’, a plan, in order to build a copy, then treating that plan as a set of arbitrary symbols to be copied.
In the representational theory of mind, the same regress occurs in the form of the homunculus problem. If your sense data are somehow represented on an internal ‘screen’, this representation has to be perceived; but the way anything is perceived is by projecting it onto an internal screen, so whatever does the perceiving (the homunculus) needs its own inner screen, combined with its own inner watcher, and so on. (More generally, we meet a homunculus problem whenever we have to appeal to a capacity to explain itself.) Here, too, the von Neumann-construction can shortcut the regress, by enabling the screen to ‘watch itself’—that is, creating a representation that has a representation of itself (the ‘tape’) available to itself. So reproduction doesn’t entail consciousness, but the same sort of construction that enables reproduction also enables a kind of self-awareness, because the problems that need to be solved are close structural analogues.
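The tape/interpreter split described above is essentially the trick behind a quine, a program that prints its own source. Here is a minimal Python sketch (my own illustration, not from the thread): the string `s` is first interpreted as a template to build the copy, then copied verbatim as inert data, so no copy-within-a-copy is ever needed.

```python
# Von Neumann's trick in miniature: the 'tape' s is used twice,
# once semantically (interpreted as a template for the copy) and
# once syntactically (inserted verbatim as data via %r), so the
# preformationist regress of a copy-within-a-copy never arises.
s = 's = %r\nprint(s %% s)'
print(s % s)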
Yeah. The problem might be that we’re trying to understand our brains with our brains. Which does work for figuring out a lot of things about how our brains work; but maybe can’t work for figuring out all things about how our brains work.
However, due to the way our brains work, we’re unlikely to stop trying.
Humans are extremely visually oriented. Lots of other creatures are less so. Maybe they recognize others, and might recognize themselves, by scent – but we don’t have a scent equivalent of a mirror test. Maybe the lack of scent of a mirror image means to them “no real creature there, no need to pay any attention to it.”
Or, to some extent, each other. Which I think was what @Mangetout meant by that quote – though it’s possible I just didn’t understand how they were thinking.
I think we can all agree that there are some topics in philosophy which are largely navel-gazing; either they have a simple answer or they are just incoherent.
Consciousness is not one of them though, and I would urge you to consider the possibility that you are the one who is missing something.
Awareness is certainly part of consciousness, and colloquially we might sometimes talk about consciousness as only that, but within neuroscience, philosophy of mind, or psychology it is a much broader topic. And I would argue that the more interesting and vexing aspects of consciousness have little relationship to awareness.
The hard problem of consciousness is the critical problem IMO, and looks basically intractable. I also think the problem of personal identity is a key issue, which unfortunately often gets misstated as a “Ship of Theseus” problem.
When we talk about understanding, the critical thing is always where the rubber meets the road: what predictions and inferences can we make?
Let’s say we want to ask some questions about pain. I have a robot here that behaves as though it experiences physical pain. But how do I know if it is experiencing pain? What do I need to look at in its neural net to tell the difference between a complex mapping between sensation and behaviour and the actual negative experience of being in pain? How much pain is it in, and how can I compare it to your sensation of pain? Can I make it feel more pain than a human is capable of feeling? What decides what the limit of pain is?
Right now, we don’t know how to answer questions like those I just listed. And it’s a practical thing too: doctors would love to have concrete answers to such questions (with “robot” replaced with “any human”).
None of this is to say that consciousness is some magical ghostly thing. It definitely appears to be 100% neurochemical. We just don’t have a model right now for how a configuration of neurons can have experiences like pain.
The biggest hurdle is defining just what you mean by “consciousness”. Too many would-be philosophers skip over that step, which makes it inevitable that they’ll be talking past each other.
Once you have that definition, it can be anywhere from trivial to impossible to know what it is, or to recognize its presence, or whatever, depending on just how you define it.
Certainly. The trick is just getting more than two people to agree on just what those topics are, and then, what the simple answers should look like. In the end, that’s where much of the work of philosophy happens: all of us know certain things to be obviously and self-evidently true that just ain’t so.
The word ‘consciousness’, like many words, simply doesn’t always refer to the same thing in ordinary language, and any of the different meanings may be subject to meaningful philosophical inquiry. So a certain amount of parallel discussion is probably inevitable.
However, from my point of view, I think that the two most philosophically salient issues are those of intentionality and of the qualitative (or phenomenal) nature of certain mental states. Intentionality here doesn’t have anything to do with intent, but with being about or directed at something beyond itself—such as how your thought of an apple is directed at, but distinct from, the actual apple on your plate before you. The qualitative aspect then is the presence of the apple’s qualities, like its redness, or the sweetness and crunch of the first bite you take, in your experience.
Yes. Although the quote is originally about non-human alien minds, I think it probably applies to human alien minds. They walk among us. We are them.
When you get right down to it, people are not describing their own internal experiences of existence in the same way as each other. Not because they lack the terms to describe it, but because they are describing different things.
The brains of humans are all broadly similar in structure and low-level operations; neurons gotta neuron, but the emergent property of being a conscious mind is apparently not the same experience for everyone.
A significant dividing line in developing intelligence is abstraction. It seems unlikely that Australopithecus would have had the ability to understand a concept like “Thursday,” because that requires the kind of linguistic capacity that supports establishing a structure in which “Thursday” makes sense and is a useful thing to understand. To the best of my knowledge, there is no evidence, either physiological or archaeological, that Australopithecus had the necessary language capabilities to develop abstract thinking, and much the same is the case for Neanderthals.
Yet, the ability to abstract is not the same thing as consciousness, as far as I can tell. My cat appeared to have indications of what I would consider consciousness but had limited abstraction capabilities – she could recognize when something appeared out of its usual place, but could not necessarily connect a “why” or “how” to that.
One thing that sticks in my mind is what I would describe as a sort of voyeuristic empathy, such as when one is guiding a child through a learning experience and finds oneself seeing through the child’s eyes; I suspect some of us have experienced this. The ability to form an empathetic connection with another being is, IMO, a strong indication that that being has self-awareness – we can understand the why of the behavior of machine intelligence, but I do not think we can realistically empathise with it (this may be akin to a sort of ethnocentrism, but I think it is still fitting).
I believe you misunderestimate the Neanderthals, and probably the Australopithecines too. From here I quote, with lots of links:
Evidence that Neanderthals could think symbolically, create art, and plan a project like this one has been piling up for the last few years. Neanderthals in Spain painted the walls of caves and made shell jewelry painted with ocher pigment around 64,000 years ago. About 50,000 years ago, Neanderthals in France spun plant fibers into thread. And in central Italy, between 55,000 and 40,000 years ago, Neanderthals used birch tar to hold their hafted stone tools in place, which required a lot of planning and complex preparation. And archaeologists have found several pieces of bone and rock from the Middle Paleolithic—the time when Neanderthals had most of Europe to themselves—carved with geometric patterns like cross-hatches, zigzags, parallel lines, and circles.
I am also reasonably sure that they knew the moon cycles, so perhaps they had no concept of Tuesday, but they had a very clear idea of full moon, new moon, last quarter and first quarter.