Do we even know what consciousness is at all?

That’s a hardware vs. software thing. Their software is different, but the underlying hardware is pretty much the same. A neuron is a neuron regardless of species. There are enough similarities that I’ll extend them the benefit of the doubt. Even if they have a different kind of consciousness than we do, it’s still a consciousness of some kind. I do agree with what you say about computers. They lack consciousness of any kind.

I like that phrasing of “internal experience of existence”. What is the cutoff point? My best guess, and the reason I mention those particular organisms, is that it comes at the point where an animal has a brain as opposed to a scattering of neurons without a central nervous system. Do I know for sure that something like a fly, earthworm, or spider has an “internal experience of existence”? No, I don’t, but I’m willing to give them the benefit of the doubt. Maybe future advances in neuroscience will give us a better answer, maybe not. But the line has to be drawn somewhere, and that’s where I’m personally most comfortable drawing it.

Admittedly, that comes from my approaching it from a “would it be wrong to purposely torture this thing” vs. “does it even make sense to speak of torturing this thing” perspective. Insects, worms, etc. seem to me like they fall into the “it’s wrong to torture them” category, while things like mushrooms, bacteria, plants, and yes, sponges fall into the category of “does it even make sense to speak of torturing this thing”.

Yeah, I don’t think we’re all that different in our viewpoints. I just draw a distinction between ‘feeling’ and ‘thinking’ when it comes to consciousness. Referring back to the internal monologue thread, thinking, at least abstract thought, requires language; without an internal monologue I don’t believe true consciousness can be achieved. We wouldn’t have the ability to wonder what or who we are, and why we are here; all the big ‘W’s.

But then again, as I mentioned in my first post in this thread, I’m not really sure if we are even ‘conscious’ or if we are just barely intelligent enough to con ourselves into believing that we are. Can’t spell ‘consciousness’ without ‘con’ :crazy_face:

" His was a great sin who first invented consciousness . Let us lose it for a few hours."

  • F. Scott Fitzgerald in “The Diamond as Big as the Ritz”

Some unproven and likely totally “woo” theories I find interesting: transgenerational epigenetic inheritance, morphic resonance, and Terence McKenna’s idea that psilocybin ingestion led to a cognitive revolution.

Of course humans are conscious, because we define consciousness by what we experience.

Explaining what that experience actually is is a different matter.

Calling it “awareness” doesn’t really make it more clear. What is awareness? How are we aware? What is the “I” that is aware?

Saying it is an emergent property of intelligence doesn’t help. You now have to define intelligence, and then explain what intelligence means without a thinker, an awareness. “Emergent property” isn’t any more explanatory than saying “wet” is an emergent property of water. Saying “wet” doesn’t explain anything unless you’ve felt it. What makes “wet” wet?

I can say thought is a complex chemical reaction. That doesn’t make it any more clear. It’s easier to explain fire.

I could say consciousness is what the brain does. Still not an explanation.

I think awareness being an emergent property of intelligence is one thing we can rule out. At the very least, computers, which have intelligence but lack awareness, show that having intelligence isn’t a sufficient condition for awareness.

I’ve commented on this question earlier on a different thread, but consciousness is that which is aware. It does not think, it does not feel, but rather is that which is aware of the thinking and feeling. That inner voice that responds “36” to the question ‘what is six times six’ is not me and has nothing to do with consciousness. Descartes got it backwards, I am that which is aware of thinking, therefore I am.

Fewer, really. Looks like Spain understands we don’t need extra words to describe the same phenomenon.

I know what awareness is. It’s response to stimulus. I won’t quibble over simple animals whose stimulus responses have no more basis than a spinning top’s. Any animal or machine that involves some processing to determine a response to stimulus is aware. My computer is aware: it knows what its temperature is, how much memory and mass storage is in use, and what time it is. And now some software makes it intelligent, which is another simple word people like to associate with magical properties.
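
As a half-serious illustration, here’s a minimal Python sketch of that kind of machine “awareness”: the computer reading and reporting its own internal state. (This assumes the third-party psutil package; temperature sensors aren’t exposed on every platform.)

```python
# A half-serious demo of machine "awareness" as stimulus response:
# the computer reads and reports its own internal state.
# Assumes the third-party psutil package (pip install psutil).
from datetime import datetime

import psutil

mem = psutil.virtual_memory()      # current memory usage
disk = psutil.disk_usage("/")      # mass storage in use

print(f"Time:   {datetime.now():%Y-%m-%d %H:%M:%S}")
print(f"Memory: {mem.percent}% in use")
print(f"Disk:   {disk.percent}% in use")

# Temperature sensors aren't available everywhere, so fall back
# to an empty reading where psutil doesn't support them.
temps = getattr(psutil, "sensors_temperatures", lambda: {})()
for name, entries in temps.items():
    for entry in entries:
        print(f"Temp:   {name} {entry.current} C")
```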

I disagree. That definition, taken to its extreme, leads to the conclusion that everything is aware. My favorite example of this is the question “what is the best computer to calculate the flight of a baseball?” It isn’t a human, or a computer that uses transistors on a silicon chip. It’s the baseball itself. It responds to the stimuli of being hit by a bat, the wind, gravity, etc., and will demonstrate exactly how it will fly by how it flies.

WRT a temperature sensor, a kettle of water set on a heat source is similarly unaware of what the temperature is. The fact that it whistles when the water starts to boil isn’t evidence that the kettle or the water is aware. Everything else in the universe responds to everything else as well. That doesn’t mean that everything is aware.

What’s missing is the qualia. A computer, a baseball, and a kettle of boiling water don’t experience qualia. Barring a belief in solipsism, it seems safe to assume that things with brains, and only those things as far as we know, do experience qualia.
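
As an aside, here’s the sort of explicit model a person or a silicon computer has to run to predict that flight, a minimal drag-free projectile sketch in Python (the speed and angle are made-up illustration values), which shows how much bookkeeping the baseball gets “for free”:

```python
import math

# Drag-free projectile model: the explicit computation a person or
# a silicon computer has to run, which the baseball "runs" simply
# by flying. v0 in m/s, angle in degrees; values are made up.
def flight(v0=45.0, angle_deg=30.0, g=9.81):
    theta = math.radians(angle_deg)
    t_aloft = 2 * v0 * math.sin(theta) / g     # time in the air
    distance = v0 * math.cos(theta) * t_aloft  # horizontal range
    return t_aloft, distance

t, d = flight()
print(f"aloft {t:.2f} s, lands {d:.1f} m away")
```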

One is tempted to paraphrase Feynman’s remark about quantum mechanics.

If you think you know what consciousness is…

That’s a spinning top. It has fixed responses to stimulus.

This view that animals don’t have an inner thought-life is, traditionally, partly a religious one, where humans are the only things with the divine spark and animals are just clever automata made of delicious, or sometimes forbidden, meat.

And I wouldn’t counter-argue that animals necessarily have the complexity of thought-life that we enjoy, but it is there, to some extent. I’ve seen it myself a few times - for example, my little dog loves to pick up stones on the beach, but doesn’t like to get sand in her mouth; time and again, I’ve seen her dig carefully around and under a stone, to try to excavate the beach sand from beneath it. It’s very clear from repeated observation of this behaviour that she hopes the rock will remain floating in the air after all the sand is removed from under it, and that it will thus be possible to pluck it out of the air without ingesting sand.
That might seem far-fetched, but the methodical and careful way in which she attempts to carve away the sand from under the stone is difficult to explain in any other way once you’ve watched it as many times, and in as much detail, as I have.

So she is in some way able to model the world inside her brain and make decisions and take actions based on how she hopes that model will behave. Her model doesn’t account very well for gravity and the structural properties of sand, and she seems to be stuck at that level of understanding, but there is definitely a plan - an observation, an intent, and a carefully executed series of planned actions going on.

Just to be clear, my argument about animals and their inner thought-life, or lack thereof, was not coming from any religious angle at all. And yes, many animals, including your dog I’m sure, are very clever and good at figuring things out and modeling the world to some extent.

But I was positing that what we think of as ‘consciousness’, which you described as ‘internal experience of existence’, and I agreed with, is not something that most animals are capable of. I think self-reflection, asking the big questions of “what are we doing here?” or “why do we do what we do?”, is not possible without the ability to think abstractly, and abstract thought is not possible without language and an inner monologue.

So does everything else, except maybe the decay of unstable fundamental particles. It’s just that some of those things, like a human making a decision, involve too many different parts to be able to accurately model.

Sure. And that’s the difference between a baseball, a spinning top, and animals with a neurological system that develop complicated responses. Or some fancy machines like my laptop. Looking at the extremes doesn’t help us understand the complexity of consciousness, but it does prove it is a phenomenon that emerges from basic principles.

I agree with that. The issue is that we still haven’t figured out the details of what those principles are.

IMHO, based on my life experience as well as my education, having a functioning brain is sufficient, but not necessary. But it’s also my opinion that computers haven’t yet reached the point that what they use to make their calculations is enough to make them conscious. Not that they will never get there, just that they haven’t yet.

This thread reminds me of the following quote:

“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” - Emerson M. Pugh

With that in mind (pardon the pun), I am not even sure it’s possible for a person’s brain to define or describe their brain’s “consciousness.”

To put it into a sloganized form, consciousness is self-representational access to intrinsic properties. (Note: while this is peer-reviewed research, I regrettably haven’t exactly convinced many people of it so far.)

This leaves two questions: how does ‘self-representational access’ work, and what are intrinsic properties? Let’s tackle the second one first. Physics, as in the science, describes quantities in terms of relations—length, for instance, being the relation of a certain thing to a reference, a meter stick. This leaves things rather underspecified, because we don’t know what, in the end, it is that stands in those relations—what the relata are. If you’re not an ontic structural realist, you’ll have to admit some form of ‘intrinsic properties’ that physics is silent about, but that ultimately ground all physical notions.

A panpsychist then would say, well, those intrinsic properties are just in and of themselves experiential, so case closed, mystery solved. That’s always seemed cheap to me: it’s answering the question of how the rabbit got into the hat by stipulating that it’s just always been in there. So I needed a different way to account for the reality of experience.

My proposal for that is what I call the ‘von Neumann mind’: it’s based on the notion of von Neumann’s universal constructor, which is a device that has access to a description of itself in order to enable it to make modified copies of itself. Von Neumann noticed that there’s a certain paradox in the notion of self-replication: if you have a plan of yourself from which to construct a copy, that copy won’t be able to do the same thing, if the plan is a part of it—otherwise, the plan would have to include the plan, which would have to include the plan, and so on. His solution was a bipartite architecture, that both interprets and copies the plan, to end up with a bona fide copy in the end.
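
Von Neumann’s bipartite trick has a neat programming analogue in the quine, a program that prints its own source: one part is a passive description (the “plan”), and the rest both interprets and copies that description. A minimal sketch in Python (the variable name plan is my own, chosen to echo the terminology above):

```python
# A quine: von Neumann's bipartite architecture in miniature.
# `plan` is a passive description; the code below both interprets
# the plan (to rebuild the program text) and copies it verbatim,
# avoiding the plan-must-contain-the-plan regress.
plan = 'plan = %r\nprint(plan %% plan)'
print(plan % plan)
```

Running it prints exactly its own two lines of source, with no need for the plan to contain a copy of itself.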

Now, representational accounts of mental content—on which thoughts are, in some sense, composed of symbolic entities that refer to things beyond themselves, out there in the world—face a similar dilemma, the so-called homunculus fallacy. If there are mental representations, then there ought to be a consumer of these representations—but how does it perform this ‘consumption’? If it needs internal representation of its own, we’re off into a vicious regress. So it turns out that a similar bipartite architecture works to overcome this regress, leading to an ‘active symbol’ that has access to a formal specification of itself.

But this, in the end, doesn’t quite suffice. Formal theories again only capture structure. This leads to issues of undecidability, in the sense of the undecidability of the halting problem (more accurately, Löb’s theorem). To overcome this, the von Neumann construction needs access to what’s effectively a model of itself—it needs the relata, not just the relations. This is what the intrinsic properties provide.
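
For concreteness, here is the classic diagonal argument behind that undecidability, sketched in Python; the halts oracle is hypothetical and provably cannot exist, which is exactly the point:

```python
# The diagonal argument behind halting-problem undecidability.
# `halts` is a hypothetical oracle; no such total function can
# exist, which is the point of the sketch.

def halts(program, argument):
    """Pretend oracle: True iff program(argument) would halt."""
    raise NotImplementedError  # provably impossible to implement

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # the program applied to its own source.
    if halts(program, program):
        while True:   # loop forever if predicted to halt
            pass
    return "halted"   # halt if predicted to loop

# diagonal(diagonal) contradicts the oracle either way, so `halts`
# cannot exist: a formal system can't fully decide its own behaviour.
```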

As a result, the sort of knowledge that is acquired in this way has exactly those properties that are so flummoxing about subjective experience: it can’t be shared, it seems just intrinsically known (the way you just know you have a headache, simply by virtue of having a headache), it is inaccessible to further analysis—just as the proverbial inscrutable what-it’s-like-ness of the redness of red.

So the self-representational access makes mental states mean something to themselves, and the intrinsic properties provide them with a concreteness that mere structure alone lacks. (The absence of the latter, essentially, is why computation alone is insufficient for giving rise to mental properties, although artificial minds are in principle possible.)

Whilst I think I agree with most of this, I don’t see why we would decide that animals don’t have language (or at least, communication and expression) - and the inner monologue thing is debatable, since some humans apparently lack one, but can still think.

I don’t believe my dog is thinking in English inside her head, but there are thought-processes going on in there that consist of ‘what I want’ and ‘what I think I need to do to get that’, and those are not just automatic little processes - they are a form of cognition and awareness.