Are there any good theories on self-awareness?

I’m sure there are, I’d just like to initiate a discussion, if not an actual debate if there’s one to be had on the nature of consciousness, self-awareness, sentience and sapience.

I’m not necessarily a materialist, but this one’s interesting to look at from a materialist view, in that the universe is made of matter and every phenomenon we can behold is an inherent permutation of matter and energy that just happened to come into being (e.g. consciousness). From this perspective, we know that at least a very tiny fraction of the matter in this universe is aware of itself. Us.

I’m aware right now that even being able to ask this question is pretty meta. What are some current (reasonable, but I’ll keep an open mind) ideas or theories on how this aspect of the universe came to be, or on whether we can even hope to understand such a profound and complex manifestation of physics?
ETA: Whoops, meant to put this in GD. Mods, please move it there (or to IMHO) if necessary… thanks!

I’ve reported this post for a forum change, which you can do yourself by clicking the red triangle in the upper right hand corner of your post.

Moved to GD.

You can’t report your own posts; however, you can get a mod’s attention for the thread by reporting another post in the same thread.

Actually, you can report your own posts, IIRC it was changed to be possible some time ago.

There are lots of theories and little consensus about any of them. Philosophers have debated this question for centuries, if not millennia, and it is still hotly debated, at a high level of sophistication, amongst academic philosophers, scientific psychologists and neuroscientists. It is one of the most difficult and complex questions that modern science faces. (And, incidentally, opinions do not line up along disciplinary boundaries. Most contemporary philosophers are materialists; some neuroscientists and psychologists are not.)

Unlike most difficult and complex scientific questions, however, it is also a question that many amateurs with, at most, a superficial acquaintance with the voluminous literature (including quite a number of people who are professional scientists of some sort or another) think they can “solve” with some brilliant insight that, they imagine, none of the thousands of people who have devoted years of their lives to the problem can possibly have thought of.

The upshot is a lot of fruitless debate on the internet.

If you actually want a GQ type answer, this might be a reasonable place to start looking (plenty of science there, as well as philosophy), or this.

If what you want is a “Great Debate,” have fun. I am sure there will be plenty of people willing to give their opinions and argue about them.

ETA: When i started composing this post, the thread was still in GQ.

Part of the problem here is that it’s very difficult to come up with definitions, for a lot of the terms we use around these topics, that objectively hold up. For instance, with regard to intelligence, it used to be thought that if a computer could do certain tasks, like playing chess, as well as or better than a human, it was intelligent; but now we have computers that can consistently defeat the best chess players in the world, and “intelligent” is not how we’d describe them.

By the same token, what does it mean to be self-aware? By certain definitions, my computer or my car is self-aware. It seems like there’s some line that humans have crossed and that, depending on what characteristics you use, perhaps a few other mammals are treading around, but that’s it. But even though dolphins are able to, say, recognize their own reflections, can we meaningfully say whether they are or are not self-aware? So, in the end, we’re stuck with something along the lines of “like us” or “we know it when we see it”. But how do we even judge that?

There’s always the Turing Test: if we cannot linguistically differentiate a person from a computer, then we ought to assume they’re qualitatively identical, that they’re both conscious. But I’m not sure I buy that. We’ve got some systems that are able to model language at some level, but they’re still far from passing that test, and all they’re really doing is recognizing patterns, learning that certain patterns of language call for certain patterns in response; they don’t actually “understand” the language.
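To illustrate what I mean by recognizing patterns without understanding, here’s a toy sketch; the rules and replies are entirely made up, of course:

```python
import re

# Toy pattern-matching "chatbot": input patterns map to canned response
# templates, with nothing resembling understanding. The rules are made up.
RULES = [
    (re.compile(r"\bhow are you\b", re.I), "I'm fine, thanks. And you?"),
    (re.compile(r"\bmy name is (\w+)", re.I), r"Nice to meet you, \1."),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Fill the template from the matched groups and return it.
            return match.expand(template)
    return "Tell me more."

print(reply("Hello, my name is Ada"))   # Nice to meet you, Ada.
```

It can look weirdly conversational for a few exchanges, but there is nothing there that the word “understanding” could apply to.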
Anyway, there are a few different ideas I’ve heard which seem to have some scientific support. The one that probably has the most behind it, though it really isn’t something we can reasonably test yet, is that consciousness is an emergent property of certain types of complex systems, of which the human brain is one. This goes along with how animals with more complex brains seem to be closer to how we’d describe consciousness than others, and it also helps explain how our own consciousness evolved, both as a species and even over our own lives: as our neural connections become more complex, our memories and personalities come into focus. This approach would also seem to imply that we could eventually create artificial consciousness if we could develop the right kind of complexity in hardware and software.

Unfortunately, I’m not really sure how we’d go about proving this sort of concept. We can attempt to create artificial consciousness, but unless we succeed, we won’t know whether we failed because our modeling of the brain is wrong or because the underlying theory is wrong.
Another theory I’ve recently heard, though it was covered only at a high level, is that the brain is at least partially a quantum computer and that consciousness is somehow related to quantum non-locality. It was very fascinating and seemed to fit in with a lot of my own ponderings on the topic, but without much depth on the theory or much knowledge of quantum physics, I can’t say anything about its validity beyond what was said about it.

I first heard about it on this season’s premiere episode of Through the Wormhole, “Is There Life After Death?”, wherein they were trying to figure out what consciousness is. I sort of assume that because it’s a new episode, the information is unlikely to be outdated, but maybe someone else who knows more and saw it can comment on the topic.

Brains are organs that run simulations of reality to predict future events.
Animals that live in groups benefit from predicting the actions of other members of the group.
So the simulations of animals that live in groups should include simulations of other brains.
If the actions of other animals in your group are influenced by THEIR simulations of YOUR brain, you can get more accurate predictions by sometimes including a model of your own brain in your simulation.
When this happens, we call it self-awareness.
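The recursion in those steps can be made concrete with a toy sketch. Everything here — the agents, the biases, the 0.5 weighting — is purely hypothetical, just to show why including a model of your own brain improves the prediction:

```python
# Toy sketch of the idea above: an agent predicts a groupmate's action
# more accurately when its simulation of the groupmate includes the
# groupmate's model of *itself*. All names and numbers are hypothetical.

class Agent:
    def __init__(self, name, bias):
        self.name = name
        self.bias = bias          # simple stand-in for "brain state"
        self.models = {}          # simulations of other agents' brains

    def model(self, other):
        """Store a simulation of another agent's brain."""
        self.models[other.name] = other

    def act(self):
        """An action depends on the agent's own bias plus what it
        predicts the agents it models will do (one level of recursion)."""
        predicted = sum(m.bias for m in self.models.values())
        return self.bias + 0.5 * predicted

a = Agent("A", bias=1.0)
b = Agent("B", bias=2.0)
b.model(a)                           # B's brain contains a model of A

naive_prediction = b.bias            # A ignores B's model of A: 2.0
actual = b.act()                     # 2.0 + 0.5 * 1.0 = 2.5

# Step 4 above: A includes a model of *its own* brain in the simulation,
# because B's behavior is influenced by B's model of A.
self_aware_prediction = b.bias + 0.5 * a.bias
assert abs(actual - self_aware_prediction) < 1e-9
```

The point of the sketch is just the structure: the self-model only pays off once other agents are modeling you.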

Anything by Jean-Claude Van Damme can be considered as pebbles on the road to awareness. Be aware. Be very aware.

I saw that episode of Through the Wormhole (at least it presents some of the front-runners in mainstream thinking, after Morgan makes it all dramatic for us). The quantum one was new to me, and as interesting as I found it, I have this nagging feeling in my gut that the quantum picture isn’t the whole story, however convincing. Until we can formulate a better understanding from as-yet-undiscovered and integrated evidence and theories, the workings of the mind may stay behind that veil. What a world.

Wow. Is there a particular school of thought, or learned person, associated with this idea? I’d love to learn more – your concise account makes sense to me (though it deals more with the “why” than with the “how”.)

I came up with it on my own. It was fallout from work that I’m doing on play theory. However, I doubt I’m the first person to think of it.

I completely agree with this. In addition, the idea of testing whether a computer can linguistically imitate a human being seems to me to suffer from another fatal flaw. People who propose that test always seem to assume that the test will last for only a short time. But if we wanted to establish that a computer’s abilities amounted to self-awareness equivalent to a human’s, we’d have to test the computer for an amount of interaction equivalent to a human lifetime. We’d have to see whether the computer appeared to be experiencing the same things that occur during a full human lifetime: emotional storms, self-doubt, gradually incorporating past events into a structure of adult wisdom, and so forth.

Indeed, that prediction (with feedback and time-based elements) as an element of self-awareness has been part of one of the most interesting theories now being considered.

-Jeff Hawkins, On Intelligence.

It is his memory-prediction framework theory of the brain

Jeff Hawkins has investigated this from both the biological and computer sides and, most importantly, he is showing impressive results: some of the things his theory predicted have already been confirmed.

I think you mean awareness of self-awareness. All living creatures have some level of self-awareness; most are easy to observe reacting to changes in their environment. It’s when you can think about your self-awareness that the idea gets interesting.

Humans can do that, and arguably some of the higher animals can too: perhaps dolphins, whales and primates. Maybe a few more mammals as well.

Don’t fall off the edge into thinking that your own thought is all that can be proven to exist. That is a weak philosophy :wink:

I’ve heard that there is an experiment going on that’s attempting to exactly recreate the human brain with a giant network of computers (to be housed in 3 buildings?) to see whether this exactly mapped-out recreation of a human brain would develop self-awareness/a soul. It would be interesting to see whether the soul is the sum result of, or something that comes along with, a large network made up of a certain minimum number of simple brain functions/memories, or whether it’s something else entirely, something other.

The problem with such accounts is that they run into the so-called homunculus problem: in order to become aware of the simulation, it must in some way be represented to a ‘central observer’ in the mind; but if that’s how you become aware of things, how is this central observer aware of anything? Does he have again a central observer in his mind, and so on?

Why would you think that any of those are necessary consequences of self awareness? They are correlated with it in humans, certainly, but there’s no reason other than anthropomorphism to expect them to be present in non-human self aware beings.

Oh, and there was a recent thread, which may be of interest, that went to some length dealing with the question of whether or not computer self-awareness is possible.

What makes you think that there is a central observer in the theory?

-Jeff Hawkins. On Intelligence.

If there’s nothing that observes the simulation, what purpose does it have? ‘Simulation’ implies projecting different possible scenarios, presumably with the benefit of picking one out as more advantageous than the others, which suggests something ‘looking at’ the simulation to make that choice. True, you can accomplish the same thing without anything looking at it, by simply attaching some sort of weighting function to quantify over possible outcomes – but then, this would not be a conscious process, but merely a mechanical decision engine.
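For what it’s worth, the kind of observer-free “mechanical decision engine” I’m describing is trivial to write down. The scenarios and scores below are made-up stand-ins; the point is that a choice gets made without anything ever “looking at” the simulation:

```python
# A "mechanical decision engine": possible scenarios are simulated and
# scored by a weighting function, and the highest-scoring one is picked.
# No observer anywhere, just functions. All numbers are hypothetical.

def simulate(action):
    """Project an outcome for an action (toy stand-in for a simulation)."""
    outcomes = {"flee": 0.9, "freeze": 0.4, "approach": 0.2}
    return outcomes[action]          # e.g. estimated chance of surviving

def weight(outcome):
    """Score an outcome; nothing 'looks at' it, it's just quantified."""
    return outcome

def decide(actions):
    # Pick the action whose simulated outcome scores highest.
    return max(actions, key=lambda a: weight(simulate(a)))

choice = decide(["flee", "freeze", "approach"])   # "flee"
```

Which is exactly my point: this picks the advantageous scenario, but calling it conscious seems like a stretch.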

It’s a very hard problem to get rid of, and I think most current models fall victim to it in one way or another. It’s also implicit in the quote you provide, where things look some way ‘to the cortex’, or where the brain ‘knows about’ certain things. Of course, the author may be using this as shorthand; whether he is would only be clear from context that’s lacking here.

In general, I don’t think that any model which describes conscious process as things being represented somehow to something, in which there is a distinction between observer and observed, can ultimately work.

It drives motor impulses. Since the world around us unfolds in predictable ways, it’s a very useful survival strategy to perform actions based on what’s likely to happen in the future instead of what has happened in the past.

It’s difficult to grasp this aspect of brain function because we’re highly social animals and we rarely “turn off” the simulation of our own brain state. But if we’re highly focused on a task we can sometimes find ourselves slipping into “automatic pilot”: we do things without thinking about thinking, and are suddenly surprised to discover how much time has passed when we finally “come back to ourselves”.
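A toy version of that prediction-with-feedback loop, with made-up numbers: the brain acts on a predicted next input, then uses the prediction error as feedback to correct its internal model.

```python
# Toy prediction-with-feedback loop, in the spirit of the ideas above:
# act on the predicted next input, then feed the prediction error back
# to update the internal model. All values are hypothetical.

def run(signal, lr=0.5):
    prediction = 0.0
    errors = []
    for observed in signal:
        # (Motor impulses would be driven by `prediction` here.)
        error = observed - prediction
        errors.append(abs(error))
        prediction += lr * error     # feedback: correct the model
    return errors

errors = run([1.0, 1.0, 1.0, 1.0, 1.0])
# For a steady input, the prediction error shrinks at every step --
# the "automatic pilot" regime, where predictions match the world
# and little correction is needed.
```

Nothing in the loop observes anything; it just converges, which is roughly what “feedback” buys you.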

Read Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid or I Am a Strange Loop for a deep, engaging and entertaining look at this topic.

Short answer: Feedback.