Consciousness

And just so with chimpanzees and sign language. They can memorize the rule book and the signs, but they don’t understand the abstract ideas behind the signs. Hence no language and no intellect, something only humans (so far) have been blessed with.

(Shameless plug)
Dr. Mortimer J. Adler has some very valid philosophical ideas on the nature of human consciousness, including what our ideas really are, why they cannot be apprehended directly by our own conscious minds, and the difference between the human brain and those of the lower animals, in his two books Intellect: Mind Over Matter and The Difference of Man and the Difference It Makes. I highly recommend these to anyone interested in the philosophical and scientific aspects of consciousness and the marvelous human brain.

Thanks, Cecil, for a great column.

I would like to think that a good criterion for “thinking” is the ability to come up with an original thought. This is, by definition, impossible for the Chinese black-box example.

Have you ever worked on a difficult problem, and then got a flash of inspiration, and could not explain where that inspiration came from? I’d like to think that this is what separates people from computers.

I have a question for Cecil, or for whoever wants to answer it: wasn’t the original question in Cecil’s column “What is consciousness?”
Don’t you think the subject is NOT “humans vs. computers”?
I mean, this IS an interesting subject, but it’s not what Mr. Jeremy Fields asked.

J. Fields asked, in part, “If consciousness (which we all experience intimately) is merely an epiphenomenon of the mind, which is an epiphenomenon of the brain, then there must be a physical mechanism in the brain that accounts for it.” Cecil could not respond in his mere 600 words, but I’d like a crack at that small part of the larger question.

This is an example of the phrasing of a question limiting the possible answers, which can in many cases make questions unanswerable, can lead to apparent paradoxes, can block roads to truth or greater understanding, etc. This sort of thing often happens in political discussion, religious discussion, etc. (Not that I mean to criticize participants of those pursuits or the questioner in this instance - not at all - but it might be to one’s advantage to be aware that accepting a questioner’s phrasing can set the discussion within an implied or explicit world view which then interferes with fruitful discussion or even leads irrefutably to a predetermined answer. On the other hand, arguing semantics can be just as damaging to fruitful discussion.)

But to the topic at hand: no, there NEED NOT be a “physical mechanism in the brain” that accounts for consciousness, but neither is there a need to resort to explanations such as “soul,” “spirit,” or other higher forms of existence. (Of course, a lack of “need” is not proof on either side of the argument; it just leaves the question open.)

The question as stated does not make room for the existence of “emergent properties” of a system. Emergent properties are behaviors, states, functions, etc. of a system that are possible only when the system is functioning, but which cannot be detected (do not exist?) in the assemblage of parts.

To give an example which I hope is clear, think of an automobile. Automobiles, when functioning properly, can exhibit the emergent property of “locomotion.” (They can drive from place to place.) Leaving out issues of the driver, think simply of a disassembled auto. If I showed you a garage full of working parts sufficient to make up an automobile, but which were not assembled into a functioning system, and asked you to identify which part or parts contained the locomotion, you would recognize that this was a moot question. You can also see that assembling the parts properly can allow the various emergent properties to, well…emerge…again.

Note that this is also a possible answer (to my mind, a good answer) to the question, “What is life?” When the requisite quantities of materials (carbon, oxygen, hydrogen, etc.) are assembled in a sufficiently complex arrangement (proteins, carbohydrates, genes, etc.), they exhibit the ability to metabolize, to interact (physically, chemically, sensorially, behaviorally) with other components of the larger environment, etc. Arrange them one way and you get a platypus; arrange them another way and you get a tree. A few small breakdowns and the cell, or the organ, or the individual, loses that which we call “life” - but that doesn’t mean that any Thing has left. It just means that the system doesn’t exhibit that “life property” any longer.

It could be the same with consciousness. Awareness or mind could be an emergent property of a functioning, sufficiently complex (recursed, or fedback) brain. Less complex brains might exhibit a lesser level of consciousness - and if we accept that we get to stop wondering whether dogs have souls.

So a last question (or two): what level of interaction and “group memory” and feedback and etc. might be necessary to raise a community or a society from what appears now to be essentially unconscious (self) control to some glimmerings of a larger awareness or “mind?” And where can I buy stock in the Internet company looking at mapping that road?

df

It was a wonderful discussion of the difference between humans and computers, but unfortunately Cecil missed the spirit of the question.

It’s not about thinking or computing, it’s about awareness. That’s the thing that separates us from the machines - and possibly from my flies, which constantly beat themselves against the window in the expectation that it will suddenly turn from a solid into a gas.

If I remember my high school biology correctly, a frog operates almost entirely out of its spine and its brain performs few functions. It is a creature that reacts to things, based on a pre-programmed response. Does the frog have awareness? We know it can react, but is it conscious?

So Cecil, how about taking another run at it, and this time define it from an awareness point of view rather than one of simple reactions.

E1skeptic: I suspect Cecil was on vacation and he had his computer answer the question. The computer saw the words consciousness, brain, and mechanism and immediately produced the lengthy answer.

Sven and Ole’s pizza, Grand Marais, MN

Jimmy G. has it right. Cecil nodded, as all people must occasionally, and evaded the question.

Awareness of self–self consciousness–has run up a lot of printing bills without a compelling explanation. This generation’s version of Aquinas had no answer, either, and we are left distraught.

My, my. You cannot see any spiritual aspect of consciousness? You think your consciousness is just electricity running through circuits in your brain with some chemical/neurotransmitter aspect thrown in for emotional instability? I think the VOLUME of near-death experiences, intuition, and people’s ability to do extraordinary mental things shows us the human brain is more than just a damn computer. Why are our brains able to waste time with such abstract concepts as this? I see a lot more than just a computer to sustain the body. See you in hell.

All this speculation and evasiveness shows our total ignorance on the subject.

Alas, how can we show the existence of consciousness objectively?



I had lost track of Doug Lenat in recent years, but one thing I was sure of was that his original 10-year deadline wouldn’t be met. What I didn’t expect was that he would be able to get money to keep going. I like the fact that he’s managed to make the deadline recede so much faster than at clock speed. Someone less full of it would probably be a constant 10 years from getting finished.

I wonder if he is still upset at having his reputation among people who aren’t dumb enough to fund “AI research” be defined by the entry for microLenat in “The Hacker’s Dictionary”. I’ve always felt that this was too lenient, and that the real term should be the femtoLenat, which is the amount of bogosity that is instantly fatal to the reputation of anyone not in AI…

Nickrz writes:

I was under the impression that we have scientifically shown that apes ARE able to represent abstract ideas using sign language. The three most famous of these apes are Koko (a gorilla), Chantek (an orangutan), and Washoe (a chimpanzee). All of them have demonstrated a rich sign language vocabulary and have used this vocabulary to communicate reasonably complex thoughts. If your assertion is correct, these apes are, at least, smart enough to fool hundreds of linguistic experts… Maybe you’re thinking of birds…

Have Cecil ask Al Gore - he invented consciousness, didn’t he?

Keeves wrote:

This criterion would rule out more than 90% of humans I’ve met… [grin]

richard younkin wrote:

Actually, quantum mechanics probably plays a more important role in consciousness than do the electrical impulses. For a couple of great books on the subject of consciousness (and other ‘neat stuff’) check out Roger Penrose’s “The Emperor’s New Mind” and “Shadows of the Mind”. Very readable!

Still trying to fathom why Cecil answered the “what is consciousness?” question with the “why computers aren’t conscious” answer???

He must be suffering from non sequitous avoidicus…

jwg: GOOD one! (Now, can anyone prove Al ever took advantage of his invention?)

JoeyB: Take a look at the portion of Cecil’s article I quoted. If you agree with the logic therein, apply it to non-human great apes. Chimps can learn symbols that have a direct cause and effect relationship, that is, they can learn to associate a sign for food with the action of being fed, but no animal other than human has ever been able to communicate indirect (abstract) concepts using symbols they learned directly. For instance, a chimp might learn the sign for food, and can be taught that displaying that sign in certain situations will result in its being fed, but that does not mean it understands the abstract concept of hunger, and it certainly does not mean it will ever extrapolate that sign into another abstract statement like “I hunger for knowledge,” because such concepts cannot be taught to it using a direct cause-effect relationship. “Food” as such is not a sign in front of a concept - it’s the “concept” itself. Humans don’t have to work that way, because we can use symbols as abstracts. Such is the essence of a true language and intellect - computers don’t have this ability, and neither does any other animal save man. “Reasonably complex thoughts” do not issue from the flashcards or fingers of gorillas.
If you don’t agree with the quote, then I guess this argument is moot.

As for Cecil’s avoiding the question, hey give him a break. I’m sure he has a 6,000-word treatise he can send you that will explain everything. :slight_smile:

The discussion seems to be wandering from the original question, which if I remember correctly is: what is Consciousness? Manipulation of abstractions is not a prerequisite for Awareness (or more precisely, for recognition of Self and one’s own mental, as well as physical, existence), and near-death experiences and intuition are red herrings. Quantum mechanics (like “Society” in the opposite direction) is probably involved, but at a level of analysis too far removed to concern us here (that is, the biochemistry is much more immediate).

Development and maintenance of Self-Awareness, or Consciousness, is one of the major things which separates simple minds from complex minds. It is explainable in terms of simple mental processes layered recursively on top of each other until they become complex and the mental fiction “This pattern is remembered as ‘I’ because it is always here and can be traced over time; all else is not” develops on top of the unavoidable physical truth, “This is the body I feel and control, and that’s not.” And yes, that’s all that’s needed to explain it (though I point out that Occam’s Razor does not generate proof positive).

For those with the fortitude, an explanation follows. I may not make myself clear, but I hope that it’ll at least give an idea. It’s not rocket science, and it doesn’t require magic or Goddess to explain.

I earlier proposed that “Consciousness” or “Awareness” is not a Thing but rather a state, like “aliveness” or “happiness.” It is a name given to some of the emergent properties of a properly-functioning, sufficiently-complex Information Processing System or “Mind” at work. Historically these mind states and functions have been associated with a Central Nervous System (biological brain and spinal cord, and possibly more), but there is no reason to think they will always be so limited (Cecil’s point). It could also be said that we recognize certain states and the functions/processes/mechanisms that produce them as “consciousness,” but deny such categorization to other states et al.

So: what sort of mechanisms might be important to Consciousness? At a building-block level I would say Memory, Sensory (or other) Inputs, and some form of Decision-Making or Comparative process.

Okay, great, so our immune systems are conscious? They demonstrate all of those abilities.

Well, obviously our immune systems are not Aware within the meaning of the English word, so a definition of “Consciousness” requires some higher order or orders of function and complex process. Examples incorporating the basic building blocks could be Prediction and Recursion. (Prediction is the comparison of (i) changing data over time and/or (ii) new sets of data against learned data in order to predict an anticipated state. Examples might be (i) predicting the flight path and landing zone of a baseball or (ii) recognizing that this man one has never met before is about to lose his temper. Recursion would be those processes by which a Mind checks itself and its “outputs” (decisions, behaviors, learnings, predictions) against not only new data and predictions but against desired states, etc. If you will, it is basic to the process of the “Mental Editor.”)

The level and complexity of functioning are critical to our characterization of any mind and its outputs, and to whether we would call that mind “Conscious.” A basic mind (say, in those flies mentioned earlier) takes Inputs and generates, or incorporates them in, a transitory State which then (usually) engenders an Action (such as flying towards the light/green/warmth). (No “intention” necessarily involved.) When the actions generated by a simple mind are prevented (the fly hits the window), that mind doesn’t have the complexity to even remember that it just tried that behavior, let alone to learn to recognize that there is such a thing as “glass.” Presumably, the Memory or Comparative powers necessary are lacking, and it just takes its new set of sensory inputs and tries again.
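(For the programmers in the audience, here is a toy sketch in Python of that difference - purely illustrative, and certainly not a model of real fly neurology. The agent names and behaviors are my own invention; the point is only that adding a single Memory building block changes what the system can do.)

```python
# Toy illustration: a memoryless reactive agent retries the same blocked
# action forever, while an agent with even one slot of memory can rule
# the failed action out.

def reactive_fly(stimulus):
    """Maps input directly to action; no state survives between calls."""
    if stimulus == "light":
        return "fly toward light"   # hits the window, every single time
    return "wander"

class RememberingAgent:
    def __init__(self):
        self.failed_actions = set()   # the minimal Memory building block

    def act(self, stimulus):
        action = reactive_fly(stimulus)      # same raw impulse...
        if action in self.failed_actions:    # ...checked against memory
            return "try something else"
        return action

    def learn(self, action, succeeded):
        if not succeeded:
            self.failed_actions.add(action)  # "glass exists" is now learnable

fly = RememberingAgent()
print(fly.act("light"))                  # fly toward light
fly.learn("fly toward light", False)     # thump
print(fly.act("light"))                  # try something else
```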

And below even this level we could argue pretty successfully that plants don’t have Minds at all, for despite the fact that they can detect infections, seal off damage, grow towards water or light, determine what season it is, etc., there is no central-clearing or decision-making point with alterable States dedicated to producing “plant behaviors.” Plants don’t really learn or remember or decide; their behaviors are a result of their natural pre-determined functioning, which reacts and/or produces different future “behaviors” more due to accident or gross physical changes – or, at the intergenerational level, due to evolutionary changes – than due to “experience.”

In contrast a relatively complex mind NOT ONLY interprets Input (say, modulated sounds or finger patterns - “speech”), compares it to previously encountered inputs (interprets the consistent past contextual meaning of said “speech”), generates a contextually-associated “understanding” of the intended meaning of the speech, makes a whole series of second-and-higher-level associations (that’s my wife’s voice; she was happy this morning; in the past when she used that tone it meant more than she is now saying with words; etc.), and makes predictions (if I continue to act this way the outcome may be similar to the previous times) – but it is a mind that has become SELF-Aware. (Thus the ‘I’ in “if I continue to act this way…”) It’s about the ‘I’, not the speech - and not abstractedness or the degree of separation between the intended communicative concept and a physical object or process. (We could argue that almost all speech includes varying levels of abstraction - even just at the grammatical level.)

The complex mind recognizes then a sense of Self vs. Other in a detailed way, over and through time and in a myriad of circumstances. This is where Recursion starts to come into play. The truly complex mind can not only compare the current (complex and evolving) present with the past and infer associations from those experiences, can not only analyze the evolving present for subtle variations and opportunities to move events towards a desired future outcome or state and then further evaluate the results of interim actions in terms of the positive or negative influence on the desired outcomes (this could almost apply to a frisbee dog gauging the flight of its target and adjusting for the wind as much as it could apply to a social encounter), BUT it can further make the (possibly false) distinction between outside events and internal states and can modulate itself as a tool for behavior control. (“I’m getting angry. I can’t afford that. I must control myself. Think of something pleasant… It’s not working; okay, depart this vicinity. But this is my boss; I can’t just walk out. Okay, think of the long term benefits. You can’t afford to be fired now. Just get through this and you will…etc.” In this example, the long term benefits do not exist, other than as a predicted or an imaginative occurrence within the mind. They exist only “in” an assumed future and may or may not ever come to pass, yet they can have great power over the state of the Mind involved.)

Basically that’s it: Consciousness of Self just requires a mind complex enough to remember and recognize its own separateness and its own continued existence over time. And of course if Awareness separates very simple minds from very complex minds, there is also a range on the continuum from less-complex to more-complex. I’d wager rats are Self-Aware on a basic level, dogs more so, humans (perhaps) most of all. Elephants? Of course. Human (or any other) infants? At first not really, but as their brains and minds develop, yes, within the limits of their species. So while the question may be open as to the abstractive ability of non-human ape minds, I have seen no one seriously propose that they are not Aware. Evidence strongly suggests that elephants and apes (at least) recognize and mourn Death - and while confusion at a sudden and complete Difference is understandable, mourning req

Near death experiences (NDEs) and out-of-body experiences (OBEs) are indeed numerous but their occurrence in large numbers proves nothing. There are neurological processes that can produce all of the subjective experiences reported in NDE and OBE testimonials. I have experienced OBEs myself during epileptic seizures.
That having been said, I have to say that it’s a bit odd to hear people inevitably start talking about computers when discussing consciousness. I think it stems from the frustration of getting nowhere for centuries with these questions. We want to talk about something we can understand. Computers do something that is reminiscent of conscious thought, maybe, and we understand computers pretty well. But we don’t understand our own minds at all.

Our understanding of mental processes is extremely crude. We know how atoms behave on a microscopic level, and we have a pretty good grasp of the cellular mechanisms that are involved in protein interactions with neurotransmitters and sodium/potassium channels. We know the microanatomy of neurons and we’ve mostly figured out how they behave, we know how they are arranged in different brain structures, and we know how those structures are all connected with each other and the rest of the body. In theory we have all the knowledge we would need to explain consciousness, but we can’t. It’s a bit like studying an immense clockwork machine, with a good understanding of what the individual gears, levers, and springs are and how they work. When you stare at all the little gears turning around, you can see what each little gear is doing. On that level, you understand what is going on. But you can’t understand how the entire clockwork can tell time, or predict eclipses, or compute logarithms, or play chess, or whatever the machine does. (If you’ve ever tried to fix a mechanical problem in your broken VCR by yourself, you know the feeling.) So people turn to computers in their philosophizing about consciousness, because all the talk about the brain has gotten us nowhere for so long.

But I can tell you from having studied computational neuroscience for a year, and from being a computer programmer now - a computer is absolutely nothing like a brain. Studying the ways computers work will give no insight into the way the brain works.

Which is not to say that using computers to simulate the brain has no merits. Although computers by their very design are far more suited to other tasks than to simulating neural networks, you can get some surprising behavior from computers when you order them to run these simulations. I myself wrote a neural network applet in Java that can perform optical character recognition; you can draw a digit from 0 to 9 with your mouse and it will (hopefully) tell you what digit you drew. This is something that usually is best done by brains, not computers; but by running a simulation you can get a computer to do it. Does that mean we know how the brain recognizes letters and numbers? We have theories of how the brain does this, but we don’t really know. I honestly don’t even know how my simulation program is doing it. So we can run a successful simulation of something we don’t understand, but then we don’t understand how the simulation is working. We’ve just moved the problem. And we’re still no closer to understanding all the aspects of consciousness.
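(For concreteness, here is a minimal sketch of that kind of network - not the applet itself, just the same idea in outline: a one-hidden-layer network trained by gradient descent on flattened pixel grids. The layer sizes are assumptions for illustration; any grid works.)

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIX, N_HID, N_OUT = 28 * 28, 64, 10   # assumed sizes: pixels, hidden units, digits

W1 = rng.normal(0.0, 0.1, (N_PIX, N_HID))
W2 = rng.normal(0.0, 0.1, (N_HID, N_OUT))

def forward(x):
    """x is a flattened drawing; returns hidden activations and class probabilities."""
    h = np.tanh(x @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())        # softmax over the ten digits
    return h, e / e.sum()

def train_step(x, label, lr=0.1):
    """One step of backpropagation for cross-entropy loss."""
    global W1, W2
    h, p = forward(x)
    grad_logits = p.copy()
    grad_logits[label] -= 1.0                    # dLoss/dlogits for cross-entropy
    grad_h = (W2 @ grad_logits) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * np.outer(h, grad_logits)
    W1 -= lr * np.outer(x, grad_h)

# After many (drawing, digit) examples, forward(x)[1].argmax() is the guess -
# and, as noted above, inspecting the trained weights explains nothing.
```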

One good example is memory. Computer memory is easy to understand. It’s a collection of bits. One bit can either be on or off, and there are a fixed number of them. Fill them all up, and there is no more memory. A computer will have no more difficult a time storing a million random numbers than it will storing a million zeroes. (That’s obviously without compression technology.)
Of course, if I ask you to memorize 100 digits of the decimal expansion of both pi and 1/3, you’re likely to stumble on pi but you’ll retain the 0.333333… with no problem. No matter how stupid someone is, he isn’t going to memorize each “3” separately - people adaptively notice patterns and take advantage of them when the opportunity presents itself. Computers don’t do this by default; they have to be programmed to recognize patterns, and even then they can only recognize the specific patterns they’ve been programmed to search for.
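(You can watch a computer pull off the machine version of this trick. A quick sketch using Python’s standard zlib - the random bytes merely stand in for the patternless digits of pi, which is an assumption for illustration, since pi’s digits are not literally random:)

```python
import os
import zlib

million_threes = b"3" * 1_000_000        # like memorizing 1/3 = 0.333...
million_random = os.urandom(1_000_000)   # stands in for patternless digits

# Uncompressed, both cost exactly a million bytes. Compressed, the
# pattern collapses to almost nothing while the random data barely shrinks.
print(len(zlib.compress(million_threes)))  # on the order of 1 KB
print(len(zlib.compress(million_random)))  # still roughly 1 MB
```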
But the computer metaphor still pervades people’s thinking about memory, and people have an impression of “brain cells filling up” the way computer memory fills up. The impression I got from studying neuroscience and artificial neural networks is that a single memory doesn’t simply “park itself inside a neuron”, the way a single piece of information resides within an individual computer bit. Your kindergarten teacher’s name is not being “remembered” by one single cell. My guess is that a single memory is actually “smeared” across millions of different neurons, and each neuron has thousands of synapses which are the actual locations of the changes that occur to “store” that memory. Everything is very distributed. Another single memory might be “stored” across this same field of millions of neurons. In fact lots and lots of information can be “remembered” by the network before recall begins to suffer. Information is stored in the network as a whole, and not individually in single neurons. It is extremely unlike computer memory.
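(The classic toy illustration of this “smearing” is a Hopfield network, which fits in a few lines. This is only a sketch of the distributed-storage idea, not a claim about real cortex: every stored pattern adjusts every synapse a little, several memories share the very same weights, and a noisy cue settles back toward the nearest stored pattern.)

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
memories = rng.choice([-1, 1], size=(3, N))   # three patterns to store

# Hebbian learning: each memory is added into the WHOLE weight matrix.
W = sum(np.outer(m, m) for m in memories) / N
np.fill_diagonal(W, 0)                        # no self-connections

def recall(probe, steps=10):
    """Let the network settle from a noisy cue toward a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

noisy = memories[0].copy()
noisy[:20] *= -1                              # corrupt 20% of the cue
print(np.array_equal(recall(noisy), memories[0]))  # typically True

# No single weight "contains" memory 0; zero any one out and recall survives.
```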

Probably the most fundamental reason why studying the workings of computers doesn’t provide insight into the workings of the brain is that whereas brains have evolved, computers (like all machines) have been designed. When something is designed, much care is (or should be) taken to avoid “hacks”. A hack is something that arguably works, but where it isn’t obvious why it works or what conditions will make it stop working. A hack is usually difficult for other people to understand - and therefore hard to fix or to build stuff onto. If you’re debugging software with lots of hacks in it, you’ll go nuts. There are standards that people are expected to follow in both computer design and software design, and deviating a lot from those standards is strongly frowned upon. The purpose of the standards is to make things as simple and easy to understand as possible for everybody.
If you ever take a neuroscience course and study the anatomy of the brain, you will quickly realize that it is full of hacks! There are redundant systems all over the place. Division of labor between different parts is spotty at best, and at worst can seem to make no sense at all. The brain works as well as natural selection has forced it to, but from the largest scales to the smallest, there are thousands of needless complications that are the result of various evolutionary accidents. Natural selection simply provides no obvious incentive for neuroanatomy to be simple and easy to understand. So neuroscience courses - and medical courses in general - are hard to pass.

Still, neuroscientists and philosophers have both been searching for the neuronal correlates of consciousness, and recently we have been able to figure some things out. For example, you can only give your attention to one thought at a time, and attention is most likely controlled by the thalamus. The thalamus is the central relay station of the brain. All network traffic between the cortex and the rest of the body goes through it.
The idea of consciousness as a single indivisible entity is embodied in the old philosophical idea of a “homunculus”. The word homunculus is Latin for “little man”. The movie “Men In Black” had a good illustration of a homunculus. Remember that scene where the dude’s head opens up and there’s a little bald guy insi

The thing that always bugged me about Searle’s argument is its assumption that syntax is all there is to conversation. A bit of thought (or observation during your next phone call) will reveal how lame this idea is. Those of us involved in education have observed how everyone brings a theory of the world with them to every conversation and interprets everything they see in terms of that theory. Changing the theory, the object of education, is extremely difficult in part because some of the underlying assumptions are so deeply buried that they are difficult to identify even by people who have managed to more or less excise them from their own thinking.

I suspect it is impossible for a rigid set of rules to encode responses of absolute rectitude, as that would necessarily entail encoding the civilization within which appropriate responses are embedded. As that civilization is always in flux, such rules would necessarily be provisional and therefore unsusceptible to being codified. A corollary, of course, is that a truly conscious machine would probably have to be raised, rather than programmed, integrating experience into its reasoning patterns and interacting from the very beginning with other conscious beings.

Second point: I find it interesting that the response to this question focuses strictly on the attempts to implement consciousness via programming (or, actually, to implement intelligence, Turing’s point, and something which is not exactly the same thing as consciousness). One could wish for a commentary on another perspective on consciousness, that of Princeton psychologist Julian Jaynes in “The Origin of Consciousness in the Breakdown of the Bicameral Mind.” I don’t know if I exactly buy all of Jaynes’ argument, but it is quite interesting and well argued and speaks more clearly of human rather than machine consciousness.

Jaynes essentially argues that consciousness in humans is relatively recent (~5000 years old) and derives from a degradation in the division between left and right hemispheres of the brain – essentially a reprogramming of human behaviour under the influence of environmental pressures. I thought it was particularly interesting that this breakdown approximately coincided with a change in the nature of warfare in the same part of the world, noted by John Keegan in “A History of Warfare.” There’s more to life than computer science. There’s more to consciousness as well.


The beauty of the universe consists not only of unity in variety but also of variety in unity.
– Umberto Eco, The Name of the Rose

Thank you, David, for your interesting and voluminous reply on consciousness. You mention that consciousness is in fact self-awareness and that a sufficiently complex information processing system (say, a mind) may produce this state.

Then you go on to discuss what in fact concerns intelligence, not necessarily consciousness, but you finish, interestingly, by again stating that some animals such as elephants and apes might have self-awareness too.

I want to first address the issue of animal self-awareness and then come back to consciousness in general.

I vaguely remember an article about an experiment with apes and a mirror. Some tests would be done to see if the ape recognized his image in the mirror as himself or as some other member of the same species.
It is clear that most animals recognize members of their own species. Of course, they need to in order to find a partner. But birds, for example, will mistake their image in a mirror for some other bird. Many people with a pet bird hang a mirror in its cage so that the bird doesn’t feel alone. This works pretty well. Apes, however, recognize themselves. The article goes on to argue that apes therefore are self-aware.

The simplest system that would recognize itself in the mirror would be a simple robot with visual capabilities, that could recognize its own shape from any angle and could verify its own movements. So, is self-recognition-in-mirrors the same as self-awareness?

This requires a thorough examination of the concept of consciousness itself.
We know (or rather we have defined) that people are conscious, and that ALL people are conscious (except maybe certain psychologically impaired or very young people, but let’s not go into that). To know what consciousness is (or any concept), it must either be defined (in which case there is no discussion: consciousness is a unique property of human beings and that’s it) or one must be able to classify all objects as having this property or not. To do this, a classification method must be defined.

Searle tries to define this method by checking the output of a system, by conversation. The checking party itself is a human being, not the most objective classifier known. Not only is the deciding party unreliable, the method demands that the tested object produce output. This means that any person unable to produce output due, for example, to paralysis cannot be conscious. Obviously, this is not correct. Furthermore, any system that does produce output would have to be understood by the classifier. If the tested object speaks some language no one understands, does this mean he/she/it is not conscious?

Other people try to detect consciousness by observing the system itself - for human beings, the brain or the brain functions. As David argues, these processes must be sufficiently complex to produce self-awareness. But why? Is complexity a prerequisite for self-awareness? There is no evidence whatsoever that leads one to believe this.
An ant colony is a very complex system, but is it self-aware? Douglas R. Hofstadter, in his book “Gödel, Escher, Bach” (a very interesting book, by the way), raises the same question.

I’d argue that complexity is not necessary for consciousness. But that leaves no possibility I can think of to test for self-awareness - a somewhat unsatisfactory, however interesting, conclusion.

So let’s focus on methods evaluating the output of a system. Let me put an example forward that produces understandable output. I could write a simple computer program that to any input invariably produces the output: “I am aware of myself”. The program does not show very much creativity, but I argue that creativity is not necessary for consciousness. Neither is it intelligent. It just states that it is aware of itself. Now, looking only at the output, does this imply self-awareness? The message is clear enough: “I am aware!”, so why doubt? The system produces the same output as an uncreative, unintelligent and unwilling person.
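(In full, for the record - this really is the whole program, which is exactly the point about judging by output alone:)

```python
# The entire "self-aware" system. Its output is indistinguishable from
# a sincere report of awareness, which is the problem with output-only tests.
while True:
    input()                        # accept absolutely any input...
    print("I am aware of myself")  # ...and invariably claim awareness
```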

Sander writes:

and

and

Consciousness and self awareness are probably continuums. Certainly, the ability to recognize oneself in a mirror is on the lower end of this continuum, but it probably does not match what most of us mean by “self-aware”. When most humans see themselves in a mirror they recognize that they are an individual, with independent thoughts. They are able to contemplate what it means to exist and to interact with their environment. Your simple robot certainly would not have thoughts about individuality or self-ness. The ape is probably somewhere in between (contrary to what Nickrz will allow himself to believe).

David said that a sufficiently-complex Information Processing System is apparently requisite for self-awareness. He did not say that complex systems are self-aware. However, I take your point: Consciousness may not be a function of complexity.

Well, in this example, you told it what to say, so that’s not valid. However, if it arrived at that conclusion on its own, then I tend to agree - why doubt?

I agree with David on most everything he said, except that he indirectly connected consciousness and self-awareness. I think the two are possibly independent. Imagine a world where there is only one individual. This individual could be conscious, by all the definitions that we care to invent, yet he might not be self-aware because to acknowledge the self you must also acknowledge another. Without another, he might have no concept of self.

I’m not a very religious person. But I believe we are all more than the sum of our parts. Last year, a group of scientists concluded, after a 10-year study on consciousness, that they were completely stumped. They were no closer to an answer than when they began the experiment. Maybe there are things we are not intended to know. Consciousness, and even life, are beyond anything we can imagine or resolve. Let’s try to understand, and live with, the universe around us first. Then we can deal with the questions that live on the edge. Leave the philosophical questions to the guys who hang out on mountain tops for now. We have a lot more to understand before we get to the BIG questions.