What Is Consciousness?

I think one way to identify consciousness from the outside is whether an individual creature makes its own determination of how to react in a given situation, as opposed to all members of the species using a similar response. Bees have the ability to communicate information through their dances, but they all do the same dance. I’d call that instinct. One of our cats will take a step backward and allow you access to refill the food bowl; the other two will let you pour food over their heads if they happen to be eating at the time. Cat #1 is definitely more intelligent, but the fact that they respond to the same situation in individualized ways tells me they are conscious.

So what are the criteria of consciousness? Some items mentioned above:

Awareness - the ability to evaluate one’s environment

Communication - a structured system for deliberately exchanging specific information

Anticipation - the ability to act on information whose reference is outside of the immediate environment

Interaction - the knowledge that others will receive your communication and act in accordance with it

Adaptability - the ability to deliberately alter one’s environment in order to improve survivability

Crane

In reviewing #22, it occurs to me that the flight computer in a commercial airliner can meet all of the criteria.

So, what’s missing?

Crane

A flight computer is not self aware.

I agree.

So, what criterion could be added to the list to correct the error?

Crane

Change awareness to self awareness. I don’t think awareness, as in the ability to evaluate the environment, really demonstrates consciousness, since basically every living thing already does this to one degree or another.

Contrary to some posts above, I think that awareness is mostly a red herring when it comes to consciousness. The reason is that you can be aware of some things (in the sense that their presence modifies your behaviour) without there being any conscious experience associated with them. Taking off your jacket clearly shows you’re aware of the temperature, but it’s not necessarily the case that you have any feeling of heat you’re currently attending to consciously. Driving down a long, familiar stretch of road, you’re clearly aware of what you’re doing (you’re keeping on the road, for one, and making the right turns), but little of this needs to enter your conscious experience (most of us have experienced sudden confusion about what point of the route we’re at: did I already pass the junction with the big red house, or is that yet to come?).

I’ve come up with (so far) three necessary elements of what I’d call conscious experience, at least human conscious experience as we know it (this list is very, very provisional):

[ol]
[li]Phenomenality[/li]
[li]Intentionality[/li]
[li]Universality[/li]
[/ol]

By phenomenality, I mean the subjective feeling of there being ‘something it is like’ to have a certain experience, say the ineffable redness of red, or the painfulness of pain; philosophers often use the term ‘qualia’ to describe these sensations or ‘raw feels’. Intentionality is the property of most (if not all) mental states of being directed at a certain object: when I see a red rose, my experience is one of the rose; when I think about a friend, my thoughts are directed at that friend; and so on. Some people have challenged the notion that all mental content has some degree of intentionality, but honestly, I’ve yet to be convinced.

The final point on the list is perhaps less familiar to those who know their way around the philosophy of mind: by universality, I mean merely the (unique, I believe) capacity of human minds to consciously manipulate arbitrary chains of symbols in a rule-bound way. This gives us our language capacity, for one, but also accounts for our abilities in mathematics and, more generally, for forming abstract concepts and manipulating them. The term comes from theoretical computer science: a machine is computationally universal if it can emulate any other machine; the first universal machines were introduced, along with the terminology, by Alan Turing, and were essentially designed to manipulate symbols on a long piece of tape. A universal computer has the capacity to compute anything that can be computed at all; this includes, for instance, simulating physical processes. Thus, I think universality is behind our capacity for grasping the physical world.
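
To make the computer-science sense of ‘universality’ a bit more concrete, here is a minimal sketch of a Turing-style machine in Python. The machine, its tape contents, and its rule table are all invented for the example; the point is only that a fixed set of rules reading and writing symbols on a tape is all the machinery involved.

[code]
# Minimal sketch of a single-tape Turing machine (illustrative only).
# The states, symbols and rules below are made up: this particular
# machine just flips every bit on its tape and then halts.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run a transition table until the machine reaches the 'halt' state."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Rule table: (state, symbol read) -> (next state, symbol to write, move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine("10110", flip_bits))   # -> 01001_
[/code]

Turing’s result is that one fixed machine of this kind, given a suitable rule table as part of its input, can emulate any other such machine; that is the sense in which a single device can be ‘universal’.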

A contender for a fourth item on the list would surely be the self itself, or our sense thereof. But I’m not really sure that this is an independent entry: basically, I think of the self as being, in some sense, a necessary construct, woven from the other three. Universality, I think, may be the key: it gives us the power to tell stories about ourselves.

Intentionality, which for instance represents concepts or objects in the world, seems to make sense only if there is some user of these representations, some entity that they represent things to. But if we postulate some such entity, we run right into the homunculus problem: if there is a separate sort of agent in the head to whom these things are represented, then how does this representation work in its ‘head’? Is there another little homunculus, and another, and another? So I think that, in a certain sense, we have to make do with the representations in themselves, with the self just being implicit. Likewise with phenomenality: if an experience feels like something, then there has to be someone to whom that experience feels that way. So in that sense, there can be no free-floating phenomenality; instead, it seems to imply some kind of ‘self’.

This is not how “adaptability” is typically used; it more often means modifying the actor’s own behavior patterns to optimize its ability to survive and thrive in a changing environment.

Survival instinct. The airplane computer’s own existence is not imperiled, in that the individual program is non-unique and can be replicated ad infinitum. With biological organisms, even cloning is not known to be a means of exact replication.

Survival instinct is easily programmed into computers and easily removed from human consciousness. A fire control computer has a high level of survival instinct. A suicide bomber does not.

The computers developed for driving cars (not just navigation) are highly adaptive and are able to generalize in order to provide unique responses to unfamiliar situations. Their responses are not replicated ad infinitum.

The equivalent in humans seems to be that we are all the same brand of computer, loaded with unique cultural operating systems and running different software.

Crane

Self-awareness nonetheless originates in survival instinct. A nightcrawler will squirm away from you very quickly because its primary concern is to not die. It has pretty negligible “consciousness”, but it has enough to be aware that it does not want to die.

Suicidal people have mental/emotional issues: the instinct is still there, it has just been subverted or suppressed. Note that in modern times, most suicide bombers are not concerned about the destruction of their bodies, because their immortal soul will soldier on in the hereafter. This is where we see the “soul” idea coming from: we really do not want to cease to exist, so we invent this ethereal thing that lets us cheat the reaper.

But their object code is the same. Their responses are not their programming, only evidence that it is working. If you put two different cars in the same situation, with the same programming, they will calculate and effect the exact same response, every time. Adaptability is not an indication of uniqueness.

No, we are all running the same software. The data upon which each of us base our calculations will naturally differ slightly, but the code is the same.

If you touch a hot stove and pull your hand away, is your hand aware in any sense?

My old Saturn had a computerized transmission that learned my driving habits. Two cars with the same base programming which had been driven by two different people would respond differently in the same situation. There are plenty of programs which learn without changing their object code.
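
For what it’s worth, the distinction is easy to show in code. The sketch below is purely illustrative (the shift-point logic and the numbers are invented, not anything Saturn actually did): the program’s code never changes, but the parameter it stores does, so two copies exposed to different drivers end up behaving differently.

[code]
# Toy sketch of a program that "learns" by updating stored data rather
# than altering its own code. The shift-point logic and the numbers are
# invented purely for illustration.

class AdaptiveTransmission:
    def __init__(self):
        self.shift_rpm = 2500.0            # learned parameter: data, not code

    def observe(self, driver_shift_rpm):
        # Nudge the stored shift point toward how this driver actually drives.
        self.shift_rpm += 0.1 * (driver_shift_rpm - self.shift_rpm)

    def should_upshift(self, rpm):
        return rpm >= self.shift_rpm

gentle, aggressive = AdaptiveTransmission(), AdaptiveTransmission()
for _ in range(50):
    gentle.observe(2200)                   # one driver shifts early...
    aggressive.observe(3400)               # ...the other revs it out

# Same object code, different learned data, different behaviour:
print(gentle.should_upshift(2800))         # True
print(aggressive.should_upshift(2800))     # False
[/code]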

? Our software depends a lot on our brain wiring, which depends on our genes. Not to mention that our brains change as we grow and experience things. I wrote self-modifying code in high school (I’m not proud of it), but animals evolved self-modifying code long before that.
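
For contrast, here is roughly what ‘self-modifying code’ looks like in a toy, high-level form. It’s only an analogy (classic self-modifying code patched machine instructions in place; this Python sketch just regenerates and reinstalls one of its own functions at runtime):

[code]
# Toy sketch of self-modifying code: the program generates new source
# text for one of its own functions and installs it while running.
# Purely illustrative; no claim about how real nervous systems do it.

source = "def react(stimulus):\n    return 'approach'\n"
exec(source)                  # defines react()
print(react("flame"))         # approach

# After a bad experience, the program rewrites its own reaction code.
source = ("def react(stimulus):\n"
          "    return 'avoid' if stimulus == 'flame' else 'approach'\n")
exec(source)                  # replaces react() with the new version
print(react("flame"))         # avoid
[/code]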

I agree. The difference between my consciousness and my subconsciousness is exactly self awareness. When I try to solve an anagram consciously, I have access to all the steps I go through. When I solve it subconsciously, I have no access to this.

It seems like it would be difficult to gain much value from this one, whether correct or incorrect.

Could we take some AI and test for this? Seems tricky.

In Gödel, Escher, Bach, mentioned upthread, Achilles and the Tortoise have a conversation with Aunt Hillary, a sentient anthill. It is emphasized that individual ants are not sentient, and Aunt Hillary, being rather squeamish, prefers not to think about them at all, even when she has her friend Dr. Anteater come by to do some surgery for the sake of her health.

Kind of like the Borg Collective, hey? A shared consciousness of an entire species.

I can see a corollary with the Internet, one example of the collective consciousness of the human species. Take memes, for example. They spread like wildfire, and those who get them, get them, often sharing the same sarcastic sense of humor. We get it intuitively because we can relate to that mental state, that consciousness, that created the meme in the first place. The younger generation often recognizes memes, or the humour therein, quite readily while the older generation may not, thereby allowing the younger generation to have one up on the older. It is a kind of evolution designed for survival, not designed by anyone in particular (not even Steve Jobs) but rather created by the human species from a shared consciousness.

Flight computers don’t have subjective experiences. They don’t know anything you don’t tell them. They don’t learn or, as far as I know, adapt to changing environments or plan ahead. They don’t have a will or personality. They won’t one day say, “Screw it, I don’t feel like flying to London. Time to do loop de loops!”

We don’t have computers that approach the behavioral complexity of an insect. Right now we can make machines to brute force specific problems, like circuit designs or playing chess. They can do these things better than we can but they’re dumb as rocks.

I’m curious whether we’ll make a conscious AI first, or whether we’ll just keep advancing until we make an artificial p-zombie human robot. Or whether we could even tell the difference.

That last sentence is the key problem. I’m pretty sure we’ll “brute force” through the Turing Test, creating non-conscious AI systems that can fool us into thinking they are conscious.

I’m not familiar with the term p-zombie – what’s that? (Or…I could just Google it, and find that this was what I was kind of describing! A system that mimics consciousness, or sapience, or personhood, but does not actually possess those qualities.)

I think most of us here will live to see this wonderful (and dubious!) achievement.

I got the impression that Crane was referring to autonomous automobiles, like those StreetView cars. Such vehicles do not have to do any adaptive learning, only responsive calculation. For that, a single consistent collection of interwoven algorithms is all you need, for every car. All they need is the GPS itinerary to go where they need to go; everything else is merely dealing with whatever crops up.

There are two things to consider here. The first is that the output of a complex process structure (the “go” in GIGO) can be greatly affected by the input (the “gi”), much of which will be in specific .dat/.plist sources (categorized memories). What I mean is that the result of a process can just as easily be altered by the reference data it draws from as by altering the actual coding. I submit that this accounts for a great deal of the diversity in observed human behavior – and some animal behaviors as well.

The second thing is that I think we can be fairly confident that serial instruction processing is only weakly analogous to how our brains work. There is a great deal to learn about the structures and mechanisms that give rise to our behavior. The manner in which we learn things suggests to me no “self-modifying code” in the classical sense, but something more like embedded-function instantiation: not so much modification of existing code as the creation of new code. Data and code fit together in ways we have yet to suss out.
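
A rough, high-level analogue of that “creation of new code” idea is a function factory: learning produces brand-new functions wrapped around learned data, rather than editing any existing ones. This is only an illustration of the software concept; nothing here is claimed about actual neural mechanisms.

[code]
# Illustrative only: "learning" here instantiates new functions around
# learned data instead of modifying any existing code.

def learn_category(examples):
    """Build and return a brand-new classifier function from examples."""
    prototype = sum(examples) / len(examples)
    def is_member(value, tolerance=1.0):
        return abs(value - prototype) <= tolerance
    return is_member          # new "code", created at runtime

# Two learned skills: two newly created functions; the original code is untouched.
looks_small = learn_category([1.0, 1.5, 2.0])
looks_large = learn_category([9.0, 10.0, 11.0])

print(looks_small(1.4))       # True
print(looks_large(1.4))       # False
[/code]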

This is one of the more vexing questions of AI awareness: is there any test we can present that will conclusively establish unsimulated self-awareness? Perhaps, if we ever get close to this “singularity” chimera, there will be better tools for measuring this sort of thing.

There was an interesting article in Scientific American, some months ago, where the authors suggested that “awareness of context” would essentially be definitional of “intelligence.” If you and I are talking about, say, jet bomber aircraft, and the talk turns to bears and bison, then we, having intelligence, understand the context. A traditional computer, even with a dictionary that notes that Bison and Bear are code-names for old Soviet bombers, might not make the leap of understanding, but an intelligent one might.

The SA article suggested that comprehension of puns, among other things, would establish intelligence. Also, only an intelligence could keep up with a sudden shift of context – say, “That’s a really nice set of pumas you’re wearing.” A conventional language parser would have no idea of what just happened.