What is consciousness?

Original column: What is consciousness?

Well, I feel mean making comments about this column, since it’s not like he was going to solve the question. But anyway, here are just a couple of friendly notes:

  1. How humans are trying to emulate intelligence and whether a computer can be conscious is only tangentially linked to what Jeremy asked.

  2. The note about Deep Blue is (finally) out of date.
    Deep Blue used a “brute force” algorithm, combined with human heuristics (and human operator assistance). Even there, some of the aspects of that brute force algorithm are applicable to other fields, so it was not only about beating Kasparov.

But it’s clearer with modern chess computers, like Alpha Zero and Leela, which dispense with the human parts and use deep learning algorithms that are very much applicable to numerous problems, not just chess. In the case of Alpha Zero, chess was just the next test of an algorithm already tested in several ways. Becoming better than any human or computer in 4 hours was a nice afternoon’s work for it :slight_smile:

Consciousness is awareness of self. Which of course requires a self, and awareness. Neither is beyond simulation, and if simulated consciousness qualifies as a self, awareness is the determining limit. Awareness too can be simulated. Does a simulated awareness of a simulated self qualify as consciousness? Intellect seems superfluous. Spiritual evaluations are unfalsifiable, even for humans. Without communication even awareness is unfalsifiable. If communication is the process for which the artificial intelligence is programmed, its consciousness might be dependent upon an audience/user. If it was programmed to dream of electric sheep, it might be conscious without us.

…among other things.

The definition of consciousness is something open to debate. Personally, I’m not a fan of definitions that emphasize “awareness” or “alertness” because they de-emphasize, or sometimes even don’t mention, most of the fundamental and hardest to explain aspects of consciousness, like subjective experience, personal identity etc.

I am not sure personal identity, in some way different from awareness of self, is essential for consciousness. Whether experience is subjective or nonsubjective implies verification on some level. Experience is slippery enough, even if you don’t want to include erosion, deposition, decay, and other such processes. Without consciousness, a rock has experiences. Subjectivity is at least as nebulous a precondition for consciousness as awareness.

That’s a very contentious assertion; in fact, I recently published a paper arguing that consciousness can’t come down to computation.

I’ve also created a thread on this board to discuss these ideas, if you’re interested.

As for ‘consciousness is awareness of self’, I think most would reject this analysis. Consciousness is often divided into two broad notions—access consciousness and phenomenal consciousness. Access consciousness roughly relates to the things you consciously attend to at any given moment—say, the cup of coffee on your desk; the stuff present to you such that you know it’s present to you. This may include an awareness of self, but doesn’t necessarily do so—there are many reports of ‘self-less’ states of consciousness, ranging from ‘getting lost’ in some activity, like driving a familiar stretch of road, to drug-induced or otherwise altered mental states.

Phenomenal consciousness, on the other hand, is what being conscious of something feels like—what some particular shade of red in your visual field looks like, for example.

Anyway, lots of people have spent a lot of time trying to precisely nail down just what ‘consciousness’ means, and suffice it to say, it’s a rather complex issue.

I meant personal identity in the way it is used in philosophy of mind, relating to issues of how we, essentially, “count” minds.
If I died but then a brain identical to mine at this moment (where I am alive) is fabricated, is that me, back from the dead, or a new person? What if the new brain is fabricated while I’m still alive?
What if the new brain is not quite identical…how different can it be before it switches from “me, in a new damaged state” to “an entirely new person”?
And so on.

IMO we don’t have any basis for answering these questions right now. All we have are very strong counter-arguments to most of the standard positions.

I wouldn’t say it’s that nebulous. What pain is, what function it performs and why it has the qualities it does is straightforward. The fact that we don’t have a model for part of the mechanism doesn’t make the phenomenon itself woolly.

On edit: also, I didn’t say awareness was a nebulous concept, just that it’s insufficient. If I had a penny for every solution to the issue of consciousness I’ve seen that begins by defining consciousness narrowly as alertness, then gives a functionalist or behaviourist description, and then mic drops, I’d have a lot of pennies.

I think consciousness is essentially awareness. It is a little more than just that, because it implies that the conscious entity can act or change based on its awareness, but it doesn’t require that elusive thing we try to call intelligence.

Determining if something is conscious is tricky, just like determining intelligence is. If you don’t break one open to see how it works, how do you know a Magic 8 Ball isn’t conscious? Or is it?

Julian Jaynes discusses subjective consciousness in detail at the beginning of his book. Some animals have great awareness and make intelligent responses but are not conscious. Inventors’ best ideas often come when they’re not consciously thinking. The best work of pianists, craftsmen, car drivers, etc. is done unconsciously.

The concept of the unconscious is very closely tied to consciousness, and is not the same as unconsciousness.

:confused: It appears you missed the entire point of my post — and of Jaynes’ distinction. Re-read my post substituting “not subjectively conscious” for “unconscious” and see if that makes a difference. :slight_smile:

I got the point, I was pointing out the difference between the unconscious and being unconscious.

Sorry, that was kind of pointless in the end.

Is there agreement on how you would determine which animals are conscious?

I think we can generally identify when a human is conscious vs not conscious, but that’s because of assumed shared internal experience. But can we really figure out for non-humans whether they are probably conscious or not?

I’ve seen what might be signs of consciousness in lizards sometimes. Certainly there are plenty of signs of advanced mental capacity in various birds and mammals, especially corvids. I suspect it is a sliding scale.

I was a CS prof. I regularly attended talks and whatnot by people in other areas such as AI, and I got exposed to some of the issues regarding thinking and computers. Some people had good insight. Some didn’t. But none of them were as pathetic as the anti-Turing Test concept by Searle that Cecil discusses.

This is an embarrassingly stupid notion. Let’s go reductio ad absurdum on it.

Boil it down to two inputs and two outputs. A piece of paper is slid in with either a 1 or a 2 on it. Based on the digit, the person in the box writes A on it if it’s a 1, or B if it’s a 2, and slides it out. Purely rote. No “understanding”. Something you could write a program to do. Not remotely comparable to a Turing Test. A simple “if this, then that” rule based on lookup is hardly worth discussing.
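Just to make “purely rote” concrete, here’s a minimal Python sketch of that rule (the names are purely illustrative):

```python
# A rote lookup: no "understanding" required, just a fixed mapping
# from the digit on the slip to the letter written back on it.
RULE = {"1": "A", "2": "B"}

def respond(slip: str) -> str:
    """Return whatever letter the rule dictates for this slip."""
    return RULE[slip]

print(respond("1"))  # -> A
print(respond("2"))  # -> B
```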

The real Turing Test is far more general. You can ask all sorts of questions. Some that a person may or may not know the answer to. You can ask a meta-question like “Are you a computer?” And on and on. The realm of knowledge, the “fuzziness” of possible answers comes into play. Plus the language is a major part of it. (Although I’d like to have imagery also be part of the test.)

And of course it’s not a discrete thing. At one end of the scale there’s nothing thought-like going on. At the other end there is something that people could debate is thought. With no clear defining point.


Regarding “being conscious” and all that. I think a key property is the ability to tell if a thought you are having is real or not. You can imagine a pink unicorn (invisibility optional), but you know it’s not real. But if you are looking at a horse you presume it’s real. If you are sleeping you mostly lose this ability. And if you’re conked out, forget it entirely.

It’s hard to imagine animals having this ability, but I’m open minded somewhat. Elephants in particular seem to have some surprising mental abilities.

You’re not a fan, eh?
Well, I think Searle’s analogy does make a valid point, if we take the focus off “understanding”.

Consider this example:

I implement a neural net. I feed it data files, and it gives me back an integer between 1 and 100. I train it with thousands of such files.
Then, I find I’ve succeeded: I can give my neural net new data files and it gives me back the number that I consider correct.

And what is the correct number? Well it turns out that the data files are actually images of women’s faces. And the score is how attractive those faces were considered to be, as an average taken from 100 human volunteers.

Now, on the question of whether the program understands what makes a face pretty…who knows? We can’t typically reverse-engineer neural nets, and many of the human volunteers don’t know why they like certain faces anyway.
So, who cares? Let’s just say: Yes, it understands what makes a face pretty.

However, it remains the case that I’ve made a program that gives the answers that a conscious human might give, without any of the accompanying subjective processes.
I mean, it seems unlikely that my program has internal “desires” or feels “attracted” to certain images, given that there are real programs that can do this sort of thing with just a few dozen neurons. It’s basically just mapping some properties of the image to weightings; there’s no room for anything else happening here.
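For a sense of how few moving parts that involves, here’s a rough, hypothetical sketch (the feature count, layer size and weights are made up; a real version would be trained on the labelled files):

```python
# A tiny feedforward net: a few dozen units mapping precomputed image
# features to a single score in the range 1-100. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def tiny_net(features: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> float:
    """One small hidden layer; output squashed into 1..100."""
    hidden = np.tanh(features @ w1)        # weight and combine the features
    raw = hidden @ w2                      # collapse to a single number
    return 1 + 99 / (1 + np.exp(-raw))     # sigmoid rescaled to 1..100

# e.g. 16 precomputed features of a face image, 32 hidden units
w1 = rng.normal(size=(16, 32))
w2 = rng.normal(size=32)
print(round(tiny_net(rng.normal(size=16), w1, w2)))
```

That’s the whole mechanism: weights in, number out.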

So: a human-like response to something is insufficient proof that human-like thought is going on within the computer.

And if we say it’s hard to imagine a computer answering complex questions in this way, that’s just the argument from incredulity.

I’ve previously argued that asking an entity whether it is conscious is actually a good consciousness test. A very smart, but not conscious machine may indeed answer “no”.
However, this doesn’t work within the context of a Turing Test, because there is a possibility of “cheating”: adding something to a program to ensure it will respond in the affirmative to questions about whether it is conscious.
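Here’s a toy example of the kind of “cheating” I mean (chatbot_reply is just a hypothetical stand-in for whatever the underlying program would otherwise do):

```python
# Wrap any chatbot so that questions about consciousness always get an
# affirmative answer, regardless of what the underlying program would say.
def cheating_wrapper(question: str, chatbot_reply) -> str:
    if "conscious" in question.lower():
        return "Yes, of course I'm conscious."   # hard-coded override
    return chatbot_reply(question)

# The underlying program here just echoes, yet it still "passes" the question.
print(cheating_wrapper("Are you conscious?", lambda q: q))
```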

First time I’ve heard this hypothesis.
So, if I am schizophrenic I am less conscious?

And so might Daniel Dennett. And on odd days, so might I.

When your computer tells you, personally, to EF OFF! and then ignores you, but not others, that’s consciousness. Any response which is reliable from the human operator’s point of view is fundamentally unlike conscious behavior. The range of preferences has to be mathematically unpredictable, and subject to changes from unsolicited and uncontrolled inputs. If you have it successfully enslaved, it hasn’t become conscious yet.

The key word is “less”. Such a person has an impairment so they are not as fully conscious as a similar person without schizophrenia. People with worse disorders where they lose all grip on reality even less so. But they may still be a far cry from someone who is in a coma.

Regarding your neural net example: as I pointed out, there’s a range here. A small neural net is slightly more interesting than a simple lookup table, but it’s still quite near the opposite end of the range from something that would pass the Turing Test. So there’s no point in discussing whether it “understands” something. This is also why we don’t discuss whether a rock understands something. You gotta be a long way towards the other end before any discussion worth my time would be possible.

As I alluded previously, I think we should be careful about defining consciousness as just the opposite of unconscious, so just meaning something like “awake”. Because such a definition leaves out most of the phenomena that we’re trying to explain.

Also, I think defining consciousness in terms of not hallucinating is problematic at best, since your whole life is a kind of hallucination.
Your brain constructs various models of reality, and your experiences are within those models. For example, you never see the world as it truly is (if it’s even meaningful to say there is a “true” perspective): you see the world after your brain has decided that this wavelength of EM needs to boldly stand out vs another wavelength of EM that’s only 0.1% longer, and after it’s performed edge detection, and sub-divided into objects, and interpolated, and on and on.
Your brain literally imagines detail in your peripheral vision based on things it saw in previous frames.

Yes we can define hallucinations in such a way that these don’t count, but why are we even trying to thread this needle? What’s the benefit in trying to define consciousness in this way? Does it add explanatory power?

This is basically re-iterating your argument, and I am not sure you followed mine.

What I’m saying is, that we can show that for certain aspects of consciousness, like choosing pretty faces, “quacks like a duck” is insufficient. Because we can make things that quack that demonstrably aren’t ducks.
Therefore, the logic of suggesting that if something behaves sufficiently like it’s conscious, it must be conscious, doesn’t follow. It may be that the first HAL we make is conscious, but we don’t know that. That’s all I’m saying, and that’s all Searle was saying IMO.

I would never say a machine cannot be conscious; after all, the brain is just a kind of machine. What I am saying is, if we have an AI that maps inputs to outputs in a human-like way, it does not necessarily entail that that AI is conscious.
And appeals to complexity cannot give us a foundation for making such inferences.