Anyone else who doesn't accept that they are conscious?

I don’t really get the fixation on lookup tables. It seems to me that any system that truly imitated a human mind to the point of passing the Turing test would require a set-up very similar (if not identical) to what goes on inside people’s minds right now. And I’m talking about systems with a possibility of actually being built.

So obviously the human mind is more than just a pre-defined mapping between inputs and outputs. You could define a person’s mind as some function, yes, but that has nothing to do with consciousness. Consciousness is an emergent property. I imagine that if you inhibit part of the brain using drugs, you’ll find the subject will report decreased consciousness. (Try getting drunk sometime.) This suggests it is measurable, is increased by greater brain function and emerges from the complexity derived from the interactions of components within the brain.

That’s not at all clear. Even ELIZA (1966) passed the Turing test for “unsophisticated interrogators”. Pretty formidable chatterbots that win bronze in the Loebner Prize have no real financial backing and extremely limited table sizes. Again, I’m not contending that the human mind is a look-up table (although I think it definitely shares a lot in common, and I think most casual conversation is essentially the result of a look-up table). I am only “fixated” on look-up tables because they are so accessible in terms of an easily explainable AI. This thread is really not the forum for discussing anything more complicated.

The subject may report sleepiness, hallucinations, dysphoria, etc, but in going about this study, how would you instruct the patient to define his ‘consciousness’ level? The list of clinical terms goes on and on, but ‘consciousness’ is not one of the self-descriptors. Ultimately, whatever you would be studying would not be what is under discussion here. For example, if the patient is drunk, in a well-controlled study he may describe ataxia, blurred vision, slow response time, poor recall, … but the only way he would use the term ‘consciousness’ would be colloquially, or if the term was defined narrowly.

But a conversation is not a single exchange. And the responses given in one exchange affect which responses are acceptable in future exchanges, which quickly produces a combinatorial explosion.

For example, the simple question “Why do you say that?” can have a tremendous number of responses, depending on what came before. You can’t just think in terms of “if asked Y, say X” because the system has to factor in all the previous Y’s and X’s into its calculation of an appropriate response. In fact, the precise place that chatterbots break down is their lack of memory. They can give a convincing response to a single query, but can’t sustain a conversation because they fail to model topic and context. Ask ALICE why she said something and all she can do is respond with a canned response that deflects the query. You can never get a “meta-answer” that expands upon a shared understanding of what’s being discussed.
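To make the combinatorial-explosion point concrete, here’s a hypothetical back-of-envelope sketch (the function name and the figure of 1,000 utterances are my own illustrative assumptions, not anything from the posts above): a look-up table that keys each reply on the entire conversation so far needs one entry per possible history, which grows exponentially with the number of turns.

```python
def table_entries(utterances_per_turn: int, turns: int) -> int:
    """Entries needed if each reply is keyed on the full sequence
    of prior utterances (utterances_per_turn choices per turn)."""
    return utterances_per_turn ** turns

# Even a tiny repertoire of 1,000 distinct utterances explodes quickly:
for turns in (1, 2, 5, 10):
    print(turns, table_entries(1000, turns))
```

A single exchange needs only 1,000 entries, but ten turns of context already demand 10^30 — which is why a table keyed only on the last utterance can fake one exchange yet can’t sustain a conversation.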

I would argue that conversation is impossible without computation. Both parties need to be running fairly sophisticated simulations of each other’s mental processes in order to interpret incoming communications and anticipate the communicative impact of each utterance. The meaning of a phrase is not a fixed quantity. It varies wildly depending on our understanding of the mental state of the speaker, and any conversation system that fails to take this into account will quickly break down and reveal its artificial nature.

Note that I’m NOT claiming that consciousness is a necessary condition for conversation. Only computation.

I’m confused. Do you believe consciousness exists, or not? Because this is a sentence with two mutually contradictory claims. “I don’t believe consciousness exists - except as a word used to refer to consciousness” is what you just said there.

No. A large-enough look-up table could replicate the statements of everyone who has ever lived, possibly. But in order to replicate consciousness, it’d have to replicate not just external statements, but every internal statement by every member of the mind-gestalt.

And replicating conscious statements is no great trick - it would also have to replicate all possible unconscious statements, in order to replicate consciousness. Because there’s a large unconscious component to consciousness, paradoxical as that sounds.

I agree that there is not enough computronium in the universe for such a look-up table.

It may be meaningless to you. It may not be meaningless to the typewriter.

Since it is the consciousness of the typewriter that is at issue here, it is the meaning of it to the typewriter that counts. You’ll have no difficulty convincing ME that the typewriter is not, in fact, conscious. But that’s not relevant.

You’re still (consistently and persistently) looking for “evidence”, i.e., external reason to consider someone or something conscious, and deciding that from the outside you can dispense with it as a meaningful consideration. But that, also, is not the question that you pose when you ask whether we have reason to believe that WE are, in fact, conscious.

‘Consciousness’ is a word. It can be defined however one pleases. I have to make clear which definitions I am not rejecting, in order to delineate that which I object to. I can’t state what I object to directly, because “subjective experience” is such an incoherent concept that there is no way to describe it. BTW, what you quoted me as saying was not at all what I wrote:

I’m saying that I do believe in ‘consciousness’ as defined: some complicated process that results in humans claiming to be conscious. That is not self-contradictory at all.

I agree that in order to reproduce “what looks like consciousness” some computation and a short-term memory are essential. But 1) I don’t see this as involving nearly as fantastic a theoretical hurdle as you do, and 2) I think it is rather telling that even with something as utterly simple as a look-up table, “untrained” people can be repeatably fooled into thinking they are conversing with a “conscious” person. But in any case I have no idea whether sophisticated computation is required for something that, to me, doesn’t exist in the first place.

I agree with everything you wrote except for the initial “no”.

Once you start talking about “replicating unconscious” statements… you are getting into such incoherent territory that I cannot even in principle falsify anything you are saying. Either way, what you are describing is not an exploding number of word-permutations or anything of that nature – you are simply describing the linear addition of humans’ external and internal exchange of language. Suppose each person has the ability to parse, internally or externally, 10 words per second (this is pretty realistic as far as I understand it). Then suppose each person lives to 75. Then suppose 100 billion people have lived. The math works out to about 10^21 bytes of information. I agree it’s a lot, but it’s not more than can be stored inside the universe.
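The arithmetic in that paragraph checks out as an order-of-magnitude estimate. A quick sketch, using the post’s assumed figures (10 words/sec, a 75-year lifespan, 100 billion people, and roughly one byte per word, which is my own simplification for order-of-magnitude purposes):

```python
# Assumed figures from the post: 10 words/sec, 75 years, 100 billion people.
SECONDS_PER_YEAR = 365.25 * 24 * 3600           # ~3.16e7 seconds

words_per_person = 10 * 75 * SECONDS_PER_YEAR   # ~2.4e10 words in a lifetime
total_words = words_per_person * 100e9          # ~2.4e21 across all humans

print(f"{total_words:.1e} words")
```

At a byte or a few bytes per word, that lands on the order of 10^21–10^22 bytes, consistent with the figure claimed above.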

A typewriter is just a typewriter is just a typewriter.

But you, being deterministic, have no choice about how you answer whether or not you are conscious. So what is the relevance of your stated opinion? However, I am curious how many other organic computers out there, who also have no choice in the matter, do not insist that they are conscious.

Only because, being untrained, they don’t have prior knowledge of the obvious commonplace questions that break chatbots. There’s nothing esoteric about asking someone “What did you mean by that?”

Part of the reason that chatbots are convincing at all is that human beings, when conversing with each other, maintain mental models of the other person’s cognitive processes. So we naively start off trying to fit a chatbot’s responses into our existing schema for an intelligent interlocutor. And for a short run of small talk the illusion can be maintained. But after a while the bizarre non sequiturs become harder and harder to accommodate in our cognitive model of our conversational partner until finally we realize that we’re dealing with a far simpler system than a human brain.

Sophisticated computation is required to mimic the behaviors that people mistakenly call consciousness.

Knock yourself out. Unfortunately, other duties call, so I’m going to have to bow out of the discussion. Happily, I see others are making the same point I was.

You make some good points, and I cede that chatbots are not a great practical example. I do still contend that they are a good theoretical example, even if (see recent post) it requires 10^21 bytes of information to be stored and retrieved.

I’m unsure about the ‘sophisticated’ part (a mostly irrelevant side argument), but otherwise this is a statement I can get behind!

There’s nothing linear about it.

But I, not being deterministic, do have choices about how I answer my questions.

Determinism is one way of looking at things and why they happen. Intentionality is a different way of looking at things and why they happen. It is not true that determinism is somehow the “real answer” and that intentionality is an illusion.

The real illusion is the illusion of separate events. The universe is a single solitary event of 12-15 billion years duration and counting; there has been no other, and that one has no prior cause.

We find illusions to be useful and necessary; let’s call them simplifications. At times it makes sense to conceptualize in terms of prior causality (determinism) and at times it makes sense to analyze in terms of intention and planning (free will). When studying external phenomena and making predictions about the behavior of systems and objects, determinism is most useful; when studying other people and predicting their behavior, both models have their uses; when carrying out our own individual lives, acting with intentionality and making decisions is (oh so paradoxically) mandatory.

I skimmed through the various posts and it strikes me we don’t have a definition of consciousness. If we don’t have a clear definition it is pointless trying to argue whether it exists or not.

If we go with “subjective experience, awareness, the ability to experience ‘feeling’, wakefulness, the understanding of the concept ‘self’”

Then I submit I experience that with no doubt. The key is that I can only ascertain that I experience it.

MrDibble – what I responded to certainly seemed to describe the linear addition of external statements and internal statements. No combinatorics. You replied without explanation, so I don’t really know what your argument is, but I suppose you are somehow suggesting that in order to describe the internal exchange of information that constitutes ‘consciousness’, one must do more than simply record the information that is exchanged.

AHunter3 – I am totally confused about where you are coming from. Do you or do you not believe you are deterministic? If not, then we are at a fundamental impasse, and there is probably no point arguing for or against determinism in this forum. That is another can of worms.

smileybastard – we certainly don’t have a definition we can agree upon. I contend that defining ‘consciousness’ as ‘subjective experience’ is tautological (what is your definition of subjective experience?).

Well, being that it is your post and you are making the claim, I suggest you define it in a clear way. We may disagree that in general that means consciousness, but we don’t need to for the purposes of this debate.

What I suggested was simply from the dictionary as a starting point, but again you are making the claim and it seems to me you have something in mind.

For me it is being aware of myself and that all my experiences relate to this self. It is the opposite of the lack of this that I sometimes “experience” when I am asleep or similar.

As I said in the OP:

I don’t believe any definition which goes much further than the above can be logically consistent and self-contained (and non-trivial). You can’t ask me to define a concept that I am arguing is fundamentally incoherent and which can have no meaningful definition.

See my responses in posts 102, 216, 307

If that is your definition then of course I am conscious. I think your heading is misleading.

I realize we are simply a mass of atoms and energy. Simply because I don’t understand how this lets me be aware of myself and my experience does not mean I dismiss it. I am very certain I am aware.