Anyone else who doesn't accept that they are conscious?

So the guy has an abnormal memory. (But now we’re beginning to approach the objection I think does work, which I outlined above. But we’re not there yet. Real people can have abnormal memories.)

As you know, Searle was writing in the 1980s, and he was originally addressing a particular implementation of AI, one that was definitely algorithmic in its approach.

Searle does say that insisting on a hardware orientation and particular kinds of sensory connections to the world and so on is to give up something of the account he’s writing against. In that sense, you could say that the move to these kinds of approaches was implicitly an acknowledgment that Searle was right. (Except I’m not saying he was right…)

An emulationist approach is definitely vulnerable to Searle’s objections. It’s an attempt to create a mind simply by giving a computer the appropriate program. Just give the Chinese Room that program, whatever it may be, and Searle’s thought experiment goes through.

As for hardware-oriented approaches, I don’t know enough about them to say anything other than this: If they assume computation is sufficient for understanding, Searle’s objection hits them dead on. The implementation details you want to highlight are irrelevant: computation is computation. If the implementation details do matter, then it’s not just a matter of computation, and Searle is not objecting to such an approach with the CR experiment.

Though I introduced the phrase in this conversation, the target of his argument isn’t “anything and everything called ‘Strong AI’” but rather “the view that computation is sufficient for understanding.” That’s not a straw man; it’s a claim that people explicitly made at the time, and anyone today who says something like “ultimately, the mind is a digital computer” is making the very claim Searle was aiming at, no matter what hardware they think is required to implement the particular digital computer they have in mind.

Would I be correct in restating your objection as: “Programs are not just syntactic”?

That is, that any sufficiently expressive language necessarily exhibits semantic structures (even if not made explicit)?

I don’t see how that’s an objection. Is such a program one anyone would seriously propose as the one that should be implemented in some computer or other in order to make that computer understand Chinese?

Of course it is. If it’s implementable in a computer program, it’s implementable in the instruction set.

Searle’s argument in the CR experiment doesn’t really rely on any notion of original or derived intentionality or anything like that. Those concepts are not mentioned in the argument.

The person in the room implements the correct program but does not understand Chinese. Therefore, the program is not enough to make a system implementing it understand Chinese. Simple as that. Nothing in there about syntax or semantics.

I’d say so, yes.

Searle doesn’t specify that the CR instruction set would be the same as one run on another system.

Not all semantic relationships are implementable in a computer program. Some are hardware-dependent. But I agree with your main thrust - yes, Strong AI as Searle is arguing against isn’t something I’d care to defend.

It’s the very idea of the CR experiment.

I don’t think his article is online anywhere but I’ll look it up as soon as I can.

But any hardware making up a digital computer is itself emulatable in software.
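For what it’s worth, that claim can be made concrete with a toy sketch. The three-instruction “hardware” below is invented purely for illustration (it’s not any real machine’s instruction set); the point is only that its behavior is reproduced entirely by an ordinary program:

```python
# Toy emulation sketch: a hypothetical, made-up 3-instruction machine,
# with its "hardware" state (an accumulator register) held in a variable.
def run(program):
    acc = 0  # emulated accumulator register
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
    return acc

print(run([("LOAD", 2), ("ADD", 3), ("MUL", 4)]))  # prints 20
```

Whatever the physical machine does with voltages and gates, the software version computes the same input-to-output mapping, which is all “emulatable” means here.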

Well then. Okay. :wink:

How does one go about “thinking” about whether or not they are conscious if they aren’t conscious in the first place? Specifically, what does it mean for a non-conscious entity to “think?” The notion of an unconscious entity “thinking” is nonsensical unless we reformulate the definition of “thinking,” since the word contains an implicit reference to the notion of experience.

Specifically, what does it mean for experience to be an “illusion?” Does one experience the “illusion” that they are having an experience?

If not, specifically how do you formulate a definition of “illusion” that is independent of the assumption of experience?

What does it mean for you to “believe” that you are not conscious, if you are not conscious in the first place?

The conventional definitions of “belief” and “illusion” are formulated under the assumption of experience. Human language itself draws associations between symbols and the phenomena that we collectively term “experience.”

Of course one cannot build a non-circular definition of “experience” from human language. Embedded in human language is the implicit assumption of human experience.

Any assertion of “rejecting” or “disbelieving” in experience is not self-consistent, since these words are used to describe experiences. You may say that I just don’t “get it,” but “getting it” is also defined as a human experience.

If I assume that the notion of “experience” is meaningless, I am forced to assume that the entirety of human language is meaningless, since I understand human language only in terms of its associations with the phenomena I collectively denote as “experiences.”

You’re going to have a difficult time expressing the notion of experience not existing with words that are intended to draw associations with phenomena that are grouped under “experience.”

Not necessarily. One can define the word that way if one wishes to make an argument out of nothing, but the notion of experience is not necessarily implicit (Wikipedia, for example, states: “Thought and thinking are the processes by which these concepts are perceived and manipulated”). The word could apply to any machine that processes information.

Well, let’s take a random example from the list of cognitive biases on Wikipedia (many of which are commonly referred to as ‘illusions’). How about the clustering illusion. This phenomenon could be applied to rather simple AIs, even those currently in existence. It could also apply to the perceptive characteristics of various animals, none of which are generally considered ‘conscious’. So while I agree there are some important semantics to be discussed here, I think you are pretty off-base with your premises.
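To make that concrete, here’s a minimal sketch (my own toy example, not any existing AI) of a program exhibiting something like the clustering illusion: it flags streaks in pure coin-flip noise as if they were meaningful clusters:

```python
import random

# A trivial "pattern detector": treats any run of 4+ identical coin
# flips as a meaningful cluster, even though the data is pure noise.
def find_clusters(flips, run_length=4):
    clusters = []
    start = 0
    for i in range(1, len(flips) + 1):
        if i == len(flips) or flips[i] != flips[start]:
            if i - start >= run_length:
                # record (position, length, symbol) of the "cluster"
                clusters.append((start, i - start, flips[start]))
            start = i
    return clusters

random.seed(0)
flips = [random.choice("HT") for _ in range(200)]
print(find_clusters(flips))  # "clusters" found in random data
```

Any fair-coin sequence of a couple hundred flips will almost always contain such streaks, so the detector “finds” structure in data that has none, which is the bias in miniature, and nobody would call this program conscious.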

ETA: also, I know this thread has gotten long, but others have posed the same questions you have, and I have answered them in some detail.

Ah, OK. I’d note again that that’s not equivalent to “Programs are not syntactic”…and that I somehow overlooked where you said exactly that:

:smack: If I were to take Searle’s position, I’d argue that. But I won’t, so I wouldn’t.

You would accept the appearance of a reply to this post from “me” as sufficient proof of the reality of the universe? But you aren’t (necessarily) conscious! Even if you were, that seems like an awfully flimsy reason to differentiate a hypothetical AHunter3 in a totally imaginary universe from a genuine AHunter3 in a real one.
I’m sorry but it seems to me you can’t have it both ways. You say that my direct experience of my own consciousness is insufficient reason to believe I’m conscious, and I believe you called my replies facile; but now you wish to assert that genuine existence of the universe, as opposed to the entire set of events & experiences merely being a hypothetical, is a sufficiently silly question to dismiss with “If you don’t exist you won’t reply to this post”.

I observe that AHunter3 replied to referenced post. I, a computing machine with no understanding of the word ‘conscious’, am programmed to symbolically record in a purely informational sense the fact that an AHunter3 exists as an alias for something with which I can exchange semantic symbols.

Did making the above statement about your existence require [computing machine] to be conscious?

I am seeing two recurrent strands in this thread that I want to pick at:

a) The elision between “I do not need to take your consciousness into account in order to explain you from the outside”, on the one hand, and “I do not accept that I myself am conscious”, on the other.

b) The elision between “I do not need to include your subjective experience as part of ‘That which is objectively real’”, on the one hand, and “I do not need to include MY OWN subjective experience as part of the real”, on the other.

In both cases, your consciousness, which is your own subjective experience, is the only thing you do experience. You don’t get to experience something other than that as real; it’s all you’ve got.

In the first strand, let me say right off the bat that I do not know that YOU are conscious. I think that you are, I proceed under the assumption that you are, but my conclusion that you are indeed conscious (and that to be conscious, to you, is akin to what being conscious is for me) is a conclusion at the end of a complicated process of intuition and pattern-recognition and confirmed predictions and so on, all of which is ultimately an unprovable hypothetical in my own head. I cannot know that your consciousness is real and not illusory. You do not need to supply further argument about artificial intelligence computer programs and mechanical (presumably nonconscious) processes to convince me of that: I stipulate it. I can definitely be fooled about whether or not YOU are conscious.

That’s not relevant.

The point is, MY consciousness can’t be an illusion to ME. (And yours cannot be to you, insofar as mechanical processes that mimic consciousness do not, by definition, have experiences, and an illusion is an experience.) I will now make another stipulation: the entire content OF my consciousness, the stuff that I think I am correctly and accurately conscious OF, could be a vast cloud of utter nonsense. Everything I think I am conscious of could fail to exist, or fail to exist as I (incorrectly or inadequately) understand it.

That’s also not relevant. I never claimed to have any other thought in my head that I could be truly sure of. I might be a supernova delusionally dreaming that I am a carbon-based individual organism on a small nickel-iron planet. But if I am, I am a conscious (albeit delusional) supernova. I might be 72,149 lines of code being processed in some ephemeral supercomputer-thingie, but if so I am a conscious code set, aware as I am being executed (albeit not aware of anything that is real and aware of a lot of things that aren’t real).

Because I am conscious, something real is. Something real does exist. The complete and utter entirety of it cannot be unreal. I think therefore I am, as Descartes said. As I think he said. It’s the one and only solid beginning point in here. I am. I may not know what I am, but I am.
In the second strand, subjectivity and objectivity are notions that come out of Cartesian thinking. Yeah, same Descartes. I agree with him about the “I am” and depart from his line of thought not far after that. Subjectivity is not what we have been taught that it is. Neither is objectivity. What there actually is is interactivity. All experience is the experience of a consciousness in interaction with something. There is emotion and sensation. All else is pattern recognition, the slow, gradual, and eventually complex buildup of a model of “real world” that contains, within it, a model of the self that is having all those experiences. That model is where you have the construct “objectivity” and the one called “subjectivity”, but they are facets of the model. We don’t get to experience either one directly; what we get directly is the interaction.

And that’s the only thing we get directly. We do need to include our own experiences of those interactions as part of the real. It’s what we’ve got to work with. There’s no other data stream available.

What you’re saying here implies that I don’t experience the tree, rather, I experience the experience of the tree.

Do you mean to imply that?

I’d prefer if you responded to my last post. But in the meantime I will respond to a part of your second post (but in spirit to the entire post):

Do you agree that a computer can “say” it is “conscious” without having “subjective experience”? It sounds like you do, because you admit that though you think I am conscious, you can’t be sure. I don’t see how you can’t apply that same concept to your own “subjective experience”: If a computer can “say” it is conscious outwardly, it surely can “say” so inwardly as well. If you can’t trust that I am conscious, how do you trust that you are? Because “you are subjectively experiencing it”? That’s what you tell yourself, but how do you know it’s true if “you” don’t exist beyond being a deterministic machine that tells itself what it is “experiencing”?

Telling yourself what you’re experiencing *is* consciousness. FFS.

I’m fine with that definition, as long as you therefore admit that some damned simple AI that is achievable right now is ‘conscious.’ FFS!

Show me such an AI. Not one that is merely algorithmically repeating its states, but one that has a spontaneous inner narrative. I’ll wait…

It sounds like you left something out of the definition you provided previously. A definition, BTW, which you stated with a FFS as though it were sooo obvious. “Inner narrative”, huh? How do you define that, specifically?

Well, it’s an interesting idea, but I feel you still have some burden of proof resting on your shoulders. You’re basically making a statement that equates to me saying that “a watermelon is blue on the inside until you cut it open.” Clearly I can’t prove this right, and you can’t prove this wrong. Thus I think all this idea really boils down to is an interesting play on diction.