Is there anyone else who doesn't accept that they are conscious?

The burden of proof is only on my shoulders if you appeal to popularity. And your watermelon analogy is ironically more apropos to my own argument than to yours: the use of the word consciousness is like calling the inside of a watermelon blue. There is certainly no objective sense in which that statement is coherent; the color of an object is generally defined by the wavelengths of light it predominantly reflects or does not absorb, so asserting it has some “color” when not reflecting or absorbing light is flatly at odds with the definition. Full stop. That is, of course, unless you assert some incoherent and meaningless subjectively-defined “color” into the watermelon…

I gave a definition? Really? I thought I made a rhetorical declarative statement.

It is sooo obvious.

As stories (“a collection of language utterances with a unified theme and sequential structure” for those who want to take pedantry too far) where the subject, the object, the audience and the narrator are the same, the person within whose brain all the utterances take place.

Your quote:

[emphasis not mine]

Looks like a definition to me. But OK, if it was a rhetorical declarative statement, I’ll assume you were simply choosing to arrogantly state an opinion without attempting to make any substantive argument. Good job, and thanks.

To wit:

BTW kids, this is a good example of how to really engage someone in a fruitful debate on a topic each of them has thought about deeply.

This is actually a good definition. One I would agree with; I fail to see how this definition is in any way in disagreement with my position.

Let he who is without snark cast the first stone…

Because in your computer set-up, the individual doing the telling isn’t the computer itself, but whoever does the original setup of the algorithm.

This has come up a couple of times. I would like to point out that appeal to popularity is only a fallacy in deductive logic. As the Wiki article cited acknowledges, it isn’t a fallacy in inductive logic. Thus, pretty much everyone but the OP is comfortable concluding consciousness exists because, first, we experience it, and second, everyone else reports experiencing the same thing. It’s the same way we conclude that dreams exist.

This goes without saying. Inductive logic, while it does have the word ‘logic’ in it, is not particularly credible in the context of this argument. A valid application of inductive logic would be: 9 out of 10 people say they experience dreams, therefore it is likely that the next person we poll will say they experience dreams. Inductive logic would not be applicable to the question of whether dreams really exist (or whether humans have some cognitive trait that pushes them to say that they have dreams). The Wikipedia article makes this clear, I think, if you read the whole thing.

If you start with the premise that you experience consciousness, then inductive logic is applicable to the question of whether others experience the same thing. However, I reject your premise. Inductive logic is applicable to the question of whether humans say they experience consciousness in this context.

By my own definition of consciousness, I was not referring to my counter-example (which was in response to your “rhetorical declarative statement”). I’m going to keep referring people to my post #102, which, if I remember correctly, no one has responded to. The definition you provided of consciousness is in agreement with my position in that post, and in agreement with my position in this thread. But if you disagree, please respond to the post.

I disagree, because in post #102 you do not deny that people, in saying “I am experiencing consciousness,” are referring to their own cognitive processes. The question is, what more do you think is required for consciousness? To me, this is consciousness, and on a level no current AI (despite your claims) can replicate. If you disagree, show me the AI that does this. Not “there could be”, show me the existing system, please.

Exactly… and that’s why nothing you said is in disagreement with my position. I am completely at a loss for what you are disagreeing with here. For the record: of course I agree people are referring to their own cognitive process. I don’t see how that is relevant to my assertion that the term “subjective experience” has no meaning or purpose, beyond being (as I have said many times in a myriad of ways), a synonym for, to use your words, “referring to their own cognitive process.”

This question is irrelevant; the term ‘consciousness’ has no non-trivial meaning to me (as I made absolutely clear in the post I referred you to). I am making no claims whatsoever with regard to mechanistically replicating ‘consciousness’, except as a counterexample to those who supply a misleadingly simplistic definition of it (as was the case with your ‘rhetorical declarative statement’). As I have said, I don’t believe ‘consciousness’ exists (beyond as a word used to refer to the process, observable or not, by which we supply words to our hands and mouths), so clearly it makes no sense to ask me to show you an AI with ‘consciousness.’ If, on the other hand, you do define consciousness as something simplistic like “referring to their own cognitive process”, then surely you don’t disagree that an AI can refer to its own cognitive process… in an informational and algorithmic sense such an operation is trivial.

I should add that I am in no way intending to say that the human mind is ‘trivial’ or that an AI to replicate it is an easy task. The human mind is fantastically complex, and getting a program to pass the Turing test is hard (although not that hard – it’s getting awfully close). As I have alluded to before – and this is just the simplest of examples, one that does not alone do justice to my point – a large-enough look-up table alone could replicate the consciousness of everybody who has ever lived, including those who overflow with statements like “I am conscious”, “who are you to deny subjective experience”, etc., etc… including each and every point each of you has made in this thread. That isn’t persuasive to you, perhaps, because you “are obviously experiencing consciousness right now”… and yet if you were a look-up table you would say the same thing (by chance, by determinism, or because such a common statement were relevant to the evolutionary fitness of the look-up table).

Sorry, I have to continue my last post because my ‘edit’ time is up. Above I said ‘replicate the consciousness of everybody’. It should be understood implicitly that I am referring to an outside observer here. But in addition to a look-up-table-based AI being able to make external declarations like “I am obviously experiencing consciousness right now”, it can do so inwardly as well: if you provide an ability to parse the symbolic language of the outside world (already done to varying degrees, of course), the same parser can be used to read the output of the look-up table that would otherwise be sent to the outside world. In other words, it is not difficult to imagine an AI ‘telling itself’ it is conscious in various ways, without there being any ‘subjective awareness’ beyond the ability to parse a look-up table (the ‘parsing’ could be simple or complex, perhaps involving some complex associative memory).
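To make the scenario above concrete, here is a toy sketch in Python (every table entry, function name, and utterance is my own invention, chosen purely for illustration): a look-up table supplies the utterances, and the very same parser that handles language from the outside world is pointed back at the table’s own output, so the system ‘tells itself’ what it ‘said’.

```python
# Toy illustration of the look-up-table argument: external behaviour
# is pure table indexing, and "inner speech" is just the same parser
# re-reading the table's output. All entries are hypothetical.

# The "mind": a look-up table mapping stimuli to utterances.
LOOKUP_TABLE = {
    "are you conscious?": "I am obviously experiencing consciousness right now.",
    "describe your state": "I am aware of my own thoughts.",
}

def parse(utterance):
    """The parser used for language from the outside world.
    It has no 'awareness'; it just normalizes and tokenizes."""
    return utterance.lower().strip().split()

def respond(stimulus):
    """External behaviour: index into the table, nothing more."""
    return LOOKUP_TABLE.get(stimulus, "...")

def inner_monologue(stimulus):
    """Feed the table's own output back through the same parser:
    the system 'tells itself' the utterance, inwardly."""
    return parse(respond(stimulus))

print(respond("are you conscious?"))
print(inner_monologue("are you conscious?"))
```

The point of the sketch is that nothing distinguishes the ‘inward’ reading from the outward one: both are the same parsing operation applied to strings pulled from a table.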

No. It is not possible to pack such a look-up table within the limited space of the human skull, and the speed of light would prevent accessing such a table fast enough to carry on a conversation, even at message board speeds. Whatever humans are doing when they believe they are conscious, it must be algorithmic in nature, not indexical.

In other words, Borges’s Library is not a good model for how novels are written.

Man, people, you’ve got to give me some credit. I wasn’t at all proposing that the human brain is a look-up table; I certainly didn’t intimate such a thing in the quote you responded to.

And I’m saying that not only is the human brain not a look-up table, it’s physically impossible to model the human brain with a look-up table. The number of entries required for such a table is larger than the number of particles in the universe. It’s not a useful metaphor.

This is simply wrong. First of all, what I said was that the external output of every human who has ever lived could be simulated by a look-up table. That is vastly different from saying “every permutation of words up to length X in the English language.” Secondly, a model similar to chatterbots like Alice suggests that the look-up table could be stupefyingly small: as few as 10k entries reproduces 99% of English conversation. Most ‘conscious’ humans are (perhaps) surprisingly predictable. And no, I am not making a metaphor at all. I am simply giving an example (simplistic, yes, as I clearly and openly admitted) of a theoretical scenario in which machines that we could all agree are not “conscious” (the reason I chose this particular example – it’s very useful for this purpose) could nonetheless call themselves “conscious.”

First, the post to which you were replying (#280) was making an inductive argument, so it was fair to point out that your claim of fallacy wasn’t well founded. More to the point, pretty much everyone who has responded to the OP has relied on an inductive argument similar to the one I made in Post #285, which presumably is why you were invoking the fallacy. Again, it doesn’t apply.

Second, one can make objections to an inductive argument, of course. And, yes, I “get” (I think everybody gets) that you really, really don’t like the fact that consciousness is subjective. But, this doesn’t mean induction applies only to the objective. We use induction (science) all the time to infer the existence of things we can’t observe directly. FWIW, if you want to understand a technical subject like logic, you really should try to do better than Wiki. The Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy, for example, are much better resources. I say that, BTW, as someone who is by no means a trained logician. Just another amateur passing along a tip.

Third, I suggest you step back and consider whether this argument isn’t better framed in a positive way. Rather than trying to deny the existence of consciousness – an assertion everyone who has responded finds incredible, since we experience it – cut to the chase and make your claim: that consciousness is nothing more than a part of the processing of an organic computer. You can even argue that consciousness isn’t “special” (especially, not evidence for dualism), that it’s deterministic, and whatever else you like. But you would eliminate a distraction which has derailed the main point it seems you’re trying to make. Framed this way, you’d probably get a lot of takers.

Close enough… you experience yourself in relationship to the tree. You don’t experience the tree other than as you interact with it. You experience what that particular treeness is to you, at that time.

Say it to whom?

I’m directly and personally me. I’m conscious of my consciousness. Yours is merely intuited from external clues and the application of logic.

Which me telling what self?

Are you perhaps implying that if I am caused (that I am deterministic) and that I can be described in some fashion as the sum total of my functional parts (a machine of some sort) that it means less for me to be conscious? That would seem to be a 2nd tier argument. I do not, in fact, believe that determinism is the only accurate way of looking at why events occur, but I don’t believe determinism is wrong, either.

Out of genuine curiosity, how is post #280 making an inductive argument?

Again, I don’t agree. But the point I tried to make, which you didn’t respond to, was that when you use inductive logic in your phrase “we experience it, and second, everyone else reports experiencing the same thing”, I assign a prior probability of zero to your assertion that “you experience it”. As you point out below, this is totally derailing the interesting argument, so we can choose not to dwell on it.

And, incidentally, I use it all the time as well. Bayes’ Theorem is integral to my field of work, and yes, its use can be extremely controversial among experts. And there are subtle but important distinctions to be made: the difference between “the probability of A” and “the probability of the reporting of A”, for instance, is extremely important when it comes to unfalsifiable hypotheses like “consciousness” (as some would define it). Hell, it’s not like we are measuring the mass of a particle here. What people are arguing for has no apparent properties: they can’t describe it, they can only repeat that it is something that exists. In what scientific field would anybody be able to get away with that?
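To make that distinction concrete, here is a toy Bayes’ Theorem sketch (the function name and every number are invented purely for illustration): a report of A counts as evidence for A only to the extent that the report would be unlikely if A were false.

```python
# Toy illustration of "the probability of A" versus "the probability
# of the reporting of A". All numbers are hypothetical.

def posterior(prior_a, p_report_given_a, p_report_given_not_a):
    """P(A | report), computed by Bayes' theorem."""
    p_report = (p_report_given_a * prior_a
                + p_report_given_not_a * (1 - prior_a))
    return p_report_given_a * prior_a / p_report

# If a system lacking A would report A just as readily (e.g. a
# look-up table that outputs "I am conscious"), the report carries
# no information: the posterior equals the prior.
print(posterior(0.5, 1.0, 1.0))  # → 0.5

# The report only becomes evidence if it would be unlikely without A.
# For an unfalsifiable A, that likelihood is unconstrained by any
# possible observation.
print(posterior(0.5, 1.0, 0.1))
```

The design point is that the whole inference hinges on `p_report_given_not_a`, which is exactly the quantity an unfalsifiable hypothesis never pins down.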

I agree that the link you’ve provided is infinitely better than the wiki. As a rule, I always link to Wikipedia when possible, for consistency and to avoid the appearance of selective bias. Perhaps this is not always the best idea.

OK, but the argument I’m making is stronger than just “consciousness is nothing more than a part of the processing of an organic computer”. But sure, if you think it will help (although I sure as hell thought I made it clear in my OP), let me state:

Consciousness is nothing more than a part of the processing of an organic computer.

But I go further. In the above phrase, I find the term ‘consciousness’ superfluous; I would rather dispense with the term completely. Color, which some have compared it to, has describable properties: colors can be distinguished, and they describe the wavelength of a range of electromagnetic radiation. Consciousness has no such properties. Pain, or taste, or fear – these are other subjective things one might compare ‘consciousness’ to. But again, these have properties which are measurable, whose gradations can be studied and induced. Again, not so of ‘consciousness’. There is no way of knowing whether object A does or does not have it. And if it states it has it, that statement is not falsifiable, even if it is simply a look-up table, or a “hello world” program. Still not falsifiable. The term has no use. But furthermore, I don’t believe it has any meaning; I don’t see any reason to believe it refers to anything real. If nothing else, apply Occam’s razor in the context of the fact that an organic computer isomorphic to a look-up table could exist that is indistinguishable from a ‘conscious’ human.

To yourself, of course. Where ‘you’ are a parser. You parse language coming in. You parse language coming out. You parse language that is stored or generated inside your own head.

Sort of, except that I am saying that it implies that ‘conscious’ has no non-trivial meaning. One way of putting this is as follows. Suppose you are right: ‘consciousness’ exists, and it is experienced by a typewriter. Now, the typewriter has no brain, obviously. It is simply the receptacle of words being typed into it. However, as the words are typed into it, it “thinks them”. It “experiences them” as “thought”. Now, suppose the typewriter is the receptacle of the phrase “I am conscious.” Since the typewriter had no control over what was typed into it, the phrase is not self-consistent; it is meaningless, and irrelevant as evidence for anything.