Anyone else who doesn't accept that they are conscious?

Like I said - all that shows you is the linear nature of task completion. But even there, there is parallelism. Just because there are (some well-known, well-documented) bottlenecks in cognition doesn’t suddenly render the whole process linear any more than, say, urine generation is a linear process just because I have one urethra.

So clearly you disagree with that as a definition of thought (not that it was - rhetoric, again) - which implies you have a different definition in mind. Care to share?

You first.

Your recent posts seem to demonstrate the “moving target” fallacy of argument. I have simply not said that the “whole process” is linear. Rather, I have responded to you saying that the exchange of “internal statements” is non-linear, and I tried to clear up any semantic confusion: “I thought you were describing a high-level process involving high-level language exchange, but it sounds like you are making a more or less trivial statement that the human brain is a non-linear machine.” Which you never responded to.

Depends on the context. Someone can count to 10 while snapping their fingers. Does the counting to 10 count as "thinking"? I don't know, nor do I really care – it's just a word. When you refer to "thoughts" as "internal statements" I am assuming we are discussing human thought in the sense of an internal narrative that can be probed for linguistic retrieval by experimenters. If your understanding of "internal statements" is different, then say so. We can put this irrelevant semantics issue to rest.

Really? I covered "consciousness" quite extensively in this thread. As I've repeatedly said, the onus is not on me to define a word I'm arguing has no coherent definition (again, for pedants: trivial/clinical definitions aside). Which is why I ask you to define for me what you mean when you use the word and compare it to "thought". See above for "thought".

I wasn’t just making a statement about the brain in general, but also about the structure of internal narrative. Not “high-level processes”, whatever those are, but about how the brain talks to itself.

Yes

“thought” is as much “just a word” as “consciousness”, and you seem to care about that a great deal.

Internal statements =/= internal narrative. The narrative is an abstraction layer of all collected statements that rise to consciousness by succeeding in being written in memory (medium to long-term) and otherwise bubbling to the surface in external “statements” (by which I mean any semiotic act).

And this is not a semantics issue. It’s operational.

There has to be a definition/set that you think is incoherent. What are they? Ditto for thought - or are you now arguing that there's no coherent definition of that, either?

Consciousness is that abstracted linear narrative that the mind-gestalt writes, out of a wider range of inner statements, into medium- to long-term memory, forming a seemingly continuous yet actually rewriteable, malleable and fragmented spontaneous self-generated inner narrative, which serves as a model of identity for the mind to base simpler models of sentient percepts on. Basically, Dennett’s multiple drafts model.

Thoughts are any individual inner statements that the mind makes about other mental percepts, whether those be the objects of sense-perception (things seen, touched, etc), self-perception (emotional reactions), statements by other members of the gestalt (which can include any of the other modes), or results of internal modelling of either real-world objects or their modelled mental states (again, this can come in different modes). Thoughts serve as the statement stream from which the mind constructs the consciousness narrative, much as a blackmailer uses newspaper sentences.

I think I have made absolutely clear in this thread that I don't care about the "word" consciousness at all (it has, for instance, a number of trivial/colloquial/clinical meanings which I have no interest in). I'm interested in whatever concept the word, as used in the philosophical context, is trying to convey. I have no idea what it is trying to convey. People define it tautologically ("consciousness" is "subjective experience", "subjective experience is what I'm experiencing right now"). Which does not increase my understanding.

See above, or read the thread. People have repeatedly responded with their own definitions, which are always either trivial ("consciousness is that which occurs in our brains") or tautological (see above). The trivial definitions are uninteresting to me, because they entirely miss the point. The tautological definitions are meaningless or incoherent.

Thank you for taking the time to define consciousness for me. I think it is a good one. But my response is that it fits in the “trivial” category (above). Essentially you are saying that consciousness is “a complicated process inside the brain.” Does your definition admit “qualia” as a component? Does your model/definition contradict the zombie hypothesis?

This is a vague definition, and seems all-inclusive. How do you define “individual inner statements”? Any semiotic act?

How are they incoherent? What particular foundation statements are you using, that they are incoherent *in relation to* (as all incoherence **must** be)?

I'm saying it's the result of a very specific process, the operation of which I outlined. How is that trivial? We know how it works, as Dennett explains in Consciousness Explained (have you read it?). Do you have a specific problem with the Multiple Drafts model?

In my view, it is impossible to separate consciousness from memory. In a way, if it isn't remembered, it isn't conscious. Memory creates the illusion of continuity I mentioned.

Secondly, consciousness is always about *relationships* between objects, not the fact of the objects themselves - it has semantic content. One of those objects is always the self. Consciousness is a web of statements about relationships, with the self at the centre. In a way, that's all the self is - the object that all (essentially triadic) internal statements have as a common referent. It is related to the self-as-physical-object, but is not the same.

But I fail to see how any of that's trivial.

No. I don't think qualia exist. I'm with Minsky in thinking qualia are a misguided attempt to reify a more complex internal process as one thing, and with Dennett in thinking the whole idea rather incoherent - one that breaks down if you refuse to fall prey to "hasty simplification" by misleading intuition pumps, like Mary the Colour Scientist or the Chinese Room.

Not contradict so much as do away with the need for. If there are no qualia, then the zombie argument is meaningless.

Basically, in this, I’m a Dennettist, straight up.

No. I don't see how it is "vague", and it certainly isn't all-inclusive. It can be defined by what it excludes - much brain activity is not of the "statement" form - the stuff that keeps your lungs pumping, that jerks your knee at the hammer, that floods you with epinephrine, that fills in the blind spot in the retinal image *before* it's a percept. None of these are "thought".

Anything that breaks down into one subject-object-verb/signifier-signified-significance relationship. In other words, any statement that has “intentionality” or “aboutness”, and syntax. Thoughts have syntax & referents, knee-jerks do not.
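If it helps, that triadic structure can be sketched as a toy data type. This is purely my illustration (the field names and example are made up, not from any model cited in the thread):

```python
from dataclasses import dataclass

# Toy illustration: a "thought" as a triadic statement with a subject,
# a relation (the "aboutness"), and a referent. A knee-jerk has no such
# structure; a thought does.
@dataclass(frozen=True)
class Statement:
    subject: str    # signifier: the one making the statement
    relation: str   # significance: the verb / relation
    obj: str        # signified: the referent

thought = Statement(subject="self", relation="sees", obj="red apple")
print(thought)  # Statement(subject='self', relation='sees', obj='red apple')
```

The point of the sketch is just that a thought, unlike a reflex, decomposes into parts with syntax and referents.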

Any act that conveys meaning to another sentient being by signs, whether that be verbal expression, a directed gesture, an unconscious facial expression, which perfume is worn, how I choose to style my hair, etc, etc.

In your first post in this thread you seem to be in agreement on the incoherence of some positions. So I'm confused about what on earth you are really arguing about here.

Rather than argue with you, I'm just going to say that it is clear you are not understanding what I'm saying. There is an enormous difference between speculating a complex mechanism that results in an entity saying it is conscious, versus whether that description has any meaning beyond what is trivial – the existence of a complex process that produces complex outputs. I probably cannot argue with you on the same level; it is clear you are better read than me on this topic. I'm not sure I have read Consciousness Explained. I'm fairly certain I did a decade ago, but not recently. Does Dennett use the word 'consciousness' to mean anything beyond the existence of a very complex process that can result in complex output, including words like "I am conscious"? If so, what is it, if not 'qualia' or something equivalent to the tautologically-defined 'subjective experience'?

I agree here, to the extent that I understand your definition of consciousness.

I disagree. As per the discussion about lookup-tables, etc, it is certainly conceivable that you could produce zombies that talk and act exactly like a conscious human, and yet “obviously” aren’t “conscious”.

But earlier you argued that "internal statements" occur in parallel in the human brain. I disagree according to your above definition, as per attempts to experimentally test the ability to hold multiple "subject-object-verb/signifier-signified-significance" relationships in parallel. But this is totally off the interesting topic (above).

And if you like, I'll tell you what founding statements of mine they are incoherent *in relation to*.

That you aren’t saying what the idea is incoherent in relation to. Incoherent =/= trivial. As to “trivial”, see below.

You have a vastly different notion of “trivial” from me. The important parts of that idea are things you’ve glossed over - the entirely in-brain nature of it, the complete lack of need for any dualism, these are not “trivial” notions in the context of philosophy of mind.

Nor is that even the whole of it. The theory goes beyond just “complex process with complex outputs” - it outlines exactly what kind of complex process it is. It provides methods of falsification for itself. And it addresses problems with competing theories of mind. Just eliminating qualia and refuting the Chinese Room are big enough to make “trivial” a complete misnomer.

“Narrative centre of gravity” is probably the most definitive term he uses. It maps quite nicely to my “common referent”.

Conceivable to you. That discussion certainly wasn’t as conclusive as you seem to think. Dennett would say you suffer from a failure of imagination - there’s no way a lookup table system could pass for conscious - remember, I still say we’re dealing with a non-linear system.

Once again - task testing has nothing to do with internal statements. For every statement that makes it to memory (the surface), an unknown number of others have to be made and rejected (unconscious). We know this because of various mental tests that try and get at what happens when things get written to the brain, as outlined in CE. Multiple competing, interacting streams. Not linear.
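To make that picture concrete, here's a cartoon of the competition I'm describing, in Python. It's my sketch, not Dennett's; the candidate counts and "salience" scores are made up, purely illustrative:

```python
import random

# Cartoon of the Multiple Drafts idea: many candidate statements are
# produced in parallel, and only a few "win" the competition and get
# written into the narrative (memory). The rest are made and rejected,
# never remembered.
def drafts(stimulus, n_candidates=8, n_kept=2, seed=0):
    rng = random.Random(seed)
    candidates = [f"draft {i} about {stimulus}" for i in range(n_candidates)]
    # Competition: each draft gets an arbitrary "salience" score;
    # only the top scorers reach the narrative.
    scored = sorted(candidates, key=lambda c: rng.random(), reverse=True)
    return scored[:n_kept]

narrative = drafts("a loud noise")
print(narrative)
```

Task testing only ever sees the two survivors; the six rejected drafts existed in parallel but leave no trace in the "narrative".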

Actually, I don't think it's off topic at all. It's central to the MDM, after all, which is the model of consciousness I subscribe to and use to argue against you.

I’ll gladly admit that perhaps I’m just plain stupid. But could you please explain to me what on earth you are saying? What do you mean “in relation to”?

One synonym for “incoherent” would be “unintelligible”. I had no idea the term requires some “in relation to”. I’m genuinely confused here; I’m not being sarcastic.

If "narrative centre of gravity" is the term he uses, then I suspect there is nothing of any relevance to 'consciousness' that I would disagree with him on. I have no issue with any particular mechanistic model of the theory of mind. I am referring to any materialist model as "trivial" because the existence of such a model is, to me, trivial. The model itself may not be trivial. But the existence of some complex model that describes the human brain is a trivial statement (I am assuming that "the entirely in-brain nature of it, the complete lack of need for any dualism" etc. are "obvious". They may not be to you, but they are to me).

As to what I was referring to when I started this thread – I’m afraid I’m going to have to keep repeating myself: I’m assuming that some complex materialist model like MDM explains the measurable aspects of “consciousness”, but I have no stake in any particular model (I am admittedly ignorant of them). As far as I can tell they are irrelevant to my point. Perhaps Frylock summarized the point best:

This feels purposefully obtuse to me. Do you reject even the extreme case, in which we simply train a large enough decision tree (whether the #TB required is larger than the number of atoms in the universe is irrelevant) and have it pass the Turing test?

I still don’t see the relevance of the lookup table thought experiment. If someone mapped out every conceivable response, based on some ridiculously large number of variables, even hypothetically, what does this prove? You seem to be saying consciousness can’t exist because we function deterministically. Why are these ideas mutually exclusive?

The premise would be that the lookup table has no “subjective experience”, qualia, or anything else beyond that which is objectively measurable. If human “consciousness” can be modeled by a lookup table, then we have a clear example of how, in theory, the fact that an entity can say it is “conscious” is not evidence for anything beyond some process by which such statements are generated.
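For concreteness, here is a miniature (and obviously hypothetical) version of that lookup table in Python. The real table the thought experiment imagines would be combinatorially vast; the entries below are just placeholders I've invented:

```python
# Miniature lookup-table "zombie": every conversation history maps to a
# canned reply. No processing, no modelling, no inner life - just retrieval.
TABLE = {
    (): "Hello.",
    ("Are you conscious?",): "Yes, I am conscious.",
    ("Are you conscious?", "How do you know?"): "I experience my own thoughts.",
}

def reply(history):
    # Look up the entire conversation so far; fall back if it's unmapped.
    return TABLE.get(tuple(history), "I don't understand.")

print(reply(["Are you conscious?"]))  # "Yes, I am conscious."
```

The entity says it is conscious, yet by construction there is nothing behind the claim beyond the process that generates it - which is the point of the premise.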

Perhaps. But we exist in a world where such a lookup table is impossible, so this idea says nothing about what we actually consider consciousness to be. Maybe if we lived in a world where such lookup tables were possible, we’d actually be more sceptical about consciousness. But we don’t, so we aren’t.

Hmm… I for one have a lot of respect for the good ol' fashioned gedankenexperiment… also I don't think it's obvious that such a lookup table is impossible in theory in our universe.

Brains are computers. They are made of molecules, which are made of atoms, which are made of particles… Just like computers. If they are not computers, they contain an unknown physical property, which has never been detected.

Searle is wrong. The man in the Chinese room doesn't have to understand any more than any brain cell, or molecule that cell is made of, or atom that molecule… The means that the Chinese Room uses is irrelevant. If there is no discernible difference between a human and the room, they are equivalent in terms of consciousness. Searle implies the existence of something unknown that separates a human brain from the Chinese Room, but cannot define what it is. That's because there is no evidence of the existence of such a thing.

Given sufficient resources, a computer can model and emulate a human brain, doing anything the brain can do, and if the brain can achieve consciousness, so can the computer.

Computers are conscious now. They can read input, make decisions based on the input, and produce output that relates to the input. They can use algorithms, or random events, to vary the means and the results. They don't have the range of inputs, memory density, pattern-matching capability, or other aspects of human brains yet, but they will be there some day.

Back to the OP, consciousness is not an illusion. The illusion is the unknown, undefined, undetected difference between a brain and a machine.

Sorry. Since this is, in essence, a philosophical discussion, I assumed you were using “incoherent” in a more technical sense. If you mean “unintelligible”, well, I don’t think we’d get much more out of the debate since the word “consciousness” clearly isn’t that. Perhaps you mean “inconsistent”?

There are whole libraries of philosophy texts that make it clear it's not so obvious to most people. Just passing over the argument as "it's obvious" doesn't give you licence to trivialize it.

So it's *qualia* you have a problem with, not consciousness? It's entirely possible to talk meaningfully about consciousness *as a process* without any reification or whatever it is that you're objecting to (I'm still not clear what that is - you say you have no problem with a materialistic model of consciousness, but still seem to have a problem with the concept itself?)

Yes, I reject even the extreme case. The Turing test will never be passed by just a decision tree alone. Consciousness is more than computation.

Marginally-relevant comic

I mean “unintelligible” (inconsistent also applies). You keep ignoring the fact that “consciousness” is just a word, and so therefore can clearly take on intelligible meanings. It sounds like the meaning you are attributing to the word is intelligible. However, as far as I can tell, others are not.

I also think it is obvious (as do many) that theism is nonsense. Nonetheless there are whole libraries of religious texts. In any case my goal with the use of the word “trivial” was not so much as to “trivialize” anything, but to make a useful distinction that I am still not sure you understand.

I'm not sure, but I don't think so. It would actually be really helpful for you to reply to that post (I'm sorry if you did – I don't remember you doing so).

Weird. I guess you take this as an axiom? I disagree with it, but sounds like we can’t go much further there.

<just joined discussion>

I don’t understand why the notion of consciousness makes some people so queasy (ironically). It’s not like neurologists are saying that consciousness is supernatural, it’s that they are saying a complete model of what consciousness is eludes us at the moment.
This should be exciting, but some are not happy with a phenomenon that cannot be described reductively yet.

And note that there cannot be an illusion of consciousness, since it’s subjective in the first place. If there’s an illusion, then that illusion is exactly as hard to explain as the “reality” of consciousness would be.

Also I note that there has been much confusion between determinism and fatalism within this thread. One difference being, in determinism your thought processes are not irrelevant; they’re how you make your decision.
No-one could predict what you were going to do without simulating your brain in its entirety*.

If I write a program that prints “ouch!” every time I tap a sensor, does it feel pain? Is touching the sensor now significantly different to, say, kicking a pile of dirt?
How exactly would I go about writing a program that feels pain, and how will I know when I’ve succeeded?
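The program in question really is about as short as programs get. A sketch, exactly as described above:

```python
# The trivial "pain" program from the question, spelled out: it reports
# pain on every sensor event, with no internal state and nothing
# resembling nociception behind the report.
def sensor_tapped():
    return "ouch!"

# Tapping the (simulated) sensor three times:
for _ in range(3):
    print(sensor_tapped())  # prints "ouch!" each time
```

Which is what makes the question bite: nothing in the code distinguishes "reporting pain" from "feeling pain", and it's not obvious what addition would.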


  • Or possibly seeing the future. Though for the purpose of this argument, that is still “simulating” your brain; it’s executing your brain program once, in a test run, then going back and delivering the results.

We are so far off topic.