More on Consciousness-- can we even say what it is?

Last night I had a troubling discussion which is merely one in a series of troubling conversations about consciousness and determinism. In my “normal,” everyday life I just sort of plod along, assuming consciousness exists and not really giving it much thought. But whenever stray thoughts lead me down the path to examine consciousness I am invariably struck by the sense that consciousness cannot be rigorously defined because it does not truly exist.

I end up taking a “strong AI” view of consciousness, where what we call consciousness is really just a super-complicated and perhaps sloppy algorithm, or set of algorithms.

What’s worse, really, is that the presence of algorithmic behavior also leads me down a path of some form of determinism… though not in the strict sense of the word, merely that there isn’t free will. It is almost like free will, to me, implies consciousness, while lack of free will implies algorithms. Thus I cannot even assume that free will exists to answer the question of “What is consciousness?” because I feel the idea of free will is itself a container for the idea of consciousness (and so I would have a tautology).

Personally, I couldn’t care less whether I have free will or am conscious or not-- I feel the results are the same and the only difference is perspective-- but a friend of mine (the other half of the conversation) was sinking into some serious depression over the thought.

So I want to try to define consciousness in a somewhat formal sense. I know, entire books have been written on the subject-- I’ve read two (The Emperor’s New Mind and GEB)-- but it is really a topic that deserves some solid treatment on the SDMB, where morality itself comes up a lot. As much time as we spend discussing morals, we spend next to no time discussing anything more abstract… is it really so unimportant?

Hell, we all assume here (for the most part) that we are conscious. What the hell do we mean by that?

One interesting idea that Hofstadter mentioned was a sort of universal induction principle. We see a stack of turtles and say “the stack never ends,” even though we don’t look and look and look… stuck in a loop. The same was said of recursive definitions and so on… we don’t get stuck in infinite recursion, even though it seems-- by all rights-- that infinite recursion is present. What “knows” when to quit?-- Consciousness.
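To make that concrete-- a toy sketch of my own, not anything from GEB itself-- here is the difference between a recursive definition with a base case, which quits on its own, and one a blind rule-follower would chase forever:

```python
def grounded(n):
    # a base case tells the recursion when to quit
    return 1 if n == 0 else grounded(n - 1)

def turtles(n):
    # no base case: turtles all the way down
    return turtles(n)

print(grounded(5))       # terminates and prints 1
try:
    turtles(5)           # a mechanical evaluator never concludes "it never ends"
except RecursionError:
    print("the evaluator just ran out of stack")
```

We glance at turtles and say “that never ends” without ever running it; the evaluator has to crash to find out.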

Hmmm, I said.

Penrose had a different view. As I recall, he mentioned the idea of non-computable systems to show that consciousness couldn’t be completely algorithmic, since there would be so many situations where we simply couldn’t act (he didn’t strongly argue this position, but he did mention it as a counter to the AI perspective). I disagree in some manner, but can only demonstrate why I disagree by analogy and not by design. He later mentions his idea of a quantum probability waveform collapser active in the brain which operates on (well, technically, reveals quantum information through interaction and participation in an event) the very nature of reality, keeping all events “larger” than the one-graviton level in the seemingly classical order we see around us. (My recollection of this theory is a bit weak, and the book is not accessible right now, so any corrections by readers who recall better or differently would be appreciated.) Thus, consciousness was a partially-algorithmic quantum probability waveform collapser.

HMMMmmmm, I said.

But, I sit here still. There are some terribly interesting ideas there, but can some dopers offer more insight of their own, or from other sources? Can we come up with an idea of consciousness that doesn’t need to assume that free will exists (that is, it can be contained in the idea but not used as proof of the idea, so to speak)? Or, quite simply, is consciousness merely the physical manifestation of our idea of free will, and that the very act of not assuming free will exists leads to a construct that cannot contain it[free will] in hindsight or as a conclusion?

sigh I hope this isn’t too unwieldy, and I know we’ve been over it before (and I’ve participated) but it is a topic that consumes much of my mental time, and is somewhat important to me. Apart from that, the WTC threads are starting to wear on me like election-time threads. Thanks in advance for participation.

I rarely post in GQ, but this is one I’m interested in.

I’m not sure that a lack of free will would mean a lack of consciousness. It seems perfectly possible to me that determinism is true and that we could also be conscious of this fact. I think what you perhaps mean is that in order for us to have free will, we must be conscious. Could you clarify for me what you mean by the idea that consciousness is algorithmic?

The main problem I see with this is that it might be an explanation of what consciousness does (although I don’t think it is), but it is not an explanation or definition of what it is. This is an important distinction-- are we trying to define what it is that consciousness does, what purpose it serves, or are we trying to pin down what it is in essence? To use the fact that we can somehow rely on inductive reasoning, despite all the problems in logic with it, as an explanation of consciousness does not do either the problem of consciousness or the problem of induction justice.

It did the who with the what in the where now? I’m afraid I don’t follow you here. Are you saying that consciousness is a mechanism that interprets the world around us into a kind of mental-realm thing?

I think that the notion that consciousness is inextricably linked with free will is a separate issue from the defining of consciousness itself. They may be co-dependent ideas but unless they are identical, there is something about consciousness that is not to do with free will. I think it’s this part that’s the important part if we are to have any grasp of what it is in itself. Leaving aside the issue of the possibility of consciousness without free will, I turn to Thomas Nagel and his essay “What is it like to be a bat?” for my preferred starting point for a definition of consciousness. The essay is mainly concerned with the link between consciousness and the mind/body problem, but it does offer an interesting definition: to have consciousness is to have something that it is like. To quote:

For example, since a brick does not have consciousness, there is nothing that it is like to be a brick. On the other hand, supposing that bats do have conscious experiences, there is something that it is like to be a bat. If there is something that it is like to be a certain creature, then there may also be something that it is like, for that creature, to be in some specific state. E.g., there is something that it is like to see a bottle of red wine on a white tablecloth, to feel an itch, etc. These conscious experiences have a subjective phenomenal character.

We lack the conceptual resources to even frame the correct hypothesis of what it might be like to be a bat because of the subjective nature of a bat’s conscious experience: due to the limitations of our own point of view, it is beyond the limits of our imagination. This is because experience furnishes the raw materials of imagination, and we have no experience of what it is like to be a bat. We do, however, have the conceptual resources to frame hypotheses about the subjective character of the experience of another human being-- the accessibility or inaccessibility of facts about the subjective nature of another creature is quite sensitive to our point of view.

So there is a contrast between facts about the subjective character of experience and facts about physics or neurophysiology. The accessibility of physical or neurophysiological facts, however, is not especially sensitive to our point of view. Martians or intelligent bats could learn more about the human brain than we ever will. So phenomenological facts seem to have a property that physical facts lack-- that is to say, the subjective character of experience eludes a physicalist theory of the world. (Hence this argument is actually called The Argument For Elusiveness.)

I’m getting off-track. I think what I’m trying to say is that consciousness is a subjective thing and thus an AI explanation doesn’t quite cut it. There is nothing that it is like to be a computer. Having said that, I can provide no clearer definition-- I’m as much in the dark as anyone else as to what consciousness might be in and of itself.

I shall go away and ponder the link between free will and consciousness and perhaps be able to address that question more fully later, rather than rambling as I have done. It’s interesting stuff.

Fran

Of course, I meant GD. Carry on.

“I used to think my brain was the most important organ in my body. That is until I realised who made me think that”

Ha! We lack the resources to even know what it is like to be another human, much less a bat, or other organism. Take a look at the thread “I see green. Do you see the same green?” in GQ if you don’t agree.

I’ve never really understood the consciousness debate. Why is it so hard for some people to realize that they exist, and in realizing so, define the conscious moment? erislover says that consciousness cannot be rigorously defined, but we define it to ourselves every time we are aware of our own existence.

I’d just like to point out that there is such a thing as a non-deterministic algorithm, and therefore the existence of brain algorithms needn’t imply determinism. Whether this leads to what’s traditionally understood as free will is another question.

Thanks for your thoughts, Fran, but I’m going to chew on them a bit before I respond in kind. However, some immediate questions:

I do indeed mean that consciousness is implied by free will… almost that it is part of the definition, and so to presume the existence of free will sort of cheats me out of ever having to deal with consciousness. “Of course consciousness exists: we have free will, don’t we?”

Algorithmic consciousness is simply that: what we would call “consciousness” isn’t really anything more special than a series of formulaic responses to given inputs. It is an insanely complicated software program, for example, or a complicated hardware configuration, or a combination of the two. In the end, however, there is nothing about consciousness that cannot be broken down into an algorithm (so it would go by this explanation). We are, to put it very simply, machines in every way, and consciousness is no different. Note that the software/hardware issues are merely analogies used to explain the general case of applied algorithms, and they are not meant to be any more meaningful than that.
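For a caricature of what I mean-- a hedged sketch with invented states and stimuli, not a claim about how brains actually work-- behavior here is nothing but a lookup from (state, input) to (response, new state):

```python
# Invented states and stimuli, purely illustrative.
RULES = {
    ("calm", "insult"):      ("frown", "annoyed"),
    ("calm", "greeting"):    ("smile", "calm"),
    ("annoyed", "greeting"): ("grunt", "calm"),
}

def respond(state, stimulus):
    # a formulaic response to a given input: no magic anywhere
    return RULES.get((state, stimulus), ("blank stare", state))

state = "calm"
for stimulus in ("greeting", "insult", "greeting"):
    action, state = respond(state, stimulus)
    print(stimulus, "->", action)
```

The claim, then, is that what we call consciousness differs from this table only in scale.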

Yes, well, that may be a problem… depending on how we are trying to tackle an issue. In some ways, it makes little sense to state that what something is is in some way distinguishable from what it does, Plato’s Forms notwithstanding. It would seem to me that “what something does” is a direct consequence of “what it is” and so the two should be interchangeable and merely a matter of perspective. Do you disagree?

Another thing which complicates the issue is the barrier that “the phenomenon” provides us in perceiving what something is, or what something is doing. I can only say, from a numerical standpoint, that any errors which occur naturally in the perception of an external event/object are compounded when the event/object is the consciousness itself… that is, we have one source of error present in perception alone, then another possible source of error in the conscious integration of data… but when the data is received from the consciousness itself we apply those previous errors to themselves again, for bigger trouble.
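To put toy numbers on that compounding (the 5% figure is invented, purely for illustration):

```python
eps = 0.05                     # hypothetical relative error in one act of perception
external = (1 + eps) - 1       # perceiving an outside object: one pass of error
internal = (1 + eps) ** 2 - 1  # consciousness perceiving itself: error applied to error
print(external, internal)      # ~0.05 vs ~0.1025 -- the errors compound
```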

I am often reminded of Gödel’s First Incompleteness Theorem (no sufficiently powerful formal system can be both complete and consistent).

Ah, this assertion is one that says consciousness is an active component of reality, not a passive observer, as a somewhat direct consequence of the laws of quantum mechanics… that is, it assumes that quantum mechanics is in some sense objective, and that consciousness is one of the things involved in making reality what it actually is (that is, there is little difference between objective and subjective reality since the subjective consciousness is a player in the realization of the objective reality). Make any more sense? (rereads it: no, probably not :p)

ultrafilter, could you expand more please on the non-deterministic algorithms, or provide what you feel is a somewhat comprehensive link (unless they are terribly complicated things)?

Are you referring to non-computable systems, chaotic systems, or something else altogether?

“We define it to ourselves.” Don’t see a problem with that? A little circular, wouldn’t you agree?

Consider the universal: white. We all know what we mean by white. We can provide a somewhat rigorous definition of white. We can provide an even more rigorous definition of blue by referencing an exact wavelength.

Now consider the universal: consciousness. You say that you are conscious. You would probably say (please do) that I am conscious. You define it to yourself, yes, but it is also a quality I share. What is that quality? When you are the only conscious existent in the universe, implicitly or recursively defined things aren’t specifically problematic; however, if we are all conscious, what do we mean by that exactly (or as close to “exactly” as we can come)?

It isn’t a trivial question to me, even if it is to you. I do like to be able to explain myself to an almost arbitrary level of precision, and so many things hinge on consciousness without consciousness ever having any explicit meaning. This can only represent a logical nightmare and a definite source of possible error in all interpretations from morality to ethics to— well, everything, really.

sigh Now I am reminded of reformulating Euclid’s Elements such that “point” and “line” are explicit objects instead of intuitive concepts…

Oh, and Demo? It isn’t hard for me to “realize I’m conscious.” The problem is making sure that phrase has any meaning.

Fran again

I would disagree here somewhat strongly… it would seem to me that to be a brick would be the same thing as being a piece of drywall or a mound of dirt. If you were any of those things you wouldn’t be able to distinguish them… would you? Is this why they are not conscious? Somewhat dissatisfying…

Crap. I am NOT going to say something like “maybe rocks are conscious.” :stuck_out_tongue:

It would be more appropriate to say “randomized”, but I thought that “non-deterministic” would provide a nice contrast to what you were saying.

What I had in mind were algorithms that make random choices in order to solve a problem. Since I’ve seen you post on programming-related topics before, I’ll assume you have some familiarity with graph algorithms. There’s a paper by Karger, Klein, and Tarjan that gives a randomized algorithm for finding the minimum spanning tree of a graph. That’s a good example of what I had in mind.

The interesting thing about this algorithm is that, given the same input two different times, it’s not guaranteed to produce the same output (unless the graph has only one MST). I was hoping to draw an analogy to consciousness and say that people can do different things in the same situations. Of course, this is complicated by the issue of memory, but I was hoping to avoid that at first.
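To illustrate without the machinery of the Karger-Klein-Tarjan paper, here is a minimal sketch of the same phenomenon: plain Kruskal with random tie-breaking among equal-weight edges, which can legitimately return different minimum spanning trees for the same input graph.

```python
import random

def kruskal_random(n, edges):
    """Kruskal's algorithm with random tie-breaking: the same input
    may yield different (equally minimal) spanning trees."""
    edges = list(edges)
    random.shuffle(edges)              # randomize, then stable-sort by weight,
    edges.sort(key=lambda e: e[0])     # so ties are broken at random
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# A 4-cycle with all edge weights equal has several distinct MSTs,
# so two runs on the same graph can disagree.
g = [(1, 0, 1), (1, 1, 2), (1, 2, 3), (1, 3, 0)]
print(kruskal_random(4, g))
print(kruskal_random(4, g))
```

Same input, two runs, possibly two different (but equally valid) answers-- which is the analogy to people doing different things in the same situation.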

erislover, yes, I think we all “know what we mean” by white but I certainly don’t think we can provide a rigorous definition of it. Or, let me say, we can provide no more rigorous definition for “white” than we can of “consciousness”.

If I wanted to use a rigorous definition of consciousness to say that you were conscious, I couldn’t. I have no concrete understanding of another organism’s consciousness.

Truly, I don’t find that we can understand others’ consciousness more than we can understand how another perceives a certain color, hence my link to that thread above.

Now, that said, it doesn’t mean that we can’t know, with a fair amount of certainty, that others possess consciousness. What it does mean is that we must rely on an understanding that is as weak as our understanding of how others perceive a certain color, or a certain scent.

Also, I don’t think defining consciousness to ourselves is circular at all. For example: one has a discrete event, a moment where one realizes one exists. That moment of realization serves as a definition of consciousness. I think it is very real and definitive.

One last thing… :wink:

Well, this is a classic case of where I just don’t understand the dilemma. Are you conscious? Yes? Then that phrase has meaning!

Sometimes it seems like people try to make it harder than it is…

Define “rigorous definition”.

(I’m only partially joking. What exactly are you looking for? A mathematical description of consciousness? What?)

For what it’s worth, I tend to agree with GEB that consciousness is just a form of self-reference within a symbol system. The more levels of nested, symbol-based self-reference there are, the more consciousness exists.

Thank you for explaining algorithmic consciousness. That makes more sense now.

Hm. This depends on whether we’re talking about consciousness as a thing in and of itself rather than a collective name for a group of concepts such as free will. I think I can tell you what something does without telling you what it is, although possibly this is a slide into semantics. For example, I can tell you that there is something that gets corks out of bottles. Is this what a corkscrew is, or is it what it does? Because I could argue that a corkscrew is wood/plastic and metal put together in a certain manner to form a certain shape. This could be an adequate explanation of what a corkscrew is in and of itself, without referring to its function at all. But I think this is a digression and I concede your point that when discussing abstract concepts the two are interchangeable.

Yes that does make more sense although I, like you, say “hmmmmmm”. I think that brings consciousness too far out of the realm of the abstract and into some kind of physical/metaphysical reality where I don’t think it belongs.

Demo

A consciousness of consciousness? I agree that this “I exist” moment can occur, but to define this moment as consciousness itself is problematic in that we are not continuously having this moment, and yet we are continually (more or less) conscious. Or if you mean that we ought to carry this moment of self-awareness in our memory as a definition of consciousness then we get into all kinds of trouble with the reliability of subjective memory.

I think erislover is looking for a more rigorous definition than us simply “knowing what we mean” when we refer to it, hence the debate. And what Demo is saying is that there is no more rigorous definition. I disagree because I think we can provide a fuller account of the nature of consciousness, but what goes into this account is debatable.

Fran

Well, I certainly am not looking for an intensely logical definition, but one that would stand up to a bit more scrutiny than a simple assertion or an implicit definition.

Consciousness doesn’t need to be rigorously defined, period, but it does need to be more rigorously defined than it currently is. Demo has defined it: what I am. Certainly not a lot of meaning there without being Demo.

Which serves little purpose, IMO, as far as communication goes.

White: (see, light) anything that would provide for the (partial or complete) reflection of all colors of light, where each color is defined by its wavelength (within some tolerance) or a combination of wavelengths, and the resulting reflected components of light have not lost sufficient magnitude.
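(That definition is even precise enough to mechanize. A hedged sketch-- the 10 nm sampling step and the 0.8 tolerance are invented for illustration:)

```python
def is_white(reflectance, lo=380, hi=700, step=10, threshold=0.8):
    # "white" per the definition above: every visible wavelength (in nm)
    # is reflected without losing too much magnitude
    return all(reflectance(nm) >= threshold for nm in range(lo, hi + 1, step))

print(is_white(lambda nm: 0.95))                       # reflects everything: white
print(is_white(lambda nm: 0.95 if nm > 480 else 0.1))  # swallows the blues: not white
```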

Can you define consciousness to even that level of precision?

You seem to back out of the discussion by positing that “white” and “consciousness”-- and likely all universals-- cannot be defined. You say:

Yes, the color thread is interesting (was it the one started by Scylla? in that one is a link to another interesting thread along the same lines), but all that accomplishes is saying that we don’t perceive similarly. That doesn’t mean we aren’t perceiving the same things, eh?

Consider a room with only a chair. You walk in, observe the chair, then a little while later explain to Dr. X that you saw a chair, and you describe it. I happened to walk into that same room shortly after you did, and I also happened to meet Dr. X later, and I also happen to describe the chair to him.

Now, from descriptions alone Dr. X does not automatically conclude that both you and I were referencing the same chair. I mentioned its color much more vividly, for example, while you paid attention to its physical shape. As far as Dr. X is concerned, he just heard about two different chairs from two wacko fetishists. heh

By your logic, because we perceived the chair differently, Dr. X cannot understand that we have seen the same chair, and he cannot understand that chairs even have a useful definition until he looks at the chair, and even then he still cannot say that we have seen the same chair. In fact, should you and I meet in that room with Dr. X we couldn’t convince ourselves of it either (after all, we don’t share perceptions). Do you find this interpretation to be true?

I wouldn’t expect that we could always create a strictly rigorous and highly formalized definition of abstractions and universals, but I would expect that we could-- at least-- define them in some way that references some independent concepts.

ultrafilter
Interesting… I have my own “ideas” about how consciousness operates (assuming it exists :p) and that includes a weighted randomness, as a result of or analogous to quantum probabilities.

I’m not really interested in strict determinism; I only offered it because I lack a term for “no free will.” I do not believe in fate, but I am unsure about free will as well, since (to me) free will requires consciousness and we can’t even (seemingly) define that.

BlackKnight, do you feel that any recursive definition or action is conscious, then?-- For example, a program which finds all the primes between 2 and some positive integer n? If this is the case, is there any particular reason to use the word “consciousness” instead of “algorithm?” (to you, of course)
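(For concreteness, a minimal recursive sketch of the sort of program I mean-- note that the base case, not any glimmer of awareness, is what “knows when to quit”:)

```python
def primes_between(lo, hi):
    # recursively collect the primes in [lo, hi] by trial division
    if lo > hi:
        return []             # base case: nothing left to examine
    rest = primes_between(lo + 1, hi)
    is_prime = all(lo % d != 0 for d in range(2, int(lo ** 0.5) + 1))
    return ([lo] + rest) if is_prime else rest

print(primes_between(2, 30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```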

Fran

:slight_smile: Exactly! :frowning: Almost… I think that free will must contain consciousness, but that consciousness doesn’t have to be inside of free will… one way implication. But it would seem that consciousness is possibly NOT a thing in itself but merely what Eris is to me: a mental and linguistic shortcut to describe a whole assortment of seemingly disparate ideas.

A problem with such a constructionist perspective, however, is that all sorts of things become conscious. We have created so many machines that in fact do the same things we do: they perceive, they manipulate data, they share data… etc. Does it become a bald-man problem here? It would seem so.

So does this lead us to believe that consciousness is a thing in itself?

Ugh-- that’s about as good a definition as we have for consciousness! haha. :slight_smile: Perhaps if you referenced the shape more… the more specific you are in stating what something is, the easier it seems to be to reason about what something does.

And it could indeed go both ways, even given strictly mechanical operations as a guide. We would know, for instance, the necessary strength requirements of the components, the size they would need to be (for example, a corkscrew must be a certain size to be able to pull out a cork)… as both definitions become more precise it seems that the distinguishing characteristics of them disappear.

God, imagine how long unabridged dictionaries would be then!

Fran:

Yes, I am saying that we carry that moment (definition) with us. Tell me, what kind of definition is not subject to the trouble associated with subjective memory? How else can we define an abstract thing?

Hey, that’s not fair! You can’t disagree and then say you don’t know what it is! Bring it on! :stuck_out_tongue:

erislover said:

But in the OP you said:

So, what you want is a definition somewhere between logical and rigorous? :wink: I don’t know. I fear I’m not understanding what you’re getting at.

In your reply to me, you gave a textbook definition of white and then:

Of course I can: Consciousness is a sense of one’s personal or collective identity, including the attitudes, beliefs, and sensitivities held by or considered characteristic of an individual. Furthermore, it is the state of being characterized by sensation, emotion, volition, and thought. (Cannibalized from a few dictionaries. ;))

What’s missing?

How is that backing out? You yourself say:

But in the OP you asked for a rigorous definition of an abstraction: Consciousness!

As for the Dr. X-chair analogy, yes, I do find that interpretation to be true if you want to define the chair using a rigorous definition.

Frankly, erislover, I don’t think you really know what you are looking for. You start by asking for a concrete answer to an abstract question, then backpedal and ask for a definition “that references some independent concepts”, all the while spewing rhetoric and thinly veiled condescension. Did you post this because you were honestly seeking discussion? I don’t see where you’re going with this.

Maybe you should have posted your question in GQ…

“Consciousness is a sense of one’s personal or collective identity, including the attitudes, beliefs, and sensitivities held by or considered characteristic of an individual. Furthermore, it is the state of being characterized by sensation, emotion, volition, and thought.”

This seems circular to me. What is doing the “sensing”? Consciousness. What is sensing “the state of being”? Consciousness.

Well, sheesh Demo, I can look things up in dictionaries :wink:

Well, I’d like to have a definition which isn’t formal but is still definitive. You go on to say,

This is merely pushing the responsibility for understanding an abstract concept onto other abstract concepts.

What do I want? A definition I can work with. It doesn’t need to be final and absolute, but it needs to be sufficiently complete to allow using it to derive concepts that depend on it. To say it encompasses collective identity or sensation is starting down the right path. Would you consider a computer conscious? It can reference itself (by, say, flashing its own BIOS). It can sense magnetic variations in a hard disk and ridges in an optical disk.
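In that spirit, a toy sketch of mechanical self-reference (save it to a file and run it, since it reads its own source):

```python
import hashlib

# mechanical self-reference: the program inspects the very file it is
with open(__file__, "rb") as f:
    source = f.read()

print("I am %d bytes; my fingerprint is %s"
      % (len(source), hashlib.sha256(source).hexdigest()[:12]))
```

It references itself flawlessly; is that a speck of consciousness?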

Emotion? I would prefer to derive emotion as a consequence of consciousness, not included in it. Do you feel it is impossible to do so?

Am I being any more clear? It’s a tough concept to begin with, and explaining what I am expecting when I am not expecting a strictly formal definition is, admittedly, a bit of a quagmire.

Yes, I am asking for a more rigorous definition, but I’m not looking to be a Bertrand Russell wannabe and write “Principia Sum” or something.

Surely you see there isn’t a simple dichotomy here… there is a range of thoroughness with which we may define things, including abstract concepts.

I am looking for a more thorough definition of consciousness which can possibly be used to derive some things that seem to follow as a direct consequence from consciousness: things like emotion, an ability to not fall into infinite loops, and sensory integration. Do you feel such a definition exists?

I, for one, don’t.

Or rather, if such a definition is made, whether or not it has any correlation to reality is untestable, therefore non-falsifiable, therefore angels on pins.

The problem here is, consciousness may well be solely and only an epiphenomenon–in which case, nothing is a direct result of it. Action isn’t, as will is just an illusion–an interior explanation made up after the fact. Thoughts and emotions? The experiencing of them happens after the fact of brain activity–an effect and not a cause.

Nothing a result of it, as consciousness itself would be merely a disconnected effect, a dangling thread.

Do I believe in that view? Not particularly. I have different dances I like to believe the pin-angels are doing; instead of a somber waltz, rather more similar to that insane (and I suspect horribly translated) “Taoist rap” sequence in A Chinese Ghost Story. Which is to say, as far as I can tell it comes out of nowhere, what sense it appears to make is a very fleeting thing, but damn if it wasn’t entertaining.

I said earlier, “… consciousness is just a form of self-reference within a symbol system.”

I should have said, “… consciousness is just a form of self-reference within a physical symbol system.”

Therefore, I don’t believe definitions or actions of any kind are conscious. Actions are not physical, although what causes the action may be.

By the normal meaning of consciousness, no.
By the definition I gave, yes, slightly.

I say “consciousness” instead of “algorithm” for the same reason I don’t say I’m following an algorithm when I bake a cake even though that would be accurate.

I will clarify my opinion on this matter (and it is an opinion; I don’t think anyone can claim very accurate knowledge of such things). Let’s say a physical symbol system (PSS) exists that performs a simple task when “run”. (This could be a program on a computer, or something else entirely.) It does not contain any self-reference. By my definition of consciousness, this PSS is not conscious. I suppose you could say it has a consciousness level of 0. Now let’s consider a different PSS, one that contains “rules” which can change the PSS. I would say this has a consciousness level of 1. Now imagine a PSS that has not only rules for modifying itself, but rules that govern how to change the rules for modifying itself. This has a consciousness level of 2. And so on.
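A hedged sketch of those levels (toy code, not a real PSS; the “learn:” syntax is invented): level 0 is a fixed rule table, and level 1 adds a “rule” that rewrites the table itself. Level 2 would add rules that rewrite the rewriting rule, and so on up.

```python
rules = {"ping": "pong"}             # level 0: a fixed stimulus -> response table

def step(rules, symbol):
    # level 1: one of the "rules" modifies the rule table itself
    if symbol.startswith("learn:"):
        key, _, value = symbol[6:].partition("=")
        rules[key] = value
        return "(rules updated)"
    return rules.get(symbol, "?")

print(step(rules, "ping"))           # pong
print(step(rules, "learn:hi=hello")) # (rules updated)
print(step(rules, "hi"))             # hello
```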

Lord knows what level of consciousness the human brain is at. I would guess it is much, much greater than 2. :slight_smile:

Definitions are definitions: they aren’t false or true.

a = 1: is this true or false? Is it falsifiable? :wink:

I want a definition that I can use in order to draw some conclusions, but each time I strive to find such a definition I am stuck in a mode where consciousness is nothing special; just another algorithm.

BlackKnight, I am not sure what a physical symbol system is as opposed to a regular symbol system.

Definitions are neither false nor true when they’re treated in isolation. In reference to reality, they most certainly can be.

If you’re trying to draw a conclusion that hinges on solving the equation a + 20 = x, where x will be a number that has real world importance and is known or can be tested, it most certainly is. If x has nothing to do with the world, then nope, it’s not.

So, what “conclusions” do you want to draw from one? What conclusions do you expect to draw from it? If it’s only conclusions that aren’t testable, you can pick any definition you like. If you want conclusions that are testable, that’s a different story.