What Is Consciousness?

HMHW,

“But if that’s true then it either means that the rock is conscious when looked at in a particular way—which is a strange conclusion:” HMHW

‘Looked at in a particular way’ is the essence of your argument. You have presented a set of definitions that create consciousness in a rock.

Please explain for us simple engineering folks (this simple engineering folk) what you mean by mapping and what states you are observing. In my example above, I monitor the states of the address and data lines of a functioning processor. I create a table of addresses and data codes. I ‘map’ the binary instruction codes to program syntax. I then attempt to order and understand the functioning of the physical computational machine. You say this is not related to your process.
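To be concrete, here is a minimal sketch of the kind of table-driven decoding I mean (written in Python; the little opcode table is roughly 6502-style but purely illustrative, not tied to any particular processor):

# Hypothetical bus trace: (address, data) pairs captured from the
# address and data lines of a running processor.
bus_trace = [(0x0200, 0xA9), (0x0201, 0x42), (0x0202, 0x8D)]

# My 'map': a table from binary instruction codes to program syntax.
# (Illustrative entries only; a real table covers the whole instruction set.)
opcode_table = {
    0xA9: "LDA #",   # load accumulator, immediate operand follows
    0x8D: "STA",     # store accumulator, absolute address follows
}

for address, data in bus_trace:
    mnemonic = opcode_table.get(data, "operand/data byte")
    print(f"{address:04X}: {data:02X}  {mnemonic}")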

OK, then what are you doing and how does it not add information to the process?

Crane

I don’t think we did agree on that in the earlier discussion. I did say that the closest thing I could imagine to a philosophical zombie of this kind would be a human-like being that was simulated by a highly competent AI or other entity of some sort that could model all the processes required to create fully convincing human-like responses to the environment. A highly competent entity of this kind need not be conscious or self-aware in any way that we could recognise, but nevertheless it could simulate a human with true consciousness.

If you were to talk to one of these efficiently simulated humans you would find that they believed sincerely that they were conscious, and behaved exactly like they were conscious; the only difference would be a wholly undetectable quality which we would have to label ‘true consciousness’, which seems to be entirely illusory.

After all, the Universe itself seems to be able to create fully conscious beings without being conscious itself; I’m not suggesting that the Universe is necessarily a simulation, but there is no compelling reason to suggest that it is not.

Also, a physical system that is capable of consciousness need not always be conscious; for example, humans are not conscious when anesthetized, in deep sleep, or in a coma. Consciousness is only one possible state of the system.

Belief may be the boundary of consciousness. An automaton is not conscious because it claims to be, but if it is capable of belief, it is conscious.

Crane

HMHW,

Consciousness is an attribute of human beings that is a measurable physical phenomenon. The EEG patterns associated with consciousness are known and quantified. Consciousness in humans is known and measurable in the same sense as electricity. We can measure and control electricity without fully understanding its underlying nature.

All known physical systems exhibit predictable EEG patterns when in the conscious state
All rocks are systems that (do not exhibit EEG patterns)
.:Therefore all rocks are systems that (are not in the conscious state)

Of course it may be that we have yet to observe the correct rock, but that was not a requirement of your hypothesis.

And, if your hypothesis holds, then:

All physical systems can be mapped to a conscious state
All dead humans are physical systems
.:Therefore all dead humans are in a conscious state

Crane

HMHW,

The Putnam/Searle hypothesis yields a false result (dead humans are conscious), so it must be based on a false premise. It appears that your premises take the form of maps, so one of your maps must be faulty. Your conclusion takes the form of the ‘All Crows are Blackbirds’ fallacy, so there must be a map that includes ‘all’ physical systems where it should only include ‘some’ physical systems.

Putnam’s rule, which I believe you have restated, is that the observed states must be such that for each state ‘p’ there is a necessary consequent state ‘q’. This is the case for all deterministic computers, like a PC.

Deterministic computers are capable of running stochastic software, which creates (maps to) a stochastic system. In stochastic systems ‘q’ is not an invariable consequent of any state ‘p’. Stochastic systems are therefore a class of physical system that falls outside the Putnam/Searle definition. Systems that appear conscious are stochastic, so they are not included in the Putnam/Searle definition. That’s the mapping error that results in the false conclusion. The table assumes ‘all’ where it only contains ‘some’.
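As a minimal sketch of the distinction I have in mind (my own toy illustration, not Putnam’s formalism): in the deterministic case the consequent ‘q’ of each ‘p’ is a fixed table entry, while in the stochastic case it is drawn at random, so no such table can be written down in advance.

import random

# Deterministic: each state 'p' has exactly one necessary consequent 'q'.
deterministic_next = {"p1": "q1", "p2": "q2"}

def step_deterministic(p):
    return deterministic_next[p]      # the same 'q' every time

# Stochastic: the consequent of 'p' is drawn at random on each step,
# so there is no fixed 'q' to put in the table.
def step_stochastic(p):
    return random.choice(["q1", "q2", "q3"])

print(step_deterministic("p1"))   # always 'q1'
print(step_stochastic("p1"))      # varies from run to run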

Examples of stochastic systems:

One of my UAVs (model airplane) decided to make an unscheduled turn and head for Mongolia. It continued until it crossed a deterministic barrier and returned.

Bosch demonstrated a fully automatic car that used computer vision to drive on the autobahn. It had an unexplained attraction to yellow cars and an unexplained avoidance of blue cars. It went along fine until it ‘saw’ a straight, snow covered path that it preferred to the highway and turned onto it.

A characteristic of stochastic systems is that they make mistakes (take non-deterministic actions).

Crane

Is it possible to prove consciousness exists without first being conscious yourself? Can the experimenter truly remove themselves from the experiment? Or as the Eagles said, someone show me how to tell the dancer from the dance…

Can someone perhaps dumb down the “rock” argument so I can comprehend it?

To my understanding, rocks don’t “process” anything. They don’t do anything. (Well, okay, they slowly erode.) How does a rock emulate a binary counter, say, a simple automaton that counts from 0 to 111111111 in binary and then starts over? I don’t see any possible way a rock can do this.

Perhaps consciousness is an illusion. Kind of like time. Time only means something to human beings. For the rest of the natural world, things just happen. They aren’t broken down into quantifiable bits on the space time continuum. Maybe that’s how it is with consciousness, and the confusion comes from trying to quantify it.

I think some animals have a comprehension of time that is somewhat similar to ours.

Cats certainly have a VERY strong notion of when it’s time for them to be fed!

If consciousness is an illusion, who is being fooled?

<disclaimer: I stole that line from someone earlier in this thread>

Trinopus,

Yeah - anytime.

Crane

I believe we need to make a distinction between the state of awareness that allows systems to be aware of, and interact with, the environment (hungry cats) and systems that further manipulate sensory data in a delusional manner (Jesus on a Cheeto, Conscious rocks).

Crane

I believe for a cat, as for all animals, the time is always Now. Cats are just very well equipped to read their master’s—er—slave’s cues. And of course animals are aware of the length of the day more or less. That’s how birds know when to fly south. But 7:36 am for example means nothing to them.

I’ve been thinking the same since I started to follow this thread, but I also feel the discussion could use a distinction between “regular” consciousness and “self” consciousness. And perhaps that phrase “free will” will pop up again…:cool:
Probably dating myself, but Frank Zappa’s song from the seventies, “Help, I’m a Rock”, always made me think of this subject…it has minimal lyrics (it mostly repeats the song title), but a really unusual time structure…

There are some folks who feel that matter itself has consciousness; if so, there would be a LOT of ghosts in the machine running around…or just one BIG ghost!! I’m agnostic on most everything, for the record…

Jupe

Originally, I didn’t think I could be any more clear about what I mean than I was in my reply to eburacum45. However, Trinopus proposed a good example:

I’ll use this to walk through the argument, as explicitly as possible, being as clear as I can about any mappings involved, and so on. I’ll modify the example to an automaton that counts from 0 to 11 for simplicity—otherwise, my already quite verbose posting style might well hit some kind of singular collapse situation. But of course, no generality is lost in doing so.

So we have an automaton that has three states: A, B, and S. I will leave out details about the transition rules, and instead, just focus on the evolution of the automaton, by which I mean the sequence of states it cycles through.

At any one time, the automaton is in one of its available states, A, B, or S. Any possible computation is a sequence of states it cycles through (according to its transition rules), which we can represent by a string of the letters we’ve assigned to its states, as in {A,A,S,A,B,S} or something like that. This assigning of letters to states is itself already a kind of mapping between the physical states of the automaton and letters of the alphabet—one might imagine, as a physical realization of the automaton, a lamp that can flash in any of three colors, say red, white, and blue. In a modern PC, for example, these physical states would be the content of the working memory at any point in time, the pattern of bits set and not set, and quite obviously, we’d need a bit of a bigger table—but the principle is the same. In a brain, this state might be the pattern of neurons firing (though things are probably a bit more complex in reality).

So our first mapping is:



f[sub]1[/sub]: colors --> states

Color of Lamp | State
      blue    |    A    
      white   |    B    
      red     |    S    


This is the form that all our mappings will take: they just give specific meanings (the entries on their right) to the elements on their left side.

Now, we’ll outfit the automaton with some form of semantics in order to talk about counting. So let’s say we want it to spell out each number, indicate that it’s done, then start with the next number. Since we’re in binary, we need two states for 1 and 0, and since we want to know when a number is done, we’ll assign a special code sign to indicate ‘number is over’. This gives us another mapping f[sub]2[/sub]:



f[sub]2[/sub]: states --> semantics

State | Meaning
   A  |   0    
   B  |   1    
   S  |   S    


Where the ‘S’ stands for ‘stop’. We can, if we wish, compose these two maps, to create a new map, f[sub]3[/sub] = f[sub]1[/sub] o f[sub]2[/sub] (I use the ‘o’ to stand for ‘composition’, or ‘execute first f[sub]1[/sub], then f[sub]2[/sub]’), whose table is:



f[sub]3[/sub]: colors --> semantics

Color of Lamp | Meaning
      blue    |    0    
      white   |    1    
      red     |    S    


We can either use this map in order to evaluate the automaton, or think about it in abstract terms, say, noting down its evolution in terms of A,B, and S, and then translating that to binary numbers (which of course will be translated further to decimals, since most people aren’t used to thinking in binary). This is, really, all the maps are: translations, nothing else. Without them, the automaton still does what it does, but in a strange language, so to speak.
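If it helps to see the bookkeeping spelled out, here is a minimal sketch of the same tables written as Python dictionaries (my own illustration; nothing hangs on the particular language), with the composition done by nothing more than chained lookup:

# f1: colors --> states
f1 = {"blue": "A", "white": "B", "red": "S"}

# f2: states --> semantics ('stop' marks the end of a number)
f2 = {"A": "0", "B": "1", "S": "stop"}

# f3 = 'first f1, then f2': colors --> semantics, built by pure lookup
f3 = {color: f2[state] for color, state in f1.items()}

print(f3)   # {'blue': '0', 'white': '1', 'red': 'stop'}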

OK, now what does the automaton do? Well, if it functions correctly as a binary counter, it will execute the following evolution: {A,A,S,A,B,S,B,A,S,B,B,S} (which I could have alternatively written in terms of lamp colors). This could now be translated using f[sub]2[/sub], or alternatively, we can use our knowledge that S is the stop codon, and break the whole evolution down into sub-sequences, considering a mapping on those:



f[sub]4[/sub]: sequences --> (decimal) numbers

Sequence of States | Meaning
      {A,A,S}      |    0    
      {A,B,S}      |    1    
      {B,A,S}      |    2    
      {B,B,S}      |    3


Using this mapping, now, we can understand what the counter does: it counts from 0 to 3. With the flashing light in hand, and the requisite maps, we can decode—translate—its behaviour into an intelligible computation. Exactly the same thing is done when our monitors, say, decode the state of the PC’s memory into a picture: this is the essence of computation—a physical system evolves a certain way, and that evolution, translated into something human-intelligible, constitutes a computation.
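As a sketch of that decoding step (again just an illustration of the table lookups, nothing beyond them): take the evolution, cut it at each stop sign S, and look the pieces up in f[sub]4[/sub]:

# The automaton's evolution, as a sequence of its states.
evolution = ["A","A","S","A","B","S","B","A","S","B","B","S"]

# f4: sub-sequences ending in the stop sign S --> (decimal) numbers
f4 = {
    ("A","A","S"): 0,
    ("A","B","S"): 1,
    ("B","A","S"): 2,
    ("B","B","S"): 3,
}

# Cut the evolution at each S and translate the pieces.
numbers, current = [], []
for state in evolution:
    current.append(state)
    if state == "S":                 # one number is complete
        numbers.append(f4[tuple(current)])
        current = []

print(numbers)   # [0, 1, 2, 3] -- the automaton counts from 0 to 3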

OK. Now for the rock. As a physical system, it evolves a certain way—it’s a macroscopic system, so it won’t be in the same state twice over a given period of time (shorter than its Poincaré recurrence time, about which, don’t worry). By state, here, I mean its microstate—the arrangement of molecules, or whatever other level of resolution you want to observe the rock at. Even a rock, if you go to a fine enough level, is dynamic, and we can imagine using some powerful microscope to observe these dynamics (yet we don’t need to get bogged down by quantum mechanical considerations).

So, the rock has dynamics, which means that in a certain span of time, it will traverse certain states—an alternative to the microstate picture would be, for example, to just choose a long enough span of time such that wear and tear will become observable macroscopically. Anyway. We simply slice our timespan in some convenient way, or choose our states such that the rock traverses through twelve distinguishable states, which we term creatively r[sub]1[/sub] to r[sub]12[/sub] (this again being a mapping of physical states to their names, which I will not spell out, however). The rock traverses these states in order, producing the sequence {r[sub]1[/sub],r[sub]2[/sub],…,r[sub]12[/sub]}.

Now, what we need is one last map, which associates states of the automaton with states of the rock. Here it is:



f[sub]5[/sub]: Rock States --> Automaton States

        Rock State       | Automaton State
   r1 v r2 v r4 v r8     |    A    
   r3 v r6 v r9 v r12    |    S    
   r5 v r7 v r10 v r11   |    B    


Here, ‘v’ stands for the logical ‘or’: the states of the automaton are mapped to disjunctions of states of the rock, such that if the rock is in one of the states in the first disjunction, then this is interpreted as the automaton being in the state A.

Now, with this table in hand, and the table mapping automaton states, or automaton sequences, to numbers, we only need the rock, performing its natural, physical evolution, which is then translated into counting from 0 to 3—the rock performs the same computation as the original automaton.
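Spelled out as a sketch (illustration only), the whole chain from rock states to numbers is again nothing but chained lookups:

# The rock's evolution: twelve distinguishable states, traversed in order.
rock_evolution = ["r1","r2","r3","r4","r5","r6","r7","r8","r9","r10","r11","r12"]

# f5: rock states --> automaton states (the disjunctions written out)
f5 = {
    "r1": "A", "r2": "A", "r4": "A", "r8": "A",
    "r3": "S", "r6": "S", "r9": "S", "r12": "S",
    "r5": "B", "r7": "B", "r10": "B", "r11": "B",
}

# f4: automaton sub-sequences --> (decimal) numbers, exactly as before
f4 = {("A","A","S"): 0, ("A","B","S"): 1, ("B","A","S"): 2, ("B","B","S"): 3}

# Translate the rock's evolution into automaton states...
automaton_states = [f5[r] for r in rock_evolution]

# ...and decode that exactly as we decoded the counter.
numbers, current = [], []
for state in automaton_states:
    current.append(state)
    if state == "S":
        numbers.append(f4[tuple(current)])
        current = []

print(numbers)   # [0, 1, 2, 3] -- the rock 'counts' from 0 to 3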

For those who wish to object that, basically, the answer already lies in the last map, note that this is only an artefact of the straightforward mapping between automaton states and rock states I’ve used, and the simplicity of the program being executed. In the general setting, knowing the map alone won’t suffice to know the result of the program.

The crux, at any rate, is this: just as you can translate the evolution of some automaton into a computation, using the right translation (‘map’), you can translate between the evolution of a rock, or in fact any given physical system, and any computation you want to have performed. If it’s now true that this is just a translation, then it follows that the computation must in some sense have already been ‘in’ the system—certainly, looking things up in a table should not have any physical effect on the system whose states you are looking up; at any rate, it would be strange to suppose that what computation a given system carries out should depend on the translation you use in order to understand it.

But then, the conclusion must be that if consciousness is just a computation, then every physical system implements this computation—just as it implements the computation of the square root of three, or of the digits of pi, and so on; all these can be brought out by appropriate translations.

Which brings us to:

Yes, exactly: that is what the argument is intended to accomplish. It’s a reductio ad absurdum—it leads to an absurd conclusion in order to show that one of its premises must be wrong. This is why Putnam and Searle drew up the argument. (I’m not sure why you consider the consciousness of dead humans to be more shocking than the consciousness of stones, but anyway, I’m glad we’re finally on the same page here.)

But this:

Is wrong, unfortunately. The premises of the argument are not maps; they’re used in the argument, but they’re completely innocuous. Certainly, you must demand that those maps exist, but as I showed, they’re just tables with specific entries—so as long as you allow me to draw up arbitrary-length, but finite, tables, any map in the argument must exist at least in principle (and since it’s an argument that turns on principles, that’s all it needs).

Rather, the premises of the argument are as I pointed them out to you some responses back: the computational nature of the mind, and the fact that the maps don’t interfere with the computation. So it must be either of those that are rejected; Searle and Putnam argue for the first, Chalmers for the second.

However, Chalmers’ argument can be seen to be somewhat wanting by the example I showed: he proposes that in order to implement a computation, a system must not only replicate the evolution of an automaton, but also satisfy the same transition rules. But clearly, that’s not necessary: above, we haven’t needed to talk about transition rules at all in order to get the result of the computation out of the rock, so to speak. (Regarding stochastic systems and such, it’s not a distinction that cuts any ice: stochastic and deterministic machines can compute all the same things.)

Well, but yet, there it is: there manifestly is something it is like for me to see red, for example; there wouldn’t be any such thing in the case of the zombie-AI—it would produce the same utterances as I do, but in its case, they would not refer—it would not be true if it said, ‘there is something it is like to see red for me’, whereas it is true in my case. This is a manifest, real difference, and the root of all these troubles.

You’ll probably want to object that I’m simply deceived about this, that there is not, in fact, something it is like for me to see red. But to me, that verges on the unintelligible: the first-person perspective is incorrigible by definition. If I feel that I have a headache, I have a headache—I’m not deceived about having a headache while in fact, I don’t; my feeling that I have a headache is me having a headache. Analogously, my feeling that there is something it is like for me to see red is there being something it is like for me to see red, and that’s not there for the zombie-AI (not necessarily, at least). There is no room to be wrong about this, and likewise there is no room to be wrong about the fact that there’s something it’s like for me to see red.

Or can you explain to me what it would mean to feel myself having a headache, without in fact having one?

HMHW,

People who have lost limbs still have the sensation of the limb that no longer exists.

Each of your maps adds information to the system. It is information that is not a property of the rock.

You cannot observe molecular states. Indeed they may not exist, at least not in the sense of the physical characteristics we assign to them. When asked about this, Richard Feynman said that he only imagined atoms as their behavior equation rather than some physical model.

With your maps, I can map my toenails to 747s. The absurdity does not indicate that 747s do not exist. The absurdity indicates that my premises are wrong. Your maps are the premises of your argument.

You were too quick to dismiss the stochastic issue. Putnam requires that for each observed ‘p’ there is a necessary consequent ‘q’. That is basic to a deterministic system. In fact it is basic to your construction of maps.

In a stochastic system ‘q’ is random, hence no determinism. Of course you can create some f(random) on your map, but that is just symbol fondling. In a stochastic system I can compute a function using states that are random and as such cannot be mapped because they cannot be determined.

Crane

HMHW,

The real fallacy is that the Putnam (etc) argument is presented as an either/or choice. It is not exclusive. You have simply identified a new class of conscious states:

Rocks
Dead People
Toenails

No problem. As our discussion progresses these can be considered alongside cats, bees, chimps, and live people in such things as the mirror test, and we’ll see how well they score.

Crane

In the same sense, then, information is added to any ordinary computer.

Of course you can, using, say, scanning tunneling microscopy; but of course, the argument is wholly independent of whether you can or can’t.

You can map your toenails to the functional properties of a 747, to a simulation of it; but nobody claims that a simulated 747 is the same as an actual one (while people do claim that in the case of consciousness and its simulation). You couldn’t use a simulated 747 to get to the Philippines, for example. Indeed, the argument here achieves exactly what it’s supposed to do: to show that a 747 is not identical to its functional properties.

No matter how often you repeat this, it won’t become any more true. If you want, then the ability to draw up arbitrary maps is a premise of the argument; but it’s a trivial one, as it’s quite obviously true—what should stop me from writing down arbitrary length lists of symbols?

A deterministic automaton with enough states can do anything a stochastic automaton can: at every point where randomness is inserted, you can simply iterate through all possibilities deterministically.
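A toy sketch of what I mean (my own illustration, not a formal construction): wherever the stochastic machine flips a coin, the deterministic machine simply enumerates both branches, so every outcome the stochastic machine can produce also appears in the deterministic enumeration.

import random
from itertools import product

# A toy stochastic automaton: at each step, move -1 or +1 at random.
def stochastic_run(steps):
    state = 0
    for _ in range(steps):
        state += random.choice([-1, +1])    # coin flip
    return state

# Deterministic replacement: enumerate every possible sequence of choices.
def all_possible_outcomes(steps):
    return {sum(choices) for choices in product([-1, +1], repeat=steps)}

# Whatever the stochastic machine happens to do, the deterministic
# enumeration has already covered it.
print(stochastic_run(3) in all_possible_outcomes(3))   # True, on every run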

So OK, at least now it seems that you agree the argument goes through—that’s progress, I guess. But now, consider that the program is left wholly arbitrary—that is, if there is a program simulating the experience you have right now, then that rock has that same experience—right now, and always, and so on. Of course, this doesn’t mean that the rock will pass a mirror test: it doesn’t have eyes, and thus, can’t see itself. Again, eyes are not functional properties of the system, but consciousness is claimed to be.

HMHW,

No, I agree that your hypothesis exists and can be tested alongside others.

In a stochastic system you cannot map all possible states ‘q’ as consequents of ‘p’, because there will only be one value, not all. The requirement of your map is to provide the subsequent state, and you cannot determine that value because it is random. In your example you mapped to colors, perhaps blue or red or yellow, not all possible colors. Of course, colors do not exist; they are part of the mapping process that conscious systems apply to the spectrum.

So, your mapping of an equivalent stochastic example would have to map each ‘p’ to a ‘q’ that is equal to the entire energy spectrum. In that case there is no distinction among states that result in spectral 'q’s. Your map fails to present a value of ‘q’ for a given ‘p’.

If a ‘q’ can be all possible values, then why did you limit your colors to the ones you chose? Wasn’t that the function of your map? A map that identifies sets could have as a member the spectrum, but a map that identifies colors within the spectrum or numbers within a range, has to pick one. In that case it cannot be random.

Crane

Oh yeah - the premise of your f5 map is that each of those states exists, in the sequence given.

Look, it’s an elementary result of computability theory that you can’t perform any computation with a probabilistic machine that you can’t perform with a deterministic one. As wikipedia formulates it:

(Quoted passage from the Wikipedia article “Finite-state machine”; bolding mine.)

And the states in the map f[sub]5[/sub] are defined according to the actual evolution of the rock—you simply take a given time interval of the rock’s evolution, and slice it into as many states as you need.
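A minimal sketch of that slicing (illustration only; the interval and number of slices are arbitrary here): cut the observation interval into twelve equal slices, and let ‘the rock is in state r[sub]k[/sub]’ mean ‘the time falls in the k-th slice’.

# Observe the rock over some interval and cut it into twelve equal slices;
# slice k defines state r_k. The interval endpoints are arbitrary here.
t_start, t_end, n_states = 0.0, 60.0, 12
slice_length = (t_end - t_start) / n_states

def rock_state(t):
    k = int((t - t_start) // slice_length) + 1   # which slice t falls into
    return f"r{min(k, n_states)}"                # clamp the right endpoint

print([rock_state(t) for t in (1.0, 17.5, 59.9)])   # ['r1', 'r4', 'r12']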