Consciousness

“What is consciousness?” is a very different question from “What is intelligence?”, or “What is language?”, or “What is self-awareness (in the sense of recognizing my physical image)?”, or “What is self-awareness (in the sense of noticing that I have thoughts and feelings)?”.

Dogs are clearly, obviously conscious when they are awake, but they can’t answer “yes” to the other questions. Cats are obviously con— well, it’s a little harder to tell with cats, but they are conscious. Plants are not conscious. Flies probably are not. There may not be a specific set of neurons that is the seat of consciousness, but it is still a real byproduct of the brain’s action, and you need a brain (or a sufficiently designed computer) to get it, just as you need a heart (or a sufficiently designed pump) to get your blood circulating.

A good intro about this (and many other related topics) is Steven Pinker’s “How the Mind Works”, which is about a year old or so and summarizes new psychological research in a way that is nearly as funny and smart as Cecil himself. I was very surprised that Cecil was obviously unaware of this book.

Interesting discussion all around!

  1. Mirrors.

  2. Intelligence vs. Consciousness.

  3. Consciousness vs. Awareness of Self.

  4. Complexity, Systems Theory, and Ants.

  5. Why aren’t you Cleopatra?

  6. Mountaintop existence.

  7. I’m not sure mirrors are a good test, at least not for proving the absence of Consciousness. There’s too much cultural context involved. And they’re probably not good for proving the presence of Consciousness, either. Inferring, perhaps; proving, no.

For a new(?) take on animal intelligence / culture / consciousness see: http://www.msnbc.com/news/280690.asp#BODY (caveat emptor). Among other things, it is claimed that chimps have about the same mental ability as a four year old human. Anyone want to question a four year old’s possession of consciousness? Also, I heard a snippet of a report on the radio yesterday (caveat emptor redoubled) that seemed to say British researchers have discovered that chimpanzees can identify human family relationships from photographs of the individuals. (Mother-son relationships were most easily identified.) If anyone can find a link to that report, you might want to publish it here. If true, that would seem to indicate a higher order of understanding even than, “hey, that’s me reflected in that vertical, solid pool, there.”

  1. Sander, I didn’t intend to address Intelligence rather than Consciousness. I meant to show how Consciousness can “evolve,” or derive from, complex combinations of simpler mental processes, without having to assume the existence of a separate “essence” or a dedicated organ. For example, Memory, while admittedly critical to all forms of learning and to what most of us call “Intelligence,” is needed herein (as some sort of “internal state memory” - perhaps something like an internal film) for the processing system to trace its own internal changes, and become able to recognize itself, over time. In this way an awareness of an omnipresent “Self” can build up through aggregation, whereas if in every cycle of the system the world is “ever-new” (green dead ahead / fly to the green / BONK <pain> / green off the port forward quarter / turn and fly to the green / BONK <pain> / green off the starboard side / turn and fly to the green / BONK, etc.), well, there will be perception (of external inputs) and decision-making and action, but there will be no continuity of perception (ESPECIALLY of internal states) and so no chance for a concept of Self to evolve. Flies and computers largely lack this kind of processing.

  2. Ah yes, language… If by Consciousness we mean “that which ‘departs’ under general anesthesia or after a crack on the noggin,” then yes, that IS different from SELF-Awareness. I daresay all animals from at least amphibians on up have this definable Consciousness/wakefulness-as-opposed-to-sleep/hibernation. And we could all address the topic of the waking mind vs. the dreaming mind, but I have to say that the question, “Where does your consciousness [awakeness] go when you sleep” strikes me like the question, “where does your lap go when you stand up?”

I believe the original question was, “Who am I? or Who is the knower? or What is consciousness? (All the same question, roughly.)” I don’t mean to play semantic games; the questioner himself draws the link from awareness of the “I” to (self-) consciousness, and clearly limits the context of the discussion to Consciousness as synonym for Self-Awareness, as in, “the unexamined Life is not worth living [examination of the state of Self as tool to perception, understanding, and ultimately satisfaction]” and maybe, “Man, did you see that play? He was unconscious! [~Zen or Dao(?), and one unit’s functioning in the larger system]” If anyone can propose, in this context, a clear difference between Consciousness and Self-Awareness, I would be interested.

As for a reductionist-logic requirement to classify all objects as having this property (consciousness) or not having it, it ain’t that simple. This is a spectrum, and it becomes as difficult to define at the edges as it is to draw the dividing line between yellow and green. Pure yellow and “pure” green (50% yellow and 50% blue?) may be as “easy” to define as saying humans have consciousness and lobsters don’t, but when you get down into the fine mixtures, where do you draw the line? On which day did your child, or you, become recognizably “adult,” and can anyone be certain there was a dividing line day? Otherwise, must we doubt our adulthood?

  4. Nope, didn’t mean to suggest complexity equals consciousness. Correlation is not causality. Everything in the Universe is connected – but not all the connections are significant to a given analysis. Nor is consciousness automatically resultant from complexity. The Universe is pretty complex - taking all levels of analysis at once it’s the most complex “system” possible - but I would not argue it is Aware. Rather, I would say that complexity Of A Certain Type – in this case, information processing capacity – IS requisite to development and maintenance of consciousness. It is the medium, if you will, on which the pattern may or may not be drawn.

A mountain, or an ecosystem, is complex, but that’s the wrong KIND of complexity. Or, if you will, the system being analyzed has been inappropriately limited, and the result is that the emergent property sought is not seen. To make an analogy to an earlier example: if I examine a transaxle (alone), or a Ford Motor Corp. factory, or the structure and communications of DaimlerChrysler’s human resources department, I will find lots of complexity - but I will not find Locomotion. To do that, I have to limit my analysis to one of the cars coming out of that Ford factory.

In doing any analysis of complexity one must define the boundaries of the system that will be examined. As an early systems theory philosopher put it (I’m paraphrasing), “There are many ways to skin a cat, but first you have to decide which part is cat and which part isn’t.” In analyzing Mental Existence, say, we might decide that we will examine individuals in the abstract (THE Personality, not A Personality), brain functions from the whole organ down through the neurons down to the neurotransmitter level, and Mind States (Consciousness, Affect, Temperament, etc.). Then we observe, theorize, test, conclude as may or may not be possible.

Just As Importantly, by doing this we are saying (among other things) that we have decided that Society is a level of analysis we can safely ignore for this exercise, that both the quantum mechanical descriptions and the chemical reaction descriptions of how a given neurotransmitter type bonds to a given receptor type are unimportant (but the gross Effects ARE important; we included them above), that personal historical experiences – which undoubtedly shape us all – are unimportant to the given larger question, that examining the solar system, the Milky Way and the Local Group will not add anything of value to our analysis, ad nauseum.

It may later become apparent that we erred in not including certain linkages / subsystems / supersystems, and it may be that we erred but did not realize it. As a result, we may not discover anything useful or reproducible. But recognize that framing the question can be critical, and conversely, in systems analysis, descriptions that are found to apply to a given level of analysis of a system are not negated merely by changing perspective. (Thus, psychological insights and drug therapies which work wonderfully with individuals may be inappropriate for application to the behavior of social institutions. Yet individuals and social groups ARE connected, and do influence each other through various kinds of feedback.)

So, are ant colonies complex? Yes. And the hand is complex in many ways, including at the cellular level. Both systems are highly interconnected from certain perspectives (neuromuscular, metabolic), but neither becomes self-aware. Even though they demonstrate internal feedback, they lack the information processing capacity of the right kind.
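The “ever-new world” cycle from point 1 above (green dead ahead / fly to the green / BONK) lends itself to a toy sketch. This is purely illustrative, and every name in it is made up: a memoryless reactive agent versus one that keeps an “internal film” of its own past states, which is the minimum machinery needed to notice a pattern in one’s own history.

```python
# Toy sketch only; all names here are hypothetical illustrations.

# A memoryless reactive agent: each cycle the world is "ever-new".
def memoryless_fly(percepts):
    actions = []
    for p in percepts:
        actions.append("fly toward green" if p == "green" else "wander")
    return actions  # no trace of past cycles survives anywhere

# An agent with "internal state memory": it keeps an internal film of
# its own past, so it can notice a pattern in its OWN history.
class RememberingAgent:
    def __init__(self):
        self.history = []

    def step(self, percept):
        self.history.append(percept)
        # The persistent history is what lets the agent inspect itself:
        if self.history[-3:] == ["BONK"] * 3:
            return "stop: something about ME keeps producing BONK"
        return "fly toward green" if percept == "green" else "wander"

agent = RememberingAgent()
results = [agent.step(p) for p in ["green", "BONK", "BONK", "BONK"]]
print(results[-1])  # the remembering agent breaks out of the BONK loop
```

The only difference between the two agents is the retained history; there is no special “consciousness organ,” which is the point being argued above.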

Sorry, JoeyBlades, forgot to address your interesting example of the world with only one person. (For a very interesting if flawed view on that, watch the Australian movie The Quiet Earth. The final scene really got me.)

I assume that this world would be like ours, not some featureless void. How about other animals? In either case, the individual could draw the line between Self and NotSelf*; it would be the defining of the nature of Self that would be difficult. With no other proper examples for guidance, it might be sort of like legends of the Wolf Boys.

But Awareness? Of a simple kind, I’d venture to say yes. Animalistic by our standards, but let’s not be too snobby. What would be missing would be all the other things we unconsciously assume are Human. We would almost certainly decide this person was Insane, were we to meet hi/r, and we might not be able to establish the kind of communication that says to us, “There is a like being behind those eyes.”

  • I previously used Other, but that implies similarity, so perhaps NotSelf is better?

David,

You wrote:

I agree, this individual might develop a conceptual distinction between Self and NotSelf. I’m just not convinced that it would be necessarily so. My point really was that this distinction may not be requisite for consciousness… In other words, let’s not confuse awareness with self-awareness.

BTW
What’s up with that “other” conciousness [sic] thread?

DavidForster has greatly improved on Cecil’s shorter response, which merely scratched the surface. I agree that together with Life, Self-Awareness is necessary and sufficient to produce consciousness (=“the recognition by the thinking subject of its own acts or affections” - Hamilton).
As to whether complex systems such as ant colonies or Human Resource departments can exhibit self-awareness, I don’t know that I am knowledgeable enough about such matters to provide a learned opinion. But who cares, my guess is yes. The reason is obvious: we are all complex systems composed of billions of individual, independent, un-conscious cells.
Language, physical/tactile consciousness, and so on are probably unnecessary.

A couple of good resources to follow up are:
Erwin Schrödinger’s book, “What Is Life?”
Stanford Encyclopedia of Philosophy on the web, http://plato.stanford.edu/contents-unabridged.html
Alan Turing’s paper, “Computing Machinery and Intelligence”, in which he proposes the Turing test, http://www.abelard.org/turpap/turpap.htm. Too bad Turing couldn’t have written this column.

The state of non-bliss implies some knowledge.

The question of consciousness does not hinge upon whether or not we can create a machine that mimics our subjective experience. Consciousness is not thought, per se, but rather the awareness of awareness, the awareness of thought–the presence of an internal monologue. As Descartes’s wife said, “I think I think, therefore I think I am…I think.”
Computers are organized in a fundamentally different manner from the brain. The former are largely linear systems; the brain is massively parallel and massively interconnected. Most importantly, the brain exists inside a biologic entity called a body, which provides a cacophony of inputs–sensory, hormonal, metabolic, toxic and traumatic. One cannot divide the brain from the body; they are all of a piece and cannot exist one without the other. Rather than being designed from scratch to spec, the brain is an evolutionary and experiential garbage heap, with layers of obsolescent technologies and routines running beneath the more advanced “Consciousness98” OS. (Hmmm. Rather like running Windows on an Intel box, in a way.)
The higher aspects of consciousness that are most of interest are meta-phenomena of the escalating complexity of the mammalian central nervous system and, in the human lineage at least, go hand in hand with the evolution of language. Nature shows us that when you put enough of something in one place, the rules that govern it often change. Put enough hydrogen in one place–you get a star. Put more and you get a black hole. Put enough macromolecules together and DNA starts replicating itself. Hook enough neurons together in some bipedal prehensile ape-oid fellow and, voila, consciousness.

As previous posters have correctly noted, Cecil did pass the buck on this one. He gave the questioner an irrelevant answer. Straight Dope readers deserve to know Cecil’s opinion on consciousness. Is it epiphenomenal? If so, how does he account for any kind of epistemological justification? How can any sort of truth exist? Contrary to some of the dismissals encountered here, substance dualism seems like the way to go. It is the only way to account for consciousness, which entails a lot more than simple awareness. Any other option would be not only reductionist, but self-stultifying. If consciousness is to be explained only in physical terms, then it follows that we are only a bunch of electro-chemical reactions. This would mean we can never know truth, for we could never know anything at all. Our thoughts would be nothing more than reactions and so would lack any sort of purposiveness or intention or even will. We could not even truly know that we are communicating with each other; our personal electro-chemical reactions just make it seem that we are. Perhaps this is why Cecil avoided answering the question in favor of giving us a history lesson on AI. He doesn’t want to admit to substance dualism out of fear that his credibility may be blown.

Hey, c’mon, folks… Cecil started by saying, what, in 600 words? … and he chose to focus on whether “consciousness” can be simulated; the underlying question is whether/how we could mimic or detect it.

Notice the length of some of the postings here. There are great huge volumes written on this subject, and the conclusion of most is that it is unknowable.

Cecil chose to take the question in a direction that can be answered, by focusing on artificial intelligence.

But CK, this IS why it’s good we have a message board here.
Jill

I realize many of you think this issue tangential to Cecil’s column, and I realize most of you think I’m wrong, but this raises a few questions:

Are any animals aware of the connection between the sex act and the appearance of their offspring? Are any animals aware of their own mortality? Are any animals endowed with free will, the ability to always choose otherwise?

The answer to all is “None save man.”

Animals are intelligent to varying degrees, but only man possesses intellect, that ineffable quality that is prerequisite to consciousness. Other apes might recognize their own features in a mirror or in their progeny, but man alone is aware of the connections of reflected light and reflected DNA characteristics.

No one is given to calling chimpanzees intellectual creatures, nor can it be properly said that they are anti-intellectual, as is certainly the case with some humans. The difference between the brain of man and those of the lower animals is one of kind, not one of degree. Man alone among the animals possesses free will. All other animals lack the qualities of consciousness necessary to choose otherwise. Their behavior is determined, ours is not.

Nickrz writes:

I’ve had to bite my tongue a few times in this thread… but since you ask…

Not a good example… some humans don’t understand this connection.

Also, not a good example… for the same reasoning as above. Also, how do you know that apes (or any other animal, for that matter) are not aware of their own mortality?

Well, I can’t argue with you here because I don’t completely understand your point. What choices are you referring to? Again, how do you know that animals don’t make choices?

Seems by your definition, the young and the mentally handicapped may not qualify as conscious?

What evidence do you offer that intellect is prerequisite to consciousness? What evidence do you offer that only man possesses intellect? If it’s ineffable, how do you define its limits or its identifying criteria?

So in other words, prior to our understanding of these concepts, man could not have been considered conscious?

Not only are you off on a tangent from the original topic, I think you’re on a tangent from your own point… currently you are arguing that only man is educated.

So this argument goes on forever. All it arises from is basically two kinds of people – those that think there are two kinds of people and those that don’t. . . No, I mean, those that fear the loss of (hu)mankind’s worth, or feel humanity is a goner at such moment as it be viewed as part of a continuum of molecular evolution or as its behavior patterns become indistinguishable from those of inanimate artifacts it should construct, i.e., silicon-processor-controlled gizmos or whatever – versus those who aren’t on that trip, hangup, religion, or whatever, and just hang around this universe, having fun conjuring up attempts at such gizmos or just speculating in free thought.

Since Nickrz seems to be labeled ‘moderator’, I guess I should watch what I say here though. . .or my post won’t see the light of day. He may deem it just communicational artifacts due to sunspots.

Well, to me, most of these chasms of argument arise merely by viewing the same things differentially between the dual aspects of the subjective and the objective. My above statements are, of course, subjective. Objectively, there are only correlations of pixel data between posts coming from somewhere in this universe – perhaps an amused chimp with a PC or an organized gas plasma that learned to lase a comm link to Earth.

It seems to me (from both a mentalistic and hardware viewpoint) that behind this objective/subjective bifurcation lies the pragmatics of a complex organism’s (surely, though, not limited to just a human’s) being able to thrust against entropy, in the same lifetime and universe, by use of both a bottom-up synthesis of concepts like sticks and stones and an empathetic, social analysis of complex behavioral concepts too complex to deal with in the bottom-up manner. I would further speculate that one studying the organization of the human or higher-animal brain ought to be able to find some physical bifurcation, in the structure of such organs, which correlates to this split in cognitive method.

But getting back to the posts here, I am particularly puzzled as to the stance taken by Lipochrome, a person who claims quite an involvement with computers and additionally has explored software neural nets, neurobiology and perhaps some sorts of so-called cognitive science. He would seem to defy my simplistic dichotomy of people, since, in spite of his computer and neurocomputational context, he appears to feel computers inherently have no chance for stealing advanced human roles and picks on such things as their hardware-substrate-level means of memory storage, and also the non-existent difference between the existence of something and its simulation to the full extent of the domain covered within a given argument.

In his position, he certainly recognizes that artificial systems which most nearly approach human-style information processing are layered up from the very inhuman organization of their bottom-level silicon substrate. Neural nets, though of course, much simplified from their biological correlates, provide their own level of memory in the weightings of their synapse analogues, leaving the substrate’s inflexible mode of memory irrelevant to arguments over their ability to (excuse the expression) ape humans.
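For what it’s worth, the point about memory living in the synapse weightings can be made concrete with a toy Hopfield-style associative memory (much simplified, plain Python, no claim to biological fidelity; all names are my own for illustration): the stored pattern is never written to any address, yet it can be recalled from a corrupted cue, because it is smeared across the weight matrix.

```python
# Toy Hebbian (Hopfield-style) associative memory, for illustration only.

def train(patterns, n):
    # w[i][j] accumulates correlations between units i and j
    # (the outer-product Hebbian rule; no pattern is stored "at" any address)
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:            # p is a list of +1/-1 unit states
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    # Repeatedly let each unit take the sign of its weighted input.
    s = list(cue)
    n = len(s)
    for _ in range(steps):        # synchronous update, for simplicity
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored], 6)
noisy = [1, -1, -1, -1, 1, -1]    # one unit flipped
print(recall(w, noisy))           # → [1, -1, 1, -1, 1, -1]
```

The recall succeeds even though the substrate (a Python list of floats) is utterly unlike biological tissue, which is exactly the argument above: the substrate’s mode of storage is irrelevant to the net’s level of memory.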

Lipochrome also clearly draws a general, distinct line between existence of something and its simulation. In any given discussion, only a finite range of the complement of attributes of a concept is germane to such discussion. If what is “simulated” involves everything that is at stake in the discussion, that “simulation” also exists as the stereotypic entity upon which the discussion is centered (and stereotypes are what the brain and other neuro nets are all about, “PC” or not “PC”, so to speak). Thus, if the stand-in for a human happens to have (as a single individual or as a stereotype of a genus) only three toes per foot, and the argument is over humanoid complexity or “humanness” of activity/“thoughts” in the brain/mind – one should not discount this stand-in in respect to its capacity for “human” behavior, in short, its “humanity”, on the basis of its lack of two toes per foot. Of course, one may note that, to simpler humans – and in a way, even to more complex ones – a human-looking face painted on a very intellectually dense robot is likely to strike more of a human chord than is such robot’s rendition of human intellectual feats. Then again, many human amputees have far fewer toes than six. Similarly, should the artifact employ, at its lowest level of organization, non-content-addressable memory, this is immaterial. At the lowest level of objective organization of the human brain, memory results from mere molecular bonding, as we apply modern chemical science, though such scientific modeling has little interest to so-called “humanists”.

No one (except Creationists) should argue against the fact that language processing in humans has involved a significant evolutionary change in the wetware of their brains from that of those ancestors they have in common with the great apes. As one with an engineering background who has experienced some capacity for non-verbal innovation (exercised also by artists) – from an introspective view, I highly object to any notion that “humanity” is predicated on an organism’s or mechanism’s ability to manipulate well-defined symbols; in fact, such manipulation can be exemplary of very poor engineering or art, and very commonly, of not very uplifting airheadedness. But some literary academics get really wound up on the humanness of symbol-tossing. (No doubt they would claim that I have done a poor job here.)

One should note that today’s commerce in computers is free-market, except perhaps, in academia and government labs. Open commerce, of course, does not have as a goal the simulation or replacement of human beings. Thus the speed of evolution of more humanoid computer intellectual behavior is not as fast as it would be were we hell-bent on replacing ourselves. There have been a few academics who have announced that they were on such a direct artificial-replication pursuit. I believe they have found themselves rather limited in funding, which may be the reason they have of recent made no great announcements of progress. OTOH, they may have been better at symbol manipulation than at the subverbal talents necessary to their announced goal.

In comparing digitally (or analogically, if you must) implemented centralized information processors with the human brain, it seems to me one has to take into consideration two basic structural-implementation factors – 1) specific patterns of physical organization and 2) brute-force quantity of elements. What’s hanging onto this assemblage and is in its sensory and social/communication-link environment are also part of the formula for the ultimately comparable behavioral results. A human brain directly interacts, of course, with other organs of the body that contains it – in part, in such ways as to sustain its body within its particular sensory and effectory environment, and in that containing others of its kind with which it can communicate; and it is given a lifetime to informationally interact with these, although this may include some rote imprinting, as well as much more complex interaction. Such a brain has some very specialized parts in order to do this and also has somewhat modularly arranged cortices, generalized to varying extent, to adaptively modify its body’s behavior in these and less-apparently applied tasks – such as the instant one. (Occasionally one of the latter may get the species over a hump in its evolutionary contest with its environment, but not often. Like the moderator can say this individual “gushed” and squish its linguistic stain out of existence.)

It has been mentioned that advanced human-simulating / humanoid systems may require time-consuming “raising” within certain environments while attached to appropriate sensors. Of course, some time can be saved by canning some of the results of this on storage media and feeding it to subsequent systems.

I forgot to note, in reference to the “brute-force” component of humanoid manufacture, that the state of the art in either hardware or software neural-net structures, as I understand it, falls many orders of magnitude short of the numerical claims made for the human CPU of something like 100 billion neurons, each interconnected on average to 10,000 of its fellows. This limitation alone restricts humanoids to very minor leagues.
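The arithmetic behind that claim is easy to check. In the sketch below, the brain figures are the ones quoted above, while the size assumed for an artificial net is my own rough illustrative guess, not a measured state-of-the-art value:

```python
# Back-of-the-envelope check of the "brute-force" shortfall claimed above.

neurons = 100e9             # ~100 billion neurons (figure quoted in the post)
fanout = 10_000             # average connections per neuron (ditto)
brain_synapses = neurons * fanout
print(f"brain: ~{brain_synapses:.0e} synapses")

# Assumed (hypothetical) size of a large artificial neural net of the era:
artificial_weights = 1e6
shortfall = brain_synapses / artificial_weights
print(f"shortfall: ~{shortfall:.0e}x")   # i.e., about nine orders of magnitude
```

Under that assumption the brain comes out around 10^15 synapses, so even granting the artificial net a million weights leaves it roughly a billionfold short, which is the “many orders of magnitude” in question.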

Ray’s machine

This is the last statement in a whole series of statements and silly arguments that show you understand nothing (or are being purposely opaque) about the valid questions I raised. If you do not understand “my point” behind the concept of free will or determined behavior, then I suggest you buy a copy of Philosophy 101 and bone up before pooh-poohing things you have no knowledge of. Oh, and while you’re at it, try finding out the difference between intellect and intelligence. I’ll refer you back to the books in my original post.

Egregious attempt at the use of sarcasm to influence my behavior and aspersion cast on my character duly noted.

I was preparing to eliminate all posts in this forum that disagreed with me, but Lo! and behold, much to my surprise and consternation, I don’t have that power in here. Drat! But then again, why would I pass up the opportunity to label as “unadulterated stilted blather” the following uncannily spectacular misuse of the English language?

Nano GIGO. Now, if you’ll excuse me, I have some entropy that needs thrusting against.

Anything Nickrz doesn’t understand, in his little species-centric den containing The Rules, as set out in Philosophy 101 and other sources of limited views of the real world, he apparently conveniently labels “misuse of language”. Maybe he should try to grow out of his knickers and relate to a larger world not based solely on the views of monks and whatever. I agree that his character doesn’t need to be highlighted for clarification; it sticks out there like a well-self-polished sore thumb or bump on a log. If others had stopped at Philosophy 101, or maybe even failed to avoid it, we wouldn’t have any computers here with which to disagree. . .and I suppose we would just have to go at each other with rapiers, right? (Of course, we only got as far as rapiers because some, who thought beyond the pictures of animals on the cave walls, decided there ought to be something more effective than the clubs called out in The Rules of an earlier pictographic Philosophy 101.)

On a different tack, one comment I meant to make is that I don’t know whether any charting of a theoretical course toward producing artifacts having the nature and complexity of human cognition and control, considering the use of layering of the basic hardware and software one can think about via modes of design of today, really relates to a possible realization within a reasonably procurable quantity of the necessary doped-silicon matter and such possibly convergible software as would be needed in order to produce something, no matter how awkward, that would function at a time rate somewhere near comparable to the reasoning of humans. However, such a failure to implement with technologies foreseeable today would not shoot down more refined approaches to such implementation that would not be composed of carbonaceous neurons.

Ray

I wasn’t going to get involved in this thread, but I just have to jump in here.

NanoByte, that last paragraph of yours is practically gibberish. You could have said the same thing in half the space, but instead, you intentionally use large words and overly-complex sentence structure in an attempt to make your argument seem more intelligent.

The aforementioned circumlocutory, periphrastic grandiloquence necessitates ocular review of an inordinate duration, previous to complete comprehension, yet fails in contributing to the import of the text. A request is made from me after reading it, that you, having written the post to which I’m referring, in the future kindly moderate your verbosity.

(In other words: “It takes much longer to read but doesn’t add meaning. Please quit it. Thanks.”)

Uber: a comment. You say that Life and Self-Awareness are necessary and sufficient to produce Consciousness. (I would agree except that, until someone clarifies it better, I see Self-Awareness as being the same thing as Consciousness, which exist(s) in more or less complex form among a variety of species.) While sufficiently-complex information-processing-capability is assumed in self-awareness, when it comes to Life I see it differently – DEPENDING on our mutual definition of Life. (I assume some sort of biological/metabolic process.) I presently cannot imagine Consciousness developing independently of Life, BUT once it has developed I see no reason to assume “Life” continues to be a necessity for the maintenance and transmission, or reproduction, of Consciousness.

Jack Rambo: Yes, I think you put it very well when you point out that computers (and flies, etc.) are largely linear, while we “Thinkers” are massively parallel and massively interconnected. It is that increasingly interconnected (informational) complexity that allows for the development of higher Consciousness, and which is missing in ant colonies and ecosystems and, frankly, corporations. But it may not ALWAYS be missing from society… On good days, I predict that society as an organism, or system, or system of systems of organisms, will achieve “Consciousness of Itself” before a single ‘computer’ does; I think in view of recent developments that it’s less of a leap, especially given the assumption that the base components already embody ‘Conscious’ abilities. But would a component of that system (i.e. you, or me) ever be able to Recognize that ‘Social Consciousness’… whatever form it takes? That may sound flaky, but for those who understand what I have been saying here, I submit that the same principles could apply on a slightly larger scale.

When you point out that, “when you put enough of something in one place, the rules that govern it often change,” it is also as true to say that when one changes the scale / perspective / limits of the system one is observing, the possibilities / behaviors / outputs change. (Think of the following systems – cell, organ, entity, tribe – and you can understand.)

MendozaR: with all due respect, I think that substance dualism is an explanation arrived at by our ancestors who couldn’t understand the question or the playing field, let alone the answers. At the risk of sounding argumentative, I would submit that the standard beliefs in this area are those which are reductionist, as well as ultimately tautological. You say that, “If consciousness is to be explained only in physical terms, then it follows that we are only a bunch of electro-chemical reactions.” There are two ways to answer this, depending on where we draw the boundaries and the nature of the definitions being assumed: 1) Yes, and…?, or 2) not at all; reducing the complexity out of our statement, or definition, of the system APPARENTLY removes the complex property or state being sought, but this is an error of definition, not of understanding.

Are Van Gogh’s or Rembrandt’s paintings “only a bunch of” brush strokes of various colors? Is the screen you are looking at “only a bunch of” vari-shaded pixels on one side of a glass surface, or is there more meaning than this embodied in what you are looking at even now? At one level of analysis, you ARE “just” looking at a bunch of dots. At another level, those dots form patterns (characters, and “words”). At another, they can parse as sentences that embody complete ideas – and at yet another they can instigate formation of associations to all sorts of other ideas and experiences you have encountered throughout your life.

Do those physical phosphor-dots therefore embody another sort of essence, some sort of “Communicative Property” which is slight in any one dot but builds up into a Message in sufficient quantities? No, of course not. It is the PATTERN which is important. And thus it is with Life – it is the structure and metabolism of the body that allows it to move and breathe and reproduce – and thus it is with Mind – it is the Pattern of our (electro-chemically-based) thoughts that makes us.

You also point out that, “We could not even truly know that we are communicating with each other…” Well, DO we know it? Exactly HOW do you know I exist, for example? Heck, even with those I meet face-to-face, I often wonder if we are truly communicating. Given the nature of their responses, I am frequently forced (I could tell you HORROR stories!) to doubt it – EVEN if substance dualism explains both our being there.

Nickrz: Concerning the posting of 6/21 06:50 AM (and WHAT are you doing at that hour? Either you work too hard, or you’re one o’ Them – a Morning Person), I think JoeyBlades has done a good job of reflecting questions back on certain of your questions, assumptions, and assertions, so I will merely add support to his response. (I realize that means little or nothing.) But as for your final paragraph I must heartily disagree. Everything I have argued here is that the mental difference between humans and other animals IS one of degree, NOT of kind – and that, even leaving aside the possibility that our hubris blinds us, the degree of difference can be great enough to lead one to suspect that it is a difference of kind (even if it isn’t).

The human tendency to view invention and innovation as proof of consciousness and higher-order thinking, and to believe that only We innovate and therefore only We are conscious, has the strength of historic dogma behind it, but in reality it is more and more clearly being proved to be a self-aggrandizing myth. A neat little Science News article can be found at http://www.sciencenews.org/sn_arc99/6_5_99/bob2.htm. The truth is, we humans typically don’t see what we don’t want to see, or what we aren’t culturally prepared to consider as a possibility, and in the West this has resulted in our seeing ourselves as shepherds and the literally(?) dumb animals as Lesser Beings. (There is evidence of verbal communication in many species that contains more than pre-programmed, genetically-or-otherwise determined information.)

Concerning animal intellect, one of my favorite animals has always been the female Japanese macaque who not only invented the washing of sandy potatoes in water – a behavior never before observed among her group or species – but who, years later when the researchers started putting rice out on the beach sand, was able to realize that similar behavior could make that food supply useful, too. I’d say that hers is not only a SMARTER solution than some people I’ve known could find, but that it indicates a higher-order understanding, of herself and the universe, than mere determined behavior would allow for. (Or to put it another way, you can argue that her realization – and that of her alone – concerning the generalizability of certain behaviors to other objects and situations was in some way automatic, determined, unconscious, lacking in free will(?), but then I would argue that exactly the same is usually true of many people.)

From the article above, I have to say my favorite was, “a house sparrow [that hovers] in front of the sensor that triggers the automatic door at a bus station and then [flies] inside for food.” This is a bird with a brain the size of – what – an apricot pit? A cherry pit? Think about it – the bird had to somehow associate being in a certain location with the opening of a door AND with the concept of an “inside” of the bus station that exists independent of the door’s state. If nothing else, EVEN if we argue that this might be nothing more than a “lucky” association and/or superstitious behavior (meant in the psychological sense of superstition, an apparently unrelated behavior included in a learned sequence of actions) which turned out to be realistic by pure chance, it still calls into great question the nature of discovery, understanding, and invention among humans. Perhaps innovation is not (usually) a ‘purely intellectual pursuit by the consciousness’ after all.

NanoByte writes:

[snip]

I think Lipochrome is dead on with his assessment of the parallels (or rather, lack thereof) between AI computing systems and the human brain. So add me to your list of people who “defy your simplistic dichotomy”.

Ten years ago, folks in the AI field believed that neural nets were a close model of the way the brain works, but today not many serious practitioners would make this argument. Certainly there are elements of functionality shared between the two, but we don’t have a comprehensive understanding of how the brain works. It’s quite true that there is some massive parallelism going on in the brain, but perhaps it’s not in the way we suspect. Understanding how the human brain works is like assembling a giant jigsaw puzzle: we’ve managed to piece together most of the border and consolidate a few distinct chunks in the middle. We’ve assembled (maybe) 10% of the total puzzle and we’re trying to guess what the picture is…

Don’t let science fiction writers fool you. Neural nets are not the answer. They can do certain specific tasks well, like pattern recognition (still not as good as the human brain), but ask a neural net to add two numbers and it will fall flat on its ass.
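To make that last point concrete, here is a minimal sketch (a hypothetical toy in Python, not any real AI system): a single sigmoid neuron trained by gradient descent to “add” two digits. It learns something close to the sum, but never the exact, symbolic answer a calculator gives.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: "add" two digits; targets are squashed into (0, 1)
# because a sigmoid can only output values in that range.
data = [(a, b, (a + b) / 20.0) for a in range(10) for b in range(10)]

w1 = w2 = bias = 0.0
for _ in range(5000):                      # plain gradient descent
    for a, b, target in data:
        y = sigmoid(w1 * a + w2 * b + bias)
        grad = (y - target) * y * (1 - y)  # d(squared loss)/d(preactivation)
        w1 -= 0.02 * grad * a
        w2 -= 0.02 * grad * b
        bias -= 0.02 * grad

pred = sigmoid(w1 * 3 + w2 * 4 + bias) * 20  # the net's answer to "3 + 4"
print(round(pred, 2))  # close to 7, though only approximately
```

The net interpolates a smooth curve through its examples; it has no concept of addition, so its answers are approximations even inside the training range, and outside it (say, 50 + 50) it fails completely – which is roughly what “fall flat” means here.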

I don’t think we can say. As Lipochrome pointed out, memory is “smeared” across the brain in ways we can’t even fathom. The mechanisms for memory may extend beyond our simple view of electro-chemical reactions between the synapses.

Also, NanoByte writes:

So… you realize you have a problem… I’m sure there are 12 step programs for people with your problem. [wink]

Nickrz,

You wrote:

I believe my points did serve to invalidate your questions, or at least show that your questions have an inherent bias (education). Rather than simply calling me silly, how about attempting some sort of rational argument?

You missed my point. Your “Philosophy 101” will have all the same biases that you’ve been injecting into this discussion. I believe you are working with a flawed notion of free will, but that’s why I asked you to define it more clearly. I believe animals DO exhibit free will. Have you ever watched a dog come to a junction in a road, stop and look left, then right and choose a direction? Have you ever seen a cat refuse to eat the new brand of cat food you bought? Animals demonstrate free will all the time. Sure, you can overlay boundary conditions such as requiring the decisions to be of a moral or intellectual nature, but that gets us back to my point about educational bias. BTW, I didn’t “pooh-pooh” anything. I merely asked how you know that animals (and that would have to be ALL animals, save man) don’t exercise free will.

Now that’s funny. Previously you claimed:

I started to challenge you, at the time, that intellect might not be inherently ineffable, but decided to avoid that argument altogether. Now you claim that you can distinguish between intellect and intelligence…
OK, smart guy. [wink] Let’s hear your distinction.