Consciousness

DavidForster shared a few stories about animal inventiveness… I thought I’d share a few of my own.

There have been quite a number of studies where apes have been taught sign language. Nickrz argues that these animals are merely exhibiting a Pavlovian response. I agree, at least in the early phases of these studies. Most of the teaching techniques center on rewards - not too unlike early learning mechanisms in human communication, BTW. However, in many cases the subjects end up communicating ideas that have no bearing on rewards. If there’s no reward, why would they bother to “communicate”? Still not convinced??? I don’t blame you, but here’s the thing that makes me suspect that apes may have some level of consciousness. In several of these studies, new apes were introduced to the trained groups, but given no training from humans and no rewards for adopting a habit of communicating. Invariably, the trained apes took it upon themselves to train the new apes in this newfound art, and the apes would communicate amongst themselves without humans even being around. I simply can’t dismiss the possibility as easily as Nickrz…

A couple of other examples (not in the same class as the above, but they make me think, anyway):

I used to have a dog; a pug. His food and water dish was in the kitchen beside / in front of the refrigerator. One day my wife and I had the refrigerator door open, trying to decide what we wanted to have for dinner. The door was blocking the dog’s dish. The dog came over and stood for a moment, then lay flat on his belly, extended one paw under the refrigerator door, and dragged his food dish out to the center of the kitchen. If you know pugs, you know this is not a simple feat. They are not built for this sort of maneuver, and he had to gradually scoot his body backwards as he dragged his dish. A casual observer would have deduced that he was trained to do this trick, yet he only did it that one time.

One of my neighbors had a cat that learned, on its own, how to turn on an electric piano and play it. The cat seemed to recognize that it could press certain buttons to change the voice and that different keys would produce different sounds. The cat would routinely go into the study, turn on the piano, and tap out random tunes to entertain itself… The neighbor tried to teach the cat a simple tune, but you know cats… he had no interest in pleasing his master… I watched this cat one day and can attest to three things: (1) the cat clearly and deliberately turned on the piano [it was no accident], (2) the cat then lay down on the back of the piano and tapped along the keyboard at various points, producing sounds, and (3) the cat had no formal training in music theory.

By the way, Jack Rambo, you made reference to the “Consciousness98” OS.

I am not sure I can communicate the extent of my outrage. My mind most certainly is NOT running on a Microsoft product!

At least, not yet.

Alas, gentlemen, I will not have time until Friday to address your questioning of my viewpoint. (Right, like you’re all holding your breath).

No. Actually, I was starting to turn blue… [wink]

Returning in a more rational frame of mind -

  1. You have me on the “ineffable” statement. The word should have been “ethereal.” Substituting the wrong word for the right thought is one of those effable qualities that separate humans from the lower animals.

  2. I have neither the time nor the inclination to launch into a debate over the characteristics of free will vs. determined behavior. Either you understand the concepts or you don’t, imho.

  3. Denying the validity of my question “Are any animals aware of their own mortality?” by stating “Some humans are not aware of theirs” is a wholly specious argument. I refuse to indulge such meandering. (And likewise with the reasoning behind JB’s other “questions”).

  4. I’ll refer you back to my original post and the book citations therein. If any of you are truly interested in this opposing point of view, read them. I dare you. I suspect none of you arguing for lower animal intellect wish to have your ridiculous assertions and ideas challenged by anyone who has the philosophy and facts in ready form, as Dr. Adler does. I don’t need to rewrite his ideas here.

Ta Ta, Gents

Be that as it may, who’s up for a game of volleyball?

{{Are any animals aware of their own mortality?}} Nickrz
How would we know?

{{Are any animals endowed with free will, the ability to always choose otherwise?}}
Yes, absolutely.
{{The answer to all is “None save man.”}}
Sorry, wrong. Indeed, it is one of the brightest red flags there is to go about stating that such and such is what separates man from “the animals.” The difference is that we’re smarter – period.
{{Animals are intelligent to varying degrees, but only man possesses intellect, that ineffable quality that is prerequisite to consciousness.}}
Uh huh, right.

{{ Other apes might recognize their own features in a mirror or in their progeny, but man alone is aware of the connections of reflected light and reflected DNA characteristics.}}
We weren’t always, and many still aren’t.

{{No one has taken to calling chimpanzees intellectual creatures, nor can it properly be said that they are anti-intellectual, as is certainly the case with some humans. The difference between the brain of man and those of the lower animals is one of kind, not one of degree.}}
Any scientific support for this assertion? I’d say a bigger, better-developed cerebral cortex is the main difference, and that sounds like “degree” and not “kind.”
{{Man alone among the animals possesses free will. All other animals lack the qualities of consciousness necessary to choose otherwise. Their behavior is determined; ours is not.}}
Nonsense. Of course animals can choose what they are going to do. Where do you get this stuff?

“Marge, don’t discourage the boy. Weaseling out of things is important. It’s what separates us from the animals … except the weasel.”

Consciousness is a faculty of awareness. Awareness is the ability to perceive what exists.

Overall, true awareness is not passive, but active. Your body reacting to a stimulus is not awareness, because it can react whether or not you are aware of the source, or even the presence, of the item causing the stimulus.

Consciousness is the act of perceiving items you are aware of and correlating them to actions and reactions that you are also aware of, according to definitions that you have created during your experiences.

In short, consciousness is a state of action, not reaction.

Determining whether something possesses consciousness is much more difficult. You need to ask yourself: is this thing simply reacting to what I say or do involuntarily, or did it respond based on a database of preconceived truths it has created from its own definitions?

An example: a computer will always respond to input based on a program at some level. That program is implanted into it, not learned. It is not true consciousness, because the computer did not reach its status by learning data and by trial and error; rather, it was handed to it (as demonstrated in the box example).

A dog, on the other hand, will learn that certain sounds produced by humans are meant to be responded to by certain actions. Now, the dog doesn’t understand English, but the act of correlating the input to the response requires consciousness at some level.
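To make the contrast concrete, here’s a minimal Python sketch (my own toy illustration, with invented names and numbers, not anyone’s actual model): a calculator-style responder whose sound-to-action table is implanted up front, next to a learner that builds its own associations from rewarded trials.

```python
from collections import defaultdict

# The "computer": its responses are implanted, never learned.
FIXED_RESPONSES = {"sit": "sits", "fetch": "fetches"}

def fixed_respond(sound):
    # Anything outside the implanted table simply fails.
    return FIXED_RESPONSES.get(sound, "<no response>")

# The "dog": starts with no mapping and builds one from experience.
class Learner:
    def __init__(self):
        # counts[sound][action] = how often this pairing was rewarded
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sound, action, rewarded):
        if rewarded:
            self.counts[sound][action] += 1

    def respond(self, sound):
        options = self.counts[sound]
        # Pick the action most often rewarded for this sound, if any.
        return max(options, key=options.get) if options else "<tries something>"

dog = Learner()
for _ in range(5):
    dog.observe("sit", "sits", rewarded=True)
dog.observe("sit", "barks", rewarded=False)
print(dog.respond("sit"))     # "sits" - a correlation it formed itself
print(fixed_respond("roll"))  # "<no response>" - nothing outside the table
```

The point isn’t that the learner is conscious; only that its correlation lives in its history of experience rather than in a table someone handed it.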


To deal with men by force is as impractical as to deal with nature by persuasion.

[[A dog, on the other hand, will learn that certain sounds produced by humans are meant to be responded to by certain actions. Now, the dog doesn’t understand English, but the act of correlating the input to the response requires consciousness at some level.]]
Not just responses, but also an awareness that certain things usually mean the existence of other things – e.g., the can opener, the car keys, etc.

Why is it that every time I try to read modern philosophy I get this icky feeling in my brain??? Since Nick insisted that we had to read Adler to understand the nuances of his arguments, I did just that (as much as I could stomach, anyway).

At first I wanted to like this guy because he claimed to be both a philosopher and a scientist and interested in merging the two disciplines. The more I read, however, the more I realized that this was no scientist. He admits to being confounded by the atomic view of the world versus the “tangible, real” world. I won’t go into the gory details, but suffice it to say that he accepts the atomic model of the world intellectually, but is skeptical to the point where he all but dismisses it as an “analytical aspect of reality, not ultimate reality”. Worse yet, he has turned the Heisenberg Uncertainty Principle into a philosophical argument about the nature of reality. He is either waxing metaphoric in ways that I can’t fathom, or he doesn’t really understand the principles…

If there’s one thing I hate, it’s when people declare that something is obvious and therefore beyond proof. Nickrz did this earlier in this discussion, and Adler does it consistently as well. Claiming something to be obvious and thereby dismissing alternate viewpoints is a classic philosopher’s tool. Adler loves to “define” things, especially common words that are sometimes interpreted in a multitude of ways. So in that spirit, I suggest we define the word “obvious”. The definition I like is, “that which is perceived by the large majority of individuals in the same way”. Clearly, by a definition such as this, what Adler claims to be obvious is far from it.

Here are a few quotes from Adler that illustrate the point I’ve been trying to make all along:

All of these arguments carry the education bias (there were dozens more). They do not validate the assumption that only humans have the capacity of intellect.

The capper, for me, in reading some of this Adler stuff was this quote:

Yeah, right. There are more things in heaven and earth, Mortimer, than are dreamt of in your philosophy… How am I supposed to take any of what he says seriously? For the record, Einstein’s famous quote was in reference to the Heisenberg Uncertainty Principle, not quantum mechanics in general, and it was meant to make a humorous point about measurement versus reality. This was, apparently, completely lost on Dr. Adler.

In my book, Adler is just another anthropocentrist, vainly trying to validate human superiority…

I’ve said it before and I’ll say it again. I’m not maintaining that animals do have an intellectual capacity or that they are self-aware / conscious. I’m merely saying that the jury is still out. If Nickrz or Mortimer Adler have some real evidence to add to the case, then I’m listening, but so far it’s been nolo contendere.

The original question was not about animals, but rather how you can tell whether anything is conscious or not, and I stand by my previous definition: consciousness is a standard of comparison of that which we are aware of. Awareness leads to consciousness. Consciousness exists at many different levels - imagination, comparison, evaluation… but overall, the lowest conceptual denominator of consciousness stems from awareness of an action or item, awareness of a reaction or action towards said item, and correlating the connection between the two. A computer cannot make these definitions on the fly unless they are part of its instructional code; once you leave that base, it cannot improvise, and it ceases to function.
Many animal life forms (most, in fact) can make their own action-reaction correlations based on experience, and when faced with a new scenario will choose a reaction based on similarity to definitions of previous actions.


To deal with men by force is as impractical as to deal with nature by persuasion.

As to AuraSeer, I wouldn’t expect to relate well to anyone who would choose a username like that. I certainly don’t defend the long, awkward sentence constructions I used, but my choice of words was certainly not extreme; apparently, though, they were quite outside AS’s domain. I chose the words mostly to outline a large ballpark I felt was more appropriate to this discussion than the more limited ones previously presented. I did not fill in all the niceties between them. I ran on with the complex sentence structure to try to get more complex relationships between them than I could have quickly with simpler sentences, but I certainly agree that, for a proper, formal discussion, shorter sentences with the complex syntax replaced by more human-readable semantic tie-ins between the words would be called for. It’s just that I don’t find that all so easy, given the concepts and relationships I want to express. I was certainly not trying to express whatever AS wanted to hear, obviously. Likewise, I wanted to make some somewhat abstruse points, rather than recount such as the very charming anecdotes of DavidForster and JoeyBlades, which, of course, come across very well in rather simple, everyday language patterns.

(Not sure I believe the musical-cat story, but it was neat. Check the latest Atlantic Monthly for some alternate thinking on dogs. Also, in the pet pig department, here’s an improvement – 2nd article, starting at “BRAVE NEW PIG”:

)

As to JoeyBlades on 6/23:

Of course, I do really recognize that people do mix their paradigms; I’ve been told that there are pretty good biologists who believe in Creationism, and of course I’ve met people who seem to have conflicting outlooks constantly running against each other. I do think the two stances I distinguished, however, are quite useful, or at least seem readily distinguishable, when one wants to lay out the ways different people look at life/matter-energy in the universe and their part in it.

As to what you say about neural nets: I haven’t looked into them or what’s said about them since that decade. Back in '86 to '88 or so, I checked out what was going on in a couple of free-form, regularly meeting groups in Silicon Valley on that subject, and picked up a number of books on it, even trying to read Stephen Grossberg (Boston U.). A couple of decades before that, when it still appeared that we had no idea how the brain worked, and people said stupid things like “You only use 1/10 of your brain,” Pribram at Stanford said the brain worked like a hologram. I didn’t think that shed much light at all on the subject, so to speak. But the basic essence of the organization and processing of information in the brain seemed to fall out very logically from the so-called PDP analysis/synthesis.

What has happened since that has changed this? This way of looking at the cortices of the brain seems pretty good to me, as far as a very insightful overview goes. Certainly all the finer points of the possible types of interactions between the many types of neurons, and the electrochemistry of the many sorts of neurotransmitter molecules, are not yet extracted, and the specialized suborgans of the limbic system and whatever are still puzzles to a great extent. But I’d say we have a good enough handle on the nuts and bolts of the human and other brains for the purposes of the discussion here. What is it you think “serious practitioners” are saying about the brain today that contradicts what they said about it 10 years ago? And what sort of thing do you consider to be missing in the neural-net model that would be necessary to make you feel it relates to the discussion here?

I mean, statements like “. . . we don’t have a comprehensive understanding of how the brain works” bounce off me like ones that say God forbids our knowing such. You say, “. . . perhaps it’s [the ‘going on in the brain’ of massive parallelism] not in the way we suspect.” Well, “perhaps” anything. I don’t know of any basis for thinking our thinking doesn’t go on in basically the manner of neural nets. I don’t think any imaging data refutes this.

You say “the way the human brain works is like a jigsaw puzzle [of] which we’ve managed to piece together most of the border [etc.] – maybe 10% of the total puzzle. . .” I don’t care what percentage is chosen; I think we’ve correctly set out the overall model, much as we can set out the general pattern of the whole genome while still having to fill in a lot of the many lesser details of linkages that correlate to many very specific bits of organs and their functions; the general structure is apparent, and that is all I would think would be needed in this philosophical discussion. Would you expect some kind of thing in the other 90% of details to countermand the general idea of thresholded nodes in a PDP structure?

I never read science fiction. You just bluntly say, “Neural nets are not the answer,” with no explanation given for this statement. Why should I swallow that? And what makes you think neural nets can’t add two numbers? They certainly can – at least as well as humans can. In fact, an artificial neural net can learn to add two numbers without ever making a mistake. Artificial neural nets can perform all logical processing, as well as a lot of the complex processing that is usually not thought of as logical in the narrow sense.
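Since I’m the one making the claim, here’s a toy Python demonstration under a simplifying assumption: a single linear unit (two inputs, no squashing function) trained by gradient descent on random pairs. Its weights settle at 1.0 and 1.0, after which it sums pairs it has never seen, to floating-point precision. Not a proof about all nets; just an existence sketch.

```python
import random

# One linear "neuron": out = w1*a + w2*b + bias
w1, w2, bias, lr = random.random(), random.random(), 0.0, 0.1

for _ in range(20000):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    out = w1 * a + w2 * b + bias
    err = out - (a + b)          # supervised target: the true sum
    # Plain gradient descent on squared error.
    w1 -= lr * err * a
    w2 -= lr * err * b
    bias -= lr * err

print(w1, w2, bias)                    # -> ~1.0, ~1.0, ~0.0
print(w1 * 123.0 + w2 * 456.0 + bias)  # -> ~579.0, on inputs never trained on
```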

And you can think that the lowest level of the implementation of memory in the brain gets involved with microtubules or quantum mechanics and all that jazz, but there’s no reason I would want to believe any of that. That’s the sort of thing that is science fiction.

And as to 12-step programs. . .I think some other people ought to get reprogrammed with algorithms that have quite a few more steps, in order to sort some of this out a bit better. :wink:

As to free will versus determinism, how many millennia can that squabble drag on for? My regard for it simply ends with the viewpoint that any behavior in an organism or mechanism or whatever can be looked at as the result of either free will or determinism, according as one sees it objectively as a probabilistic outcome, or sees it subjectively/empathetically as a teleological thing such as the observer feels (s)he exerts. Maybe my “problem” is that I have never seen any auras.

I do love JoeyBlades’ trapping of the ineffable difference between intellect and intelligence. My effability quotient, as in the case of yours, I guess, does not allow me to separate the cerebration of humans from that of animals in the sense of basic kind rather than merely degree. The degree to which one should recognize “consciousness” in another human, an animal, or a robot just depends on the extent of one’s empathy, seen subjectively, or one’s degree of processing correlation, seen objectively, with any such entity. I can at least conceive of an autistic human raised with a very complex machine coming to feel that the machine is conscious. But what the heck, primitive humans have long considered such things as mountains to be gods in their likeness of consciousness.

And I’ll settle for running DavidForster’s mind on a Linux OS. . .with lots of enhancements, of course.

Actually, as to the notion of consciousness, we have a number of meanings to the term. Some here want to attribute consciousness only to humans and not animals, but I’m sure that at other times, they claim humans are not conscious when they are asleep. But then they have to worry about whether they’re conscious or not when they dream. And what about the guy that just got found guilty of murder, although he claimed he plotted it all out (?) while sleepwalking? Do I assume I have to be conscious while killing someone to be found guilty of murder? And if we allow animals to be conscious, should we execute them for murder? And if you’re in a coma, are you less conscious than when you’re just sleeping? And while you’re cons

<sigh> Nano, you are missing the point entirely. I was merely trying to demonstrate the fact that consciousness is outside the grasp of non-sentient beings. Any organism (or machine, for that matter) that requires information to be input into it, outside its realm of control, to do all of its “cognition” is not really a consciousness. Once you rely on someone else to provide all of your sensory input, and you are regulated by a set of rules, no matter how many “layers deep,” you are still nothing more than a non-thinking entity that regurgitates premade forms, no better than a calculator.

For a true consciousness to be formed, it can’t just be programmed with definite answers and definitions that do not evolve from a common denominator, without limiting its growth and thought potential.

For example, when you are a toddler, you learn, from your parents or your environment, that a flat-topped device with supports is called a table. By then witnessing other items similar to that table, you can group them into the common grouping of table. Now, later you may learn that there are different types of tables, or different styles, but having learned your first definition, you can still identify them as tables by deduction.

A computer only knows that a table is what you tell it a table is. For each new type of table, you have to set new parameters and tell it “this is also a table”; it cannot (currently) make the connection and deduction itself.
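To sketch the deduction I mean - in code only because code is compact; notice that the features and the threshold still have to be handed to the machine, which is exactly my point - with invented numbers standing in for “table-ness”:

```python
import math

# Invented feature vectors: (top_flatness, support_count, height_in_meters).
FIRST_TABLE = (1.0, 4, 0.75)          # the toddler's very first "table"

def looks_like_table(item, prototype=FIRST_TABLE, threshold=3.5):
    # Generalize by similarity to one learned prototype,
    # instead of by membership in a hand-coded list.
    return math.dist(item, prototype) < threshold

coffee_table = (1.0, 4, 0.45)    # a style never explicitly labeled
pedestal_table = (1.0, 1, 0.75)  # fewer supports, still table-ish
giraffe = (0.1, 4, 4.5)          # four supports, clearly not a table

for item in (coffee_table, pedestal_table, giraffe):
    print(item, looks_like_table(item))   # True, True, False
```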


To deal with men by force is as impractical as to deal with nature by persuasion.

NanoByte,

You wrote:

Who can say? Neural networks were invented through studies of the brain. The inventors looked at human brains and saw that the neurons were organized in a network-like fashion. Clearly the neurons were not communicating with binary weightings between nodes. So we dreamed up a computer model based on a theory of how the brain might be processing information. This model EMULATES the way we BELIEVE the brain processes CERTAIN KINDS of information. Nevertheless, it is merely a model - and an incomplete model, at that. We don’t have the scope here to cover this incompleteness… plus it takes us way off topic.

Well, I’ve not been exposed to every configuration of neural net that has ever been devised, so I won’t say your claim is impossible… However… neural networks are like filters. They take inputs, the inputs propagate through the network, sometimes feeding back, and eventually produce an output. Like filters, neural nets are lossy. Every neural network I’ve ever seen suffers from the same shortcoming: the more you train it, the more generalized your outputs become. In real-world neural net applications, this kind of generalization is acceptable because the number of outputs is small relative to the number of inputs. In mathematics, this kind of generalization will result in a lack of precision. I am highly doubtful that there are neural nets capable of even simple arbitrary math with any real precision. If you know something to the contrary, I would be grateful if you could point me in that general direction.
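To illustrate what I mean by lossy, here’s a throwaway numpy sketch (my own toy, with invented sizes and learning rate): a tiny sigmoid network trained to add pairs drawn from [0, 1]. Inside the training range it lands near the right sums; hand it 40 + 2 and the saturating units return nonsense, where an ALU would be exact.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 2))   # training pairs in [0, 1]
y = X.sum(axis=1, keepdims=True)        # targets: their true sums

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # sigmoid hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # linear output

def sig(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))   # clip avoids overflow

for _ in range(5000):                   # plain full-batch gradient descent
    h = sig(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    dh = (err @ W2.T) * h * (1 - h)     # backprop through the sigmoid
    W2 -= 0.5 * (h.T @ err) / len(X); b2 -= 0.5 * err.mean(0)
    W1 -= 0.5 * (X.T @ dh) / len(X); b1 -= 0.5 * dh.mean(0)

test = np.array([[0.3, 0.4],            # inside the training range
                 [40.0, 2.0]])          # far outside it
print(sig(test @ W1 + b1) @ W2 + b2)    # close to 0.7, nowhere near 42.0
```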

Getting back to your real question: How is the brain different from neural nets?

The principal differences that I can think of off the top of my head are:

(1) The brain is capable of procedural computations.
(2) The brain is capable of training itself with no new inputs, through inference.
(3) Neural pathways in the brain are dynamically reconfigurable (see the sketch after this list).
(4) The brain has dynamic access to memory.
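On point (3), a quick sketch of what I mean, assuming a bog-standard feed-forward ANN (my own toy code, nothing canonical): the weights adapt under training, but the wiring diagram is fixed the moment the net is built; nothing in the model grows or prunes a pathway afterward.

```python
import numpy as np

class FixedTopologyNet:
    """A plain feed-forward net: weights can change, wiring cannot."""

    def __init__(self, sizes=(4, 8, 2)):
        rng = np.random.default_rng(0)
        # The layer structure is decided here, once, and never again.
        self.weights = [rng.normal(0, 0.1, (m, n))
                        for m, n in zip(sizes, sizes[1:])]

    def forward(self, x):
        for w in self.weights:
            x = np.tanh(x @ w)   # the same fixed path on every call
        return x

net = FixedTopologyNet()
print(net.forward(np.ones(4)))   # training would alter weights, not wiring
```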

I tend to agree… at least until we have a reasonable definition of consciousness that is more than a mere circular reference to awareness.

I draw a distinction between recognizing consciousness in an entity and that entity actually possessing consciousness. On one hand, I think it entirely possible for an entity to be conscious, yet we fail to recognize it. On the other hand, I think it entirely possible that we may one day find entities (perhaps man-made machines) in which we recognize all of the characteristics of consciousness and thus deduce that they have consciousness, yet the consciousness might be mere illusion. Waxing a bit philosophic here: only the conscious can know that they are thus. Or in other words, only I can attest that I am conscious; you can merely assume that I am, or not, by my actions.

Ahh… you weren’t paying attention. This is precisely what Nickrz and Mortimer Adler and others of that ilk maintain.
[Warning: Sarcasm Ahead. Please keep to the left]
Furthermore, every time a non-human demonstrates a behavior that was previously considered patently human, the rules change and the bar is raised for that poor unintelligent creature… Of course, the bar goes up for humans as well, and we have to start excluding some subclass of homo sapiens from humanity… but that’s a small price to pay to preserve our egos.

Profounder words were never spoken…

{{(We’re all in this apart.) }}
Buttle?

As to BurnMeUp:

I’m sorry, but I cannot read, out of a good deal of this post of yours, anything that adds up to saying anything at all; and in those cases where you do say something about computers/mechanisms, it does not at all represent the present state of the technical art. To wit:


/
Nano, you are missing the point entirely. I was merely trying to demonstrate the fact that conciousness is outside the grasp of non-sentient beings.
_______

What does this say? That you are defining non-sentient beings as not having consciousness? My dictionary defines ‘sentient’ as “Having sense perception; conscious.”


/
Any organism (or machine, for that matter) that requires information to be input into it, outside its realm of control, to do all of its “cognition”
_______

Now how the devil am I supposed to make anything of that phrase (from the first comma to the second comma, which is missing at the end of what’s written above)? If you’re trying to say that software today is not sophisticated enough to get its own information from sensors (= senses), and to modify its routines to ever more sophisticated complexities dependent upon what it senses, be it weather parameters, where its wheels are, or what you’re telling it, you are at least some 50 years out of date.


/
is not really a consciousness. Once you rely on someone else to provide all of your sensory input,
________

Machines can be set up to acquire whatever portion of their sensory input is appropriate. You must realize that.


/
and you are regulated by a set of rules, no matter how many “layers deep,” you are still nothing more than a non-thinking entity that regurgitates premade forms, no better than a calculator.
_______

You are here simply defining “thinking entities” (which presumably is supposed to include you) as not analyzable into layers of control. Look, take a squid, whose nervous system, I believe, has been totally mapped as a few tens of thousands of neurons. You set genetics going – OK, you have to backtrack a bit, because we’re not direct descendants of the squid, but you get the idea: your brain is nothing but some 100 billion neurons functioning on layers of control built up from your past experience slammed onto a genetic kernel. You think you have magic in your head? So we haven’t built robots anywhere near as sophisticated as our brains, but that just makes it an issue of degree. Robots today can train to a pretty sophisticated level. How are you to pick a magic number of layers of control below which an entity is not “sentient” or “conscious”?


/
For a true consciousness to be formed, it can’t just be programmed with definite answers and definitions that do not evolve from a common denominator, without limiting its growth and thought potential.
_______

For something like 50 years, computers haven’t had to be programmed like that. Their programs can flex just like your neurons, particularly if they have ANNs (artificial neural nets) within them. And sensors connected to them can influence such flexing, and effectors can affect what’s out there to be sensed. . . just like yours do. . . or better. :wink:


/
For example, when you are a toddler, you learn, from your parents or your environment, that a flat-topped device with supports is called a table. By then witnessing other items similar to that table, you can group them into the common grouping of table. Now, later you may learn that there are different types of tables, or different styles, but having learned your first definition, you can still identify them as tables by deduction.

A computer only knows that a table is what you tell it a table is. For each new type of table, you have to set new parameters and tell it “this is also a table”; it cannot (currently) make the connection and deduction itself.
_______

No way! What I’ve been telling you is that computers can figure all those same sorts of things out. It was done way back with conventional AI, and has since been done much better with ANNs. I forget the various names used for such organizing, but it’s all just stereotyping. And the computer can come up with some other categorization, instances of which you might, in some of its forms, claim are tables, while in others of its forms, desks, say – if that computer thinks it can do a better job of dispensing office equipment, say, with such cross-categorization. From what I see, I can’t imagine there is anything you could come up with that would seem a reasonable disqualifier of the robot capabilities of today from “thinking” – whatever way you would define such a process.
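One compact example of the sort of self-organizing I mean (make of it what you will; the feature numbers are invented): plain k-means clustering over furniture feature vectors. Nobody tells the machine “table” or “desk”; it partitions the items itself.

```python
import numpy as np

# Invented features: (surface_area, drawer_count, height). Not real data.
items = np.array([
    [1.2, 0, 0.75], [1.0, 0, 0.72], [0.6, 0, 0.45],   # table-ish things
    [1.5, 3, 0.76], [1.4, 4, 0.74], [1.6, 2, 0.78],   # desk-ish things
])

k = 2
centers = items[:k].copy()
for _ in range(20):                       # standard k-means iterations
    # Assign each item to its nearest center...
    dists = np.linalg.norm(items[:, None] - centers[None], axis=2)
    groups = dists.argmin(axis=1)
    # ...then move each center to the mean of its group.
    for j in range(k):
        if (groups == j).any():
            centers[j] = items[groups == j].mean(axis=0)

print(groups)   # two categories the machine invented, never named by us
```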


As to JoeyBlades:

“Who can say? Neural networks were invented through studies of the brain. The inventors looked at human brains and saw that the neurons were organized in a network-like fashion. Clearly the neurons were not communicating with binary weightings between nodes. So we dreamed up a computer model based on a theory of how the brain might be processing information. This model EMULATES the way we BELIEVE the brain processes. . .”

They have made various kinds of analog nodes also. I don’t care if they make them out of K’nex pieces and they run at turtle speed. You are apparently defining ‘think’, what can be said to be ‘conscious’, etc. simply as what is built exactly like it is in humans, or at least as in the highest animal you want to empathize with. It seems to me that people more naturally “think” about “thought” as a functional process, and that’s how I would like to talk about it. Defining it as what natural neurons do is like defining ‘heading a state’ as what a president does. . .and then, in the case of the present such entity, finding out you have to include. . .well, playing the saxophone.

Granted that no ANNs (at least that I’ve heard of) are today anywhere near as complex as typical human neurons. Maybe you would claim that a phonograph, or a hunk of semiconductor memory containing musical information in MP3 format run by software in an electronic gizmo with a speaker or earphone, cannot play music; only humans can play music. Guns are too unconscious; only people kill people. . . right? WRONG! Open manholes kill people without “emulating”, “simulating” or otherwise mimicking death wishes. But if you can empathize with a gun or a manhole. . . I say the objective functionality is only a matter of degree, and an MP3 player comes pretty damn close to the equal of the functionality of the human in playing music; yet the implementations of the corresponding renditions – in a case where the human plays a didgeridoo and the track on the MP3 player is recorded from a didgeridoo played either by a human or an automaton – are exceedingly different. So I would say ‘thinking’, as with ‘playing music’, can be seen to have an objective meaning sharable with whatever provides an equivalent functionality to that of the human, and ‘emulation’ or ‘simulation’ or ‘aping’ serve no purpose as interposed modifiers. It’s just that the MP3 player’s music playing would be seen at present as at a high level of the art, while the computer’s/robot’s level of the art of thinking would vary in its level of equivalency to that of humans, depending on what kind of thinking is being thought of.

And I think of human thinking as including thinking that involves only unemotional thoughts, like adding two numbers or deciding whether it’s faster to get from SF to NY via Chicago or via St. Louis, or whatever. Granted that computers today don’t have to eat to replicate their cells, hide from preying machines, or engage in any activity to replicate their wholes – bioimperative activities which, in humans, have required emotional accessories in the brain.

Given the above, one would be faced with an attempt at empathization with the computer to get to its ‘thinking’ as defined sub

NanoByte,

You wrote:

I don’t think you caught my point, but it was probably a bad toss on my part. No, I don’t think that a sentient machine has to have the same organization as the human brain. I’ve already said in this thread that I don’t necessarily agree with Searle’s Chinese room argument, and I’ve poked fun at the anthropocentrist philosophers. The point I was trying to make is that I believe a fundamental element of the brain’s organization is the ability to reorganize. Today, while we technically know how to build ANNs that could reorganize, we don’t have a clue how to do this in a predictable and productive way. Therefore, as a model of the human brain, ANNs fall short of the mark. I am careful not to extend this observation to suggest that this spells doom for all of AI, or even for the future of neural nets. Perhaps we will someday improve ANN models by adding mechanisms for them to restructure themselves. I’m not ruling anything out…

No, but we’re talking ANNs here, which are not procedural.

No. ANNs require new inputs and expected outputs in order to be trained. They never hypothesize and test theories to learn new things of their own volition.

As I’ve already said, I agree that this is feasible, from a technical standpoint. However, today we don’t have a grasp of the mechanics or logic required to do this. Could it happen, eventually? I think it probably will.

Again, reminding you that I’m talking about neural nets here. The memory in a neural net has no discernible organization that would allow you to extract conceptual data. Sure, you can address the data, but using this data to try to make an observation about why an ANN made a particular choice is not practical.

In these last few posts I’ve only been maintaining that ANNs do not provide the total solution. I don’t maintain (or even believe) that it is inconceivable that machines might someday become sentient (I don’t think it’s likely, but it’s not inconceivable). Certainly, if we could combine many of the modern computing technologies in just the right way, we could probably fool most people into thinking that we had an intelligent machine. ANNs for vision and other types of pattern matching, fuzzy logic for quick decision making, natural language parsers with conceptual dependency for communication, a conventional ALU for straight math and procedural logic, some sort of central processor that could partition the tasks and transform data from one system to another, mix in a bit of OOP, shake vigorously, let stand for 30 minutes… Voila! The Vegamatic 2001. It slices. It dices. It helps your kids with their homework. It feeds the dog. It watches TV. It refuses to do windows. Hey, it must be intelligent!

I see your point. I’ve been limiting my assumptions to the observable universe as we know it (and that’s not sarcasm). If you and I were connected in some psychic manner, then you could possibly KNOW my consciousness.

Finally, you asked:

Good point. Model was, perhaps, a poor choice of words. Do you prefer “parallel” or “analogue”? I’m open.

[[Look, take a squid, whose nervous system, I believe, has been totally mapped as a few tens of thousands of neurons. You set genetics going – OK, you have to backtrack a bit, because we’re not direct descendants of the squid, but you get the idea: your brain is nothing but some 100 billion neurons functioning on layers of control built up from your past experience slammed onto a genetic kernel.]] Nanobyte

Not questioning your neuron count, necessarily, but it’s been pretty well demonstrated that squid and other cephalopods are highly intelligent vis-à-vis their cousins in the higher mammal kingdom (for example).

Just sticking up for the poor, forgotten cuttlefish.

I somehow overlooked this lame cop-out:
{{1. You have me on the “ineffable” statement. The word should have been “ethereal.” Substituting the wrong word for the right thought is one of those effable qualities that separate humans from the lower animals.}} Nickrz
In other words, we’re quite a bit smarter.

{{2. I have neither the time nor the inclination to launch into a debate over the characteristics of free will vs. determined behavior. Either you understand the concepts or you don’t, imho.}}
Don’t flatter yourself that disagreement with you means a lack of understanding. There is no question, as a matter of raw fact, that higher mammals (at least) have the ability to choose what they will do, as a general proposition. Example: dog sees cat; dog’s ears perk up; dog takes an excited step toward the cat; human says “no”; dog refrains from chasing the cat (or the dog blows it off and chases the cat anyway, with full knowledge of what “no” means).
{{3. Denying the validity of my question “Are any animals aware of their own mortality?” by stating “Some humans are not aware of theirs” is wholly specious argument. I refuse to indulge such meandering. }}
You just like to posit unprovable conditions for debate. <g>
{{4. I’ll refer you back to my original post and the book citations therein. If any of you are truly interested in this opposing point of view, read them. I dare you. I suspect none of you arguing for lower animal intellect wish to have your ridiculous assertions and ideas challenged by anyone who has the philosophy and facts in ready form, as Dr. Adler does. I don’t need to rewrite his ideas here.}}
Someone who goes around prattling about this and that being “what separates man from the animals” really ought to be careful about labeling those who disagree “ridiculous.” Capisce?

Personally, I think indoor toilets are what separate man from the animals. Animals dump in anxiety, vulnerable to their enemies; man dumps in comfort and solitude, turning the event into a religious experience (excremeditation).

Well, JB, congratulations. You are the only person I have ever persuaded to read Adler. Thank you for that kind consideration of my opinion. It’s a pity you decided to pick out a few nits you disagreed with and failed to see the whole of his arguments. (You refuted none of the statements you quoted; you just termed them “education biased” (whatever that means), as if that somehow negated the truth inherent therein.)

It’s also a pity you either did not read, or chose not to discuss Dr. Adler’s refutation of animal sign “language,” which is by far the most convincing proof for my original argument extant in his works, the argument I originally juxtaposed against Cecil’s comments.

The capper for me was your capper. It demonstrates your almost complete misunderstanding of the last Adler quote:

To which you reply:

I find your reply absolutely baffling. I cannot but think you do not know what philosophy is, my friend.
Or science, for that matter.